Building a Simple Bottle to Extend FubuMVC

WARNING:  I dashed out this code *fast* just for the blog post.  All the same, the code referenced here is on GitHub.

If you follow FubuMVC at all you’ve surely seen the term “Bottles” being tossed around in regards to extensibility and modularity in FubuMVC applications, but not a lot of detail about what it is or how to use it.  While I think that most people focus on using Bottles as a way to split functional areas of your application into separate projects or even code repositories, you can also use Bottles to create shared infrastructure extensions to FubuMVC — and that’s the topic of this blog post.

I wanna use a DatePicker control for all Date fields!

The easiest example I can think of is to build a Bottle that “drops in” an html convention to make every DateTime property whose name ends in “Date” be edited by the jquery.ui datepicker control, implemented by the new FubuMVC.DatePicker project I threw together specifically for this blog post (as in, just a demonstrator, not tested very hard).  We really do want this to be completely “drop in” with no other configuration necessary, so the bottle is going to have to contain its own JavaScript assets and have a way to inject both the html convention policies and assets into the base FubuMVC project it gets added to.  So, how do we do that?  Well first, let’s…

Grab Some Dependencies

The FubuMVC.DatePicker project depends on a couple of other FubuMVC related nugets:

  1. FubuMVC.Core – duh.
  2. FubuMVC.Core.UI – the html convention support for FubuMVC applications among other things
  3. FubuMVC.Core.Assets — the asset pipeline for FubuMVC.  Conveniently enough, this Bottle comes with *a* fairly recent version of jquery just in case the application doesn’t already contain jquery.
  4. FubuMVC.JQueryUI — integrates jquery.ui into a FubuMVC application and also contains a default version of jquery.ui and a jquery.ui theme.  All of the assets can be happily overridden in the application, so don’t think you’re stuck with whatever comes in this Bottle.

Now, we’ll add the…

Client Side Script Support

We’re using the jquery.ui datepicker plugin for the client side date picker support because it’s simple to use.  I’ll do the client side activation with a new file called “DatePickerActivator.js” added at /content/scripts under the root of our FubuMVC project:

DatePickerActivator.js
    $(document).ready(function () {
        $('.datepicker').datepicker();
    });

The code above isn’t going to help when you’re using client side templates in a “Single Page Application” type approach, but that’s beyond the scope of this post, so we’ll call it good enough for now.  Now, DatePickerActivator.js does have some dependencies, so let’s declare those to the asset pipeline with a file called “datepicker.asset.config”:

DatePickerActivator.js requires jquery, jqueryui, jqueryuicss

Couple things to note up above:

  1. Any file named “*.asset.config” is picked up by FubuMVC’s asset pipeline and interpreted as asset configuration
  2. The asset pipeline supports a small textual DSL to express dependencies, asset groups, and other asset rules
  3. “jquery” and “jqueryui” are aliases that the asset pipeline will resolve into an actual asset name.  In the asset pipeline library, there’s another declaration:  “jquery is jquery-1.8.2.min.js” so that asking for “jquery” resolves to “jquery-1.8.2.min.js” at runtime.  This was done purposely to make it easier to upgrade javascript libraries that embed versions into the file name without breaking the rest of the application.  (There’s a combined sketch of the DSL just after this list.)
  4. Anytime we request “DatePickerActivator.js” in a view, the asset pipeline will ensure that jquery and jquery.ui libraries are also added into the page.
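To make the DSL a little more concrete, here’s a sketch of a slightly fuller asset config file combining the dependency rule with the alias declaration quoted above.  The file name is illustrative, and I’m assuming the alias syntax can live in your own *.asset.config file just as well as in the asset pipeline library:

datepicker.asset.config

    jquery is jquery-1.8.2.min.js
    DatePickerActivator.js requires jquery, jqueryui, jqueryuicss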

Html Convention Support

Next we need to build out the actual html convention policy in the following code:

DatePickerBuilder
    public class DatePickerBuilder : ElementTagBuilder
    {
        public override bool Matches(ElementRequest token)
        {
            return token.Accessor.PropertyType.IsDateTime() && token.Accessor.Name.EndsWith("Date");
        }

        public override HtmlTag Build(ElementRequest request)
        {
            // Add the DatePickerActivator.js into the asset pipeline for this page
            request.Get<IAssetRequirements>().Require("DatePickerActivator.js");

            string text = null == request.RawValue || DateTime.MinValue.Equals(request.RawValue)
                              ? ""
                              : request.RawValue.As<DateTime>().ToShortDateString();

            return new HtmlTag("input").Text(text).Attr("type", "text").AddClass("datepicker");
        }
    }

It’s not a lot of code, but it’s enough to declare when the new html convention applies (properties of type DateTime that end in “Date”), and how to build the html tag for the editor.  The only fancy thing is that this code can inject the script requirements into the asset pipeline.  It’s important to note here that the call to IAssetRequirements.Require(asset name) above does not write out a script tag right there and then; it simply tells the asset pipeline that “DatePickerActivator.js” and all its dependencies (and their dependencies) are required on this page.  Somewhere in the view (typically in the footer), there’s a single call to write the script tags that will emit script tags for all of the pending script dependencies (the asset pipeline can optionally do script compression and combination with the assets injected by html conventions to avoid making extra http requests).
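To make that flow concrete, here’s a minimal sketch of what the tail end of a Spark layout might look like.  The surrounding markup is illustrative; the important piece is the single WriteScriptTags() call (mentioned again in the consuming steps below) that flushes all of the accumulated script requirements onto the page:

    <div id="main-content">
      <use content="view" />
    </div>

    !{this.WriteScriptTags()}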

You do not have to use FubuMVC’s asset pipeline with a FubuMVC application, but it is necessary to make these types of Bottle extensibility mechanisms work.

Registering the new Html Convention

When a FubuMVC application spins up, it searches through all the assemblies loaded by Bottles looking for any concrete class that implements the IFubuRegistryExtension interface, creates an instance of that type, and applies the configuration in that IFubuRegistryExtension to the FubuMVC application spinning up.  In order to apply the html convention class that we built above, we need to add a new IFubuRegistryExtension class to our new assembly:

Code Snippet
    public class DatePickerRegistryExtension : IFubuRegistryExtension
    {
        public void Configure(FubuRegistry registry)
        {
            registry.Import<HtmlConventionRegistry>(x => {
                x.Editors.Add(new DatePickerBuilder());
            });
        }
    }

Bottle-ize the Assembly

Bottles doesn’t go around willy nilly loading every assembly it finds in the application as a Bottle, so we have to do something to mark our assembly as appropriate for Bottles to load it into FubuMVC applications.  The easiest and most common way is to just add the [FubuModule] marker attribute at the assembly level like this code below:

Code Snippet
    using System.Reflection;
    using FubuMVC.Core;

    [assembly: AssemblyTitle("FubuMVC.DatePicker")]
    [assembly: FubuModule]

Modifying the Build Script to Embed Content in the Bottle Assembly

Lastly, we need to embed any kind of content (JavaScript files, stylesheets, CoffeeScript, Spark or Razor views, asset config files) that isn’t part of the C# code into the Bottle assembly.  Bottles does this by sweeping the project for all files that match a set of criteria (the out of the box criteria is anything that isn’t C# code or related to Visual Studio), making a single zip file called “pak-WebContent.zip”, and embedding that into the assembly.  There’s no need to make every single non-C# file an embedded resource, but we do need to call the Bottles functionality to “bottle up” the contents whenever the project is rebuilt.

There’s an executable called “bottles.exe” in the tools folder of the Bottles nuget that you can use to “bottle up” an assembly.  I usually add the call to Bottles directly into the compile step of one of our rake scripts like so:

desc "Compiles the app"
task :compile => [:restore_if_missing, :clean, :version] do
  bottles("assembly-pak src/FubuMVC.DatePicker -p FubuMVC.DatePicker.csproj")

  MSBuildRunner.compile :compilemode => COMPILE_TARGET, :solutionfile => 'src/FubuMVC.DatePicker.sln', :clrversion => CLR_TOOLS_VERSION

  target = COMPILE_TARGET.downcase
end

def self.bottles(args)
  bottles = Platform.runtime(Nuget.tool("Bottles", "BottleRunner.exe"))
  sh "#{bottles} #{args}"
end

That’s actually all of the code, except for a nuspec file to package this up for nuget.  And tests, which don’t exist right now:(.

Consuming this Bottle

Now it’s time to use our new Bottle and apply the html convention for date properties.  You’ve really got just a couple of steps:

  1. Add an assembly reference in your main application to the new FubuMVC.DatePicker library through nuget or however you want to do that
  2. Do make sure that you are calling “WriteScriptTags()” somewhere at the end of your views so that the asset pipeline writes out the script files declared by the new html convention class.  We typically do this by putting that call in the main application Spark layout file.
  3. You might have to manually declare the “jqueryuicss” asset in the head of your view to make sure the “jquery-ui.css” file is present in any view that uses the new datepicker convention.  The asset pipeline handles the scripts pretty well, but the CSS files are a little trickier because they usually get written into the page before the html convention even fires
  4. Use the html convention on a page with the “InputFor” extension method like this in Spark: !{this.InputFor(x => x.SomeDate)} or <Input name="SomeDate" /> if you’re using Spark bindings (not standardized in the FubuMVC.Spark nuget yet, but will be soon).  There’s a small view model sketch just after this list showing what the convention matches.
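Just to show what the convention keys on, here’s a hypothetical view model for that last step (the class and property names are made up for the example):

Code Snippet

    using System;

    public class ScheduleMeetingInput
    {
        // Matched by DatePickerBuilder: a DateTime property whose name ends in "Date"
        public DateTime SomeDate { get; set; }

        // Not matched: the property name doesn't end in "Date"
        public DateTime LastModified { get; set; }
    }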

Stuff I think could be better

  • The javascript activation could stand to be standardized in a way that is more conducive to using the html conventions with backbone, knockout, or something of that ilk.  I’m hoping that my colleague Bob Pace will add in his module/activator magic he uses in his projects as a standard trick in FubuMVC.
  • We need to get a FubuMVC.QuickStart nuget going that bootstraps more of the asset pipeline setup and layout files to get users going faster
  • I’d like to see some enhancements to our “fubu.exe” tool to deal with more of the repetitive Bottle project setup.
  • I’m not a big fan of the way we smuggle the Bottles executable through nuget today.  It leaves us with the ugly hack in the rake scripts to magically find where nuget decided to put the bottles executable.  I know how we’re going to beat this inside the fubu project ecosystem with our ripple tool, but I’m not sure how best to do this for folks not using our build tools.  I’m not willing to accept that Bottles has to be installed on a user box before running the build file.
  • Today, FubuMVC has a limitation that asset files have to be under the /content directory for the asset pipeline.  We’re absolutely committed to changing this in the near term, but it won’t happen before our 1.0 release.  I get aggravated every time I hear somebody say that FubuMVC is just an attempt to copy Ruby on Rails.  Ironically enough, this asset file limitation is the only single thing in FubuMVC that was copied directly from RoR — and everybody, including me, hates it.

What can I do with Bottles to extend FubuMVC?

This is bravado until we have enough documentation and samples to prove it, but I think that FubuMVC has the best story for extensibility and modularity in the entire .Net Web Framework universe — and honestly, I don’t think it’s even close.

So what sorts of extensibility things can you do with the FubuMVC/Bottles combination?  The short answer is every single thing that you can do with a FubuMVC application can be externalized into a Bottle and added into the base application without any seams (except for content extensions in existing views) or special handling in the base application.

“Code Complete” is a polite fiction, “Done, done, done” is the hard truth

This is a mostly rewritten version of an old blog post of mine from ’06, but the content is still important and I don’t see folks talking about it very often.  Don’t you think for one second that I’ve done this perfectly on every project I’ve worked on for the past decade, but it’s still an ideal I’d like to aim for.

Before you flame me, I’m not talking about the canonical book by Steve McConnell.  What I mean is that this statement is a lie – “we’re code complete on feature xyz.”  That phrase is a lie, or at least misleading, because you really aren’t complete with the feature.  Code complete doesn’t tell you anything definitive about the quality of the code, or most importantly, the readiness of the code for actual deployment.  Code complete just means that the developers have reached a point where they’re ready to turn the code over for testing.  You say the phrase “code complete” to mark off a gate on a schedule or claim earned credit for coding work done on a project schedule.  Using “code complete” to claim earned value is tempting, yet dangerous because it doesn’t translate into business value.  It could have lots of bugs and issues yet to be uncovered by the testers.  If the code hasn’t gone through user acceptance testing, it might not even be the right functionality.

One of my favorite aspects of eXtreme Programming from back in the day was the emphasis on creating working software instead of obsessing over intermediate progress gates and deliverables.  In direct contrast to “Code Complete,”  XP teams used the phrase “Done, done, done” to describe a feature as complete.  “Done, done, done” means the feature is 100% ready to deploy to production.

There’s quite a bit of variance from project to project, but the “story wall” statuses that I prefer to use for a Kanban type approach would go something like:

    1. On deck/not started/make sure it’s ready to be worked on
    2. In development
    3. In testing
    4. Ready for review
    5. Done, done, done

The other columns besides “done, done, done” are just intermediate stages that help the team coordinate activities and handoffs between team members.  The burndown chart informs management on the state of the iteration and helps spot problems and changes in the iteration plan, but the authoritative progress meter is the number of stories crossing the line into the “done” column.

That workflow above is a little bit like playing the game of Sorry! as a kid (or parent of a kid about that age).  If you don’t remember or never played Sorry!, the goal of the game was to get your tokens into the home area (production).  There was also a “safe zone” where your tokens were almost to home base, but once in a while you managed to draw cards that forced you to send your tokens back into the danger area.

Just like the game of Sorry!, you don’t “win” at your software project until you push all your stories into a deployable state.

So, how do I use this “knowledge?”

I can’t claim to be any kind of Kanban expert, but I do know that my teams and I bog down badly when we have too many balls up in the air.  We always seem to do best when we’re working a finite number of features and working them to completion serially rather than having more parallel efforts running simultaneously in various states of not quite done.  By the same token, I also know that I’m much, much quicker solving problems in the code I just worked on than in code I worked on last month.  That’s a roundabout way of saying that I want the testing and user approval of a new feature or user story to happen as close to my development effort for that feature as possible.  In a perfect world this translates to more or less keeping the developers, testers, and maybe even the UX designers and customers focused on the same user story or feature at any given time.

Digging into another old blog post, I strongly recommend Michael Feathers’ old blog post on Iteration Slop, specifically what he describes as “trailer hitched QA.”  Right now, I don’t think my current team is doing a good enough job preventing the “trailer hitched QA” problem.  We’re trying to cut into this problem by doing more upfront “executable specifications” to bring the testers and developers onto the same page before working too much on a story.  We’re also changing from a formal iterative approach that’s tempted us into just getting to “code complete” for all the stories we guessed, I’m sorry, estimated that we could do in an iteration, to a continuous flow Kanban style.  My hope is that we stop worrying so much about artificial deadlines and focus more on delivering production quality features one or two features at a time.  I’m also hoping that this leads to us and the testers working more smoothly together.

Presenting at CodeMash 2013

In my continuing bid to rejoin the development world, I’m going to be co-presenting two workshops at CodeMash 2013 with Corey Kaylor and Josh Arnold.

Making Test Automation with Web Applications Work

Let’s assume you’ve accepted the arguments in favor of automating at least part of the testing against your web application and you’ve generally nailed down all the soft fuzzy process and collaboration issues; now you’re simply left with the very hard problem of doing effective automated testing — and that’s what this workshop will concentrate on.  We’re going to be light on software process issues but very heavy on concrete technical problems and solutions.  We’ll talk about how we try to make our automated tests more reliable, faster, more resilient to changes in the user interface, and less work to author.  We will be showing examples using our own .Net and FubuMVC flavored stack of Storyteller2, Serenity, WebDriver, and Jasmine, but I think that the concepts and strategies should directly transfer to other platforms and tools.

Fully Operational FubuMVC 1.0

I’m very consciously using CodeMash as a forcing function to make FubuMVC arrive at a 1.0 release — documentation, new nugets, tutorials, diagnostics and stable API’s.  I think we’re going to be able to make a pretty compelling case for why FubuMVC is worth exploring even in a world crowded with Ruby on Rails, Play, Lift, Node.js, and Sinatra.

If we manage to pull off a healthy fraction of the demos that we have planned, I’m going to do the “Now witness the firepower of this fully ARMED and OPERATIONAL battle station!” thing, but hopefully without getting thrown down an inexplicably placed well by Darth Vader afterward.  Seriously, why was there a giant, uncovered hole right in the emperor’s throne room?

Neither of these workshops will be filmed, but sometime within the next couple of months we will release Camtasia recordings of the same demos as part of our 1.0 release.

Once I’m done with the two workshops I’m going to rest my voice, take in as many talks as I can, catch up with some friends I haven’t seen in quite a while, and just generally mingle.  In particular I’m looking forward to the sessions on Continuous Deployment, client side web development, Clojure, and I want to see the Play framework in action.

See you all there in January.

Abstractions and Models aren’t Infallible

Last week I made a comment on Twitter as a little reminder to myself (link) that you have to occasionally challenge and even change the basic abstractions and domain model of your application.  I was working to extend the new FubuMVC.Authentication project so that we could use Windows and form based authentication on the same application at work.  The core of FubuMVC.Authentication was harvested from a previous project of Josh’s and mine that had much simpler requirements.  I tried for far too long to push the new square shaped functionality through the existing round-holed model.  Once I took a step back and laid out the requirements and how functionality did and did not vary from Windows to form based to Twitter/Facebook/OAuth authentication, the basic abstractions changed quite a bit and I finally finished some work that had me stuck.

In the same week, we started to do some detailed analysis for a new user story that everybody thought could be tricky.  Once we got the business partners to give us concrete scenarios of the problems we faced, my team realized that we flat out have to change the core of our Domain Model and the relationships between entities to avoid turning our code into the kind of thing that makes me snarky on Twitter.  In this case it’s not a terrible thing because it won’t break the existing end to end acceptance tests or even much about the user interaction design.

I think we’re going to be just fine with both the authentication and the Domain Model, but for the hat trick last week, I griped on Twitter about a silly little API usability problem with an OSS tool that we use and had just upgraded.  I’ve spent quite a bit of time looking through this OSS tool’s codebase because we interact with its internals extensively in another FubuMVC related project.  Without getting too detailed, I think that the way this OSS project models its problem domain internally makes the code more complicated and certainly harder to use for us than it could be if the basic abstractions were changed to another design.  In this case, I’m familiar with the project’s history and it’s easy to see how its internal model probably worked very well with the initial, relatively simple use cases from the first release.  This project is very successful by any standards (even the fact that people gripe about it so frequently in my twitter feed is a testament to how heavily used the tool really is), but they still have to be paying an opportunity cost in building out their newer features.

Re-thinking previously made decisions isn’t an obvious move in most cases, but it’s still something you’re going to have to do as a software developer.

When I’m most productive

I have days and even weeks when working code just bursts onto the screen with seemingly no effort and I pop out of bed the next morning ready to go again (drives my wife up the wall). Unfortunately, there are those other days when it feels like I just cannot make things move and I feel drained and burned out at the end of the day. It’s an axiom that “mama always said there’ll be days like this,” but what if we can just pay attention to what makes the good days good, the bad days bad, and use what we’ve learned to change our environment and habits for the better?

<boring caveats>

  • I’m being subjective about what I’m calling “productivity” and I know it.  Give me any kind of pseudo-scientific sort of metric and I’ll happily shoot holes into why it’s an imperfect measure*
  • Yes, you should optimize the whole, it’s not just about cutting code blah, blah, blah, preach, preach, preach, “I have people skills, don’t you understand!”  Actually, I think that’s all important too, but I’m making the assumption here that development really does mean coded/tested/approved rather than the imaginary “code complete” status.

</boring caveats>

Quick Twitch Development

I think I’m far more productive when I’m able to make very granular commits several times an hour with a clean local build for each commit.  Granted, this could be interpreted as just an impression of productivity, but I think there’s some reality to the micro-commits as an indicator of productivity.  Correlation is certainly not causation, so let’s work backwards and see what’s typically the situation when I’m able to do “quick twitch development:”

  • I’m coding within an isolated codebase and process.  Writing any kind of code that crosses process boundaries, code repositories, machines, or even just major subsystems within a big system can be much less productive in my experience.  More on that later.
  • My development tasks need to be small so that I can quickly flow from unit test to completed code to the next task
  • My unit tests and the build script in general need to run quickly so that I have a short feedback cycle between writing a bit of code and knowing that it does what I want it to do
  • My unit tests tend to cover small areas of the code and achieve small goals such that it’s rare that I need to use a debugger to understand and solve problems
  • I need to understand my problem and technical domain well enough to be able to quickly identify the tasks and steps in whatever user story or feature I’m trying to build.  I’m naturally going to be much slower when I have to get out my notebook to doodle UML or CRC cards, go to the whiteboard with a coworker, or step away from the keyboard just to think my way through the problem.

So what can we do to make development be more like the list above?  As much as I scoff at much of the hot air devoted to Domain Driven Design, paying attention to the idea of “Bounded Contexts” and trying to do most of your coding work inside one context can help.  For my part, I’ve tried to organize the FubuMVC ecosystem into more cohesive repositories and solutions to get smaller codebases where the unit test cycle and automated build cycles become much, much faster — and after the dust settled down from the churn, I’d argue that I see a much higher throughput.

As far as being able to work in small, atomic tasks, I cannot strongly enough recommend the pursuit of Orthogonal Code.  The end result should be faster unit testing feedback cycles and smaller coding tasks.  Working with monolithic blobs of code tends to make tests slower, harder to write, and more likely to push you into needing the debugger more often.

If you’re new to a project, I think you’re going to have to invest some time looking through the codebase to understand the organization of the code, the coding style the team uses, the key abstractions, and the way that responsibilities and roles are assigned within classes or functions of the codebase.  Doing this can help you be more successful in breaking down bigger coding goals into small, achievable tasks and unit tests.

My Personal Involvement with the “Goal”

Years ago I read an article from Joel Spolsky telling us about how wonderful their requirements process was for FogBugz.  I rolled my eyes and thought to myself “of course you’re doing a great job, you’re building a system you use yourself to solve your own problems.  Let’s see you do that at my job where you’re working in a domain you don’t know.”  My sarcasm aside, I think there’s a lot of value in thinking through this statement.  I know that I’m far more productive in projects where I’m:

  • Heavily invested in the success of the project (like, say, making the FubuMVC 1.0 release in January)
  • Deeply knowledgeable and somewhat enthusiastic about solving the problem (this is why “Shadow IT” is so prevalent and even successful in big companies where much more qualified IT personnel struggle to complete the same projects)
  • Getting a lot of active collaboration from the real domain experts.  I can happily derive enthusiasm for a project if the project stakeholders themselves are enthusiastic about the project and heavily engaged.  My most successful project as a professional developer was a technical mess, but the domain expert spent a lot of time with a very green technical lead and helped create a strong vision that did actually make a real difference when we rolled it out to the factory floors.

On the other hand, things just won’t go that well when nobody really believes in the project, the theoretical business partners won’t interact much with you, and it’s “just a job.”  I don’t know what it’s like where you’re at, but the job market in Austin for developers is so hot right now that I think you’re crazy for staying in a job you don’t like.

I understand the terrain

When other developers ask me “what should I learn?,” I will invariably advise them to concentrate first on technology agnostic software fundamentals and only learn technologies or frameworks as you need to.  However, there’s something to be said for having a deep understanding of the technical stack, languages, and tools that you’re working with at any given time.

The sad truth is that in any given set of tools there are going to be plenty of times when you go off the rails, be it “Ruby can’t find this gem, .Net can’t resolve this assembly version, or the dreaded ‘you need new Guid’s from long ago.'”  At a stand up meeting a couple years ago, one of my colleagues was struggling with a null reference exception coming from NHibernate on startup.**  I correctly guessed the root cause and helped him fix it quickly only because:

  • I just happened to know that NHibernate threw that NullReferenceException as a side effect of a configuration key being missing
  • I knew that the code he was using was being executed from an isolated AppDomain
  • I knew that when you programmatically spin up a new AppDomain you usually (always?) need to specify the configuration file for the app settings (a minimal sketch of that wiring follows this list)
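That last point deserves a quick illustration.  Here’s a minimal sketch of the AppDomain wiring (the config file name is made up):

Code Snippet

    using System;

    public static class IsolatedDomains
    {
        public static AppDomain Create()
        {
            var setup = new AppDomainSetup
            {
                // Without this, code running inside the child AppDomain won't see
                // the appSettings it expects, and tools like NHibernate can fail
                // with unhelpful exceptions at configuration time
                ConfigurationFile = "MyApp.exe.config"
            };

            return AppDomain.CreateDomain("isolated", null, setup);
        }
    }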

As much as I think that conceptual ideas like design patterns and design fundamentals are more valuable on the whole than technical trivia, I would have been at a complete loss if I didn’t understand the inner workings of the tooling we were using at that time.  On the other hand, I’ve struggled mightily when I’ve had to work with technologies where I’m not as familiar — the last time I coded on the Java stack, x509 certificates, the first week we used RavenDb last year.

Next time…

There’s a lot more to talk about, but my battery is almost dead and I have to take my son to go play Laser Tag for his birthday.  The single biggest thing that drags down *my* productivity is dealing with development environment dependencies when those upstream dependencies are changing underneath me.  I’m going to promote that to its own blog post.

* My personal metric of coding production is the number of unit and integration tests for a codebase, but it’s only useful in blocks of 25 or larger over significant lengths of time.

** “But Jeremy, that was probably just the case of a bad exception that should have been fixed in that API.”  And I’d say that I agree with you and you should endeavor to make exception messages as clear as possible when you’re writing API’s for other developers.  That said, it’s an imperfect world — especially if you’re gonna go off and use someone else’s code.

A high level look at the FubuMVC Ecosystem

The FubuMVC team has run pretty silent this year, but we’ve been busy just the same.  I just recently finished some, ahem, re-architecture of the GitHub repositories, assemblies, and nuget packages in preparation for our planned 1.0 release in January and it has led to the functionality being spread out into quite a few places.  If you look at our main GitHub organization page, you’ll see 57 different repositories (at the time of this post).  To make sense of what’s there, here’s a rundown of most of the active projects and nugets in what I like to call “FubuWorld.”

I should note that most of these are not available on the public Nuget feed.  For a variety of reasons, we have only been releasing to our own Nuget feed at http://build.fubu-project.org/guestAuth/app/nuget/v1/FeedService.svc/, but I hope to change this as soon as documentation starts to catch up to development.

  1. FubuCore — Foundational library and let’s admit it, a junk drawer.  Model binding, object conversions, reflection helpers, command line support, dependency graph analysis, and a lot of extension methods for things inexplicably left out of the BCL.  The repository also holds our FubuTestingSupport library we use across all the projects with an auto-mocking* interaction context and custom specification extensions.
  2. HtmlTags — a model for generating html on the server side heavily influenced by jQuery syntax
  3. FubuLocalization — very small, core library for localization
  4. Bottles — Our technology for modular, deployable components similar to the areas/slices/engines support in Ruby on Rails.  If FubuMVC wins out over the other .Net OSS web frameworks and makes any kind of dent in ASP.Net MVC usage, I think it’ll be because of our modularity story with Bottles.
  5. FubuMVC — the main repository with the core assembly for content negotiation, container agnostic support, and our Russian Doll implementation.
      1. FubuMVC.Core
      2. FubuMVC.StructureMap
      3. FubuMVC.SelfHost — support for running FubuMVC applications with the Web API Self Host libraries.  At this point I think we’ve completely given up on OWIN hosting for the foreseeable future, or at least I have.
      4. FubuMVC.TestingHarness — reusable library for integration testing FubuMVC applications and Bottles
  6. fubu — command line tool for working with FubuMVC applications.  Virtual directory creation, Bottle support, and “fubu new”
  7. FubuMVC.Core.UI — The html conventions and html helpers
  8. FubuMVC.ViewEngines — just what it sounds like.  We do still have WebForms support, but it’s busted at the moment and I haven’t gotten around to fixing it.  I almost consider that to be a public service (we’ll get it fixed soon).
    1. FubuMVC.Core.View — foundational support for view attachment and activation
    2. FubuMVC.Razor
    3. FubuMVC.Spark
  9. FubuMVC.AntiForgery — cross site request forgery protection.  This is early code that hasn’t been made into a “real” Bottle yet if anybody wants to adopt an OSS project;)
  10. FubuMVC.Core.Assets — FubuMVC’s asset pipeline.  The feedback has been mixed on this thing so far and it needs much more work before it gets a 1.0 version, but you have to use it to take advantage of assets in Bottles imported into your application.  Other FubuMVC users use require.js for all asset management, and I think you could opt for Cassette as well.
  11. FubuMVC.AssetTransforms — asset file transformations that plug into the asset pipeline.  I think we’ve got all of these converted to drop in Bottles now (meaning that you just have to have the assemblies in the application path for them to work)
    1. FubuMVC.Less
    2. FubuMVC.Sass
    3. FubuMVC.Coffee
    4. FubuMVC.YUICompressor — applies compression of both JS and CSS files using the YuiCompressor.Net library
    5. FubuMVC.Minify — javascript minification using the uglify.js library
  12. FubuMVC.Ajax — small library we use for Ajax request correlation in automated testing scenarios
  13. FubuMVC.Authentication — Think passport.js for FubuMVC.  This library is under heavy construction, but the goal is to have out of the box authentication strategies for basic form-based authentication, windows authentication, and every flavor of Twitter/Facebook/Google authentication you can think of by dropping additional Bottles into your application.
  14. FubuMVC.CodeSnippets — Small Bottle to embed code snippets into your running FubuMVC application using Google’s prettify.js library.  We’ll be using this nuget to create “living” documentation for FubuMVC.
  15. FubuMVC.ContentExtensions — Small Bottle that allows you to “inject” content into your existing views from external Bottles.  It was originally developed to allow one of my teams to write customer specific application extensions without any impact to the core application.
  16. FubuMVC.Dates — Remember that annoying thing where you want to store dates as UTC in your database but always display dates and times in the local user’s timezone?  This Bottle contains a recipe for handling the UTC to local timezone conversion in both displaying times and receiving input from the user.
  17. FubuMVC.Diagnostics — Formerly known as “advanced diagnostics.”  This is a drop in Bottle that adds a great deal of runtime tracing and a visualization of the application configuration itself as additional routes to the application.  There’s still a lot more work before January, but my hope is that the diagnostics pages can go a long way toward making FubuMVC self describing for new users.
  18. FubuMVC.HandlerConventions — The single method “Handler” convention for FubuMVC actions
  19. FubuMVC.Instrumentation — A drop in extension to FubuMVC.Diagnostics.  I’ll let Corey explain this one.
  20. FubuMVC.JQueryUI — Not much here yet, but it does give you a drop in integration of jquery.ui into the FubuMVC asset pipeline
  21. FubuMVC.Json — Integrates Json.Net into FubuMVC applications as the json serializer and an option to use model binding with json input.
  22. FubuMVC.Localization — drop in Bottle that gives you very basic integration of FubuLocalization using xml files with a FubuMVC application.
  23. FubuMVC.Media — drop in Bottle that was originally meant to support ReSTful architectures with FubuMVC (atom feeds, HATEOAS), but mostly gets used for the “projections” support to write an object or stream of objects out to Json or Xml without having to bounce through a DTO class first.
  24. FubuMVC.Navigation — The extensible navigation model for FubuMVC applications.  This Bottle enables you to “inject” or add navigation menu items from extension Bottles.
  25. FubuMVC.ServerSentEvents — This Bottle adds support for the Server Sent Events protocol to FubuMVC applications
  26. FubuMVC.SlickGrid — Drop in Bottle to help you use the very excellent SlickGrid library.  We’re using this very heavily in my current project and it’s been getting some enhancement along the way.
  27. FubuMVC.TwitterBootstrap — Drop in Bottle to integrate Twitter Bootstrap into FubuMVC applications.  So far, we’ve got helpers for just a few things, but the big star for me is a reusable Html helper that ties FubuMVC.Navigation to the Twitter Bootstrap menu widget.
  28. Storyteller & Storyteller2 — I’m stubborn to a fault, and I still haven’t given up on Storyteller even though it hasn’t been very active in development.  I just recently forked off a Storyteller 2 with some simplifications to the testing engine that we are using at work.  At some point next year, I hope to rewrite the WPF user interface as a pure HTML/JS/CSS application with FubuMVC.
  29. Serenity — This is growing into the automated testing recipes for FubuMVC.  Today Serenity does two different things:
    1. Jasmine Runner — An “auto test” runner for using the Jasmine library to write tests against JavaScript or CoffeeScript libraries.  Also gives us easy integration of Jasmine tests into our Continuous Integration builds.  This uses the FubuMVC asset pipeline and the FubuMVC.SelfHost library to run a small FubuMVC application to host the assets and manage dependencies between JavaScript libraries and their spec files.
    2. Infrastructure for testing FubuMVC applications using Storyteller and WebDriver
  30. StructureMap 3 — It’s not dead, but I haven’t been able to spend much time on StructureMap the past 18 months.
  31. FubuPersistence — This is new, but Josh and I have been extracting some persistence infrastructure code out of past projects, cleaning it up, and making a new library.  This is our implementation of a generic Repository pattern along with some basic support for soft-deleted objects, multi-tenancy at the database level, and some infrastructure for dumping and resetting application state in automated testing scenarios.  We’re also adding very basic RavenDb support as well.**
  32. FubuValidation and FubuMVC.Validation — A very small validation library based on my project work going back 5-6 years.  Validation frameworks are a dime a dozen, but this one was built to integrate with FubuLocalization for customizing messages and at the time, was meant to make it easy for one of my former teams to “inject” customer specific validation rules from an extension Bottle.  This repository also contains the FubuMVC.Validation library to integrate FubuValidation into FubuMVC applications, but I think that library is due for a near rewrite soon.

* I think the negativity towards auto-mocking containers is exaggerated.  The only danger to me is in not paying attention to what you’re doing.

** Oh My God, Ayende said we’re not supposed to abstract data persistence at all, what are you doing!?!  It’s really this simple, we use the abstractions most of the time when there isn’t any kind of special performance or exotic querying requirement because it makes things very simple, but happily bypass all of that and go right down to the metal whenever we do need to.  Just go retrieve that baby that Ayende threw out in his bathwater.

My comments about “20 controversial programming opinions”

I saw a post going around a couple months ago called “20 controversial programming opinions.”  It’s Saturday afternoon and I’m running out of time for the weekly blog post, so I’m disagreeing with some of those “controversial” opinions as an easy way to make our team’s blogging deadline.

1.) Programmers who don’t code in their spare time for fun will never become as good as those that do. This is an elitist attitude, or at least I’ve been helpfully told this whenever I’ve espoused this same view in public. I’m a football junkie in the late summer and early fall. One of the things you hear coaches say is that a player is or isn’t getting valuable “reps” (repetitions). Just like football players learning the mechanics of their position and how to read the opposing team, developers become far better coders with more experience. The truth of the matter is that very few of us really get to grow up in shops where you can gain enough experience on the job to become genuinely good. Working on side projects, especially if you tackle challenging problems in those side projects, is really the only sure way to become a good developer for most people.

Besides, I’ve never met anyone who didn’t enjoy coding who was actually good at it.

2.) Unit testing won’t help you write good code. I think this opinion is short sighted. I just told someone the other day that Test Driven Development (TDD) as a design heuristic is overrated, but that’s far from saying that it’s useless. Writing your tests first or at least focusing upfront on how you’re going to test a piece of code is a very good way to arrive at orthogonal code – which generally coincides with “good” code that’s easy to work with, extend, alter, and understand in my experience. TDD also has a very big advantage in giving you much quicker feedback on API usage, especially if you’re using tests to drive out the signature of an API. In line with Lean thinking, building API’s and services by “pulling” necessary features discovered from writing tests against the consumer of the new API can easily lead to less of the wasted effort that frequently happens with “push” design where you design an API for what you think you might need later.  Whether or not TDD or even just plain unit testing pushes you to better code is almost a moot point, because having a far better safety net of unit tests makes refactoring much less risky — and refactoring absolutely leads to better code.

3.) The only “best practice” you should be using all the time is “Use Your Brain”. There’s a definite anti-intellectual backlash against abstract ideas like design patterns, Agile/Lean/Kanban processes, *DD techniques and the like that I think is just as unhelpful as swallowing those ideas hook, line, and sinker. “Use your brain” doesn’t mean disregarding everyone else’s ideas — unless you’re really smart enough to completely derive all the accumulated wisdom of the greater development community over the past 50 years from first causes. Maybe you just want to climb to the higher levels of Bloom’s Taxonomy when you do learn about these “best practices” things to use them more intelligently than that guy who went design pattern happy in the project you just inherited.

7.) If you only know one language, no matter how well you know it, you’re not a great programmer.  I think there’s definitely some truth to this opinion, but I think the old “learn one new programming language a year” advice from the Pragmatic Programmers isn’t entirely good advice either.  I definitely think you should expose yourself to a variety of programming concepts — and you could never do that inside a single programming language.  All the same, I think that trying out a plethora of programming languages for a week or two leads to being more of a dilettante than a guy who’s very effective as a developer.  If you want to be a better developer, absolutely try to learn a new programming language once in a while, but I’d first recommend trying to solve different and harder problems in a language you already know rather than writing tic tac toe a dozen different ways.  To make it more concrete, let’s say that we know two C# developers named Hank and Xavier.  Hank does a highly challenging, in depth side project in C# and supports it through quite a few changes and iterations.  On the other hand, Xavier does several small, simple versions of a TODO application in other languages.  Both developers should have improved themselves, but I think I’d bet on Hank having gained more effectiveness as a developer than Xavier did in the same time frame.  Maybe all I’m trying to say is that I think you should focus on deep problem solving skills instead of only shallowly learning lots of different new shiny object technologies and programming languages.


I think I could say a lot more, but there’s college football on and steaks in the fridge that won’t grill themselves today.

Initial thoughts on some new-fangled things part 2

Picking up from my last post, let me wrap up my initial thoughts on Event Sourcing, Document Databases, and why I think it’s going to take a generation for all of this stuff to be mainstream.

My bias

If you don’t agree with the following bullet points then you’re also unlikely to agree with my feelings about document databases in particular — and that’s okay, but at least let’s understand where we’re both coming from.

  • I very strongly believe in incremental and evolutionary approaches to software development and I naturally prefer tools that fit an evolutionary or continuous model of working rather than tools that are optimized for waterfall philosophies (passive code generation, most Microsoft tools before the last couple years).
  • I despise repetitive code ceremony (ironic considering that most days I work with statically typed C#, but still).
  • I think in terms of objects with an increasing contribution from functional programming.  When I’m designing software, I’m thinking about responsibilities, roles, and behavior rather than “get data from table 1, 2, and 3, then update table 4.”
  • A database is nothing more than the persistence subsystem of an application in my world view.  The model in my code is reality, the database is just a persistent reflection of current and historical state.

Where I’m coming from

I started as a “Shadow IT” developer writing little tactical solutions for myself with MS Access before moving on to being a “real” developer doing all my data access work with stored procedures and generally developing against ADO record sets in my VB code.  At that point, database related work was the central technical activity.  From there, I followed a typical path from doing quasi-OO with hand rolled mapping code to using an Object Relational Mapper to do all my persistence work and database work.  Before adopting RavenDb, one of my previous teams heavily leveraged Fluent NHibernate and its conventions to almost completely generate our database schema from our classes and validation rules.  At that point, database work was minor in terms of our total effort except for the occasional performance optimization firedrill — especially compared to previous projects in my career.

Even so, I wasn’t completely happy because there was still friction I didn’t care for:

  1. You frequently compromised the shape of your objects to be more easily persistable/mappable with NHibernate
  2. The bleeping “everything must be virtual” thing you have to remember to do just so Anders Hejlsberg will allow NHibernate to create a working virtual proxy
  3. Having to constantly worry about whether or not you are doing lazy or eager fetching in any given scenario

On my previous and hopefully my current project, things got even better because…

Persistence is easier with a Document Database and Event Sourcing

I saw a lot of the so-called impedance mismatch problem while persisting objects to a relational database.  Once you consider hierarchies, graphs of objects, polymorphic collections, custom value types, and whatnot, you find that your behavioral object model becomes quite different from your relational database structure.  If you’re using an ORM, you quickly learn that there’s a substantial cost to fetching an entire object graph if the ORM has to cross multiple tables to do its work.  At that point you start getting into the guts of your ORM and learn how to control lazy or eager fetching of a collection or a reference in scenarios that perform poorly — and just so we’re very clear here, you cannot make a blanket rule to always fetch lazily or always fetch eagerly.

The great thing with using a document database to me is that most of that paragraph above goes away.  The json documents that I persist in RavenDb are basically the same shape as my object graph in my C# code.  I’m sure there are exceptions, but what I saw was that the whole eager or lazy fetching problem pretty well goes away because it’s cheap to pull the entire object graph out of RavenDb at one time when it’s all stored together rather than spread around different tables.  Take away the concerns about lazy loading, and I no longer need a virtual proxy and all the annoying “must be explicitly virtual” ceremony work.

Mapping gets much simpler when all you’re doing is serializing an object to json.  We occasionally customized the way our objects were serialized, especially custom value types, but over all it was less effort than mapping with an ORM even with conventions and auto mapping.  I think the big win was hitting cases where you need polymorphic collections.  Using RavenDb we just told Json.Net to track the actual class type and boom, we could persist any new subtype of our “CaseEvent” class in a property of type “IList<CaseEvent>.”  Since I’ve always thought that ORM’s and RDBMS’s in general handle polymorphism very badly, I think that’s a big win.
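If you haven’t seen that Json.Net trick, here’s a hedged sketch of the mechanics.  The event types below are hypothetical stand-ins for our real classes, and TypeNameHandling is the relevant Json.Net setting:

Code Snippet

    using System.Collections.Generic;
    using Newtonsoft.Json;

    public abstract class CaseEvent { }
    public class CaseOpened : CaseEvent { }
    public class CaseClosed : CaseEvent { }

    public class Case
    {
        public IList<CaseEvent> Events { get; set; }
    }

    public static class CaseSerialization
    {
        // TypeNameHandling.Auto writes a $type hint wherever the runtime type
        // differs from the declared type, so each concrete CaseEvent subtype
        // makes the round trip intact
        private static readonly JsonSerializerSettings _settings =
            new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.Auto };

        public static string ToJson(Case theCase)
        {
            return JsonConvert.SerializeObject(theCase, _settings);
        }

        public static Case FromJson(string json)
        {
            return JsonConvert.DeserializeObject<Case>(json, _settings);
        }
    }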

We do write what we call “persistence check” tests that just verify that an object and all the fields we care about can make the round trip from persistence to being loaded later from a different database session.  That small effort has repeatedly saved our bacon, but I insisted on that work with NHibernate as well anyway.
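The shape of one of those persistence check tests is about this simple.  This is only a sketch, using NUnit and RavenDb’s embedded, in-memory storage, and the document class and field are purely illustrative:

Code Snippet

    using NUnit.Framework;
    using Raven.Client.Embedded;

    public class CaseDocument
    {
        public string Id { get; set; }
        public string Title { get; set; }
    }

    [TestFixture]
    public class CaseDocumentPersistenceCheck
    {
        [Test]
        public void title_survives_a_round_trip()
        {
            using (var store = new EmbeddableDocumentStore { RunInMemory = true }.Initialize())
            {
                string id;

                // Write the document in one session...
                using (var session = store.OpenSession())
                {
                    var doc = new CaseDocument { Title = "persistence check" };
                    session.Store(doc);
                    session.SaveChanges();
                    id = doc.Id;
                }

                // ...and prove the fields we care about come back in another
                using (var session = store.OpenSession())
                {
                    var loaded = session.Load<CaseDocument>(id);
                    Assert.AreEqual("persistence check", loaded.Title);
                }
            }
        }
    }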

If you’re building systems where your objects are flat, then maybe this section just doesn’t matter as much to you, but it certainly has been a big advantage for me.

Event Sourcing  — Want your cake?  Wanna eat it too?

The combination of event sourcing and RavenDb as a database has significantly reduced the tension between your object model and easy persistence.  I’m not hardcore on the philosophy that says “setters are evil,” where an object should never be allowed to get into an invalid shape and you can only change its state by calling its public methods — but that is still a consideration for me in designing classes.  The problem is that you constantly compromise — or incur extra friction — if you insist on directly persisting the classes that implement your business logic into your database with an ORM.  Either you:

  1. Open up public setters and a default constructor on your class to make your ORM happier at the cost of potentially allowing more coding errors into your business logic
  2. Use fancier, and in my experience more error prone, techniques in your ORM to map non-default constructors or backing fields

If you use Event Sourcing instead, you can have this scenario:

  1. Persist the events as dumb data bags where there’s no downside in making it completely serializable
  2. Persist a “readside” view of the system state suitable for your UI to consume that’s again devoid of any behavior so it’s also a dumb data bag
  3. Put the actual business logic with all the validation you want in a separate object that governs the acceptance and processing of the events, but you don’t actually persist that object, just its events (I know there’s a little more to this for optimization, snapshots, etc. but I want to hit publish before dinner).

I don’t know that this is a big win for me, but in a system with very rich business logic, I think you’re going to like this side effect of event sourcing.
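A bare-bones sketch of that three-way split might look something like this (all of the names are hypothetical, and I’m deliberately ignoring snapshots and the rest of the optimization story):

Code Snippet

    using System;
    using System.Collections.Generic;

    // 1. Events are dumb, completely serializable data bags
    public class CaseOpened
    {
        public DateTime OpenedDate { get; set; }
        public string OpenedBy { get; set; }
    }

    // 2. The "readside" view is another dumb data bag shaped for the UI
    public class CaseSummary
    {
        public string Id { get; set; }
        public string Status { get; set; }
    }

    // 3. The behavioral object validates and records events, but only its
    //    events ever get persisted; this class itself never does
    public class Case
    {
        private readonly IList<object> _pendingEvents = new List<object>();

        public IEnumerable<object> PendingEvents
        {
            get { return _pendingEvents; }
        }

        public void Open(DateTime openedDate, string openedBy)
        {
            if (openedBy == null) throw new ArgumentNullException("openedBy");

            _pendingEvents.Add(new CaseOpened { OpenedDate = openedDate, OpenedBy = openedBy });
        }
    }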

Um, referential integrity?  Uniqueness?  Data validity?

Some of you are going to read this and say “what about referential integrity or uniqueness?”  You’ll have to implement your own code-based uniqueness validation instead of relying on the database — but you really needed to do that anyway for usability’s sake in your user interface.  I don’t see the referential integrity as being that big of an issue because you’re really storing documents as hierarchical data anyway.  Either way, even if code based validation causes you more work, I’d say that these downsides are far outweighed by the advantages.

When will NoSQL databases be mainstream?

When I was growing up as a software developer, most people understood software development through the paradigm of a relational database.  You had the database, processes that pushed data between tables, and maybe a user interface that displayed the database table and captured updates to be processed to the database.  Back then we would routinely get “requirements documents” explaining business goals completely in terms of which tables needed to be updated.

For a variety of reasons I’ve completely rejected this style of development, but many people haven’t.  I wouldn’t be surprised if database centric coding is still the dominant style of development in the world.  Honestly, I think that the relational database with procedural code paradigm is far easier for most people to understand compared to object oriented programming, functional programming, or anything even more esoteric.

The relational database paradigm has an absolutely dominant mindshare amongst developers and there’s an absurd amount of prior investment in tooling for RDBMS.  Add all that together, add a huge dash of pure inertia, and I think you’ve got the biggest example of technical “who moved my cheese” that I’ve seen in my technical career.

Next time…

Just to get a third week out of this theme, I’ll summarize how my team used event sourcing on my previous project and get a bit more code centric.

Initial thoughts on some new-fangled things part 1

I’ve been lucky over the past year and change to work with some interesting projects that used some of the newer technologies and architectural concepts like Command Query Responsibility Segregation (CQRS), Event Sourcing, Eventual Consistency, and RavenDb as a document database.  I cannot speak to the scalability benefits of these tools because that’s just not an area where I have expertise.  Instead, I’m interested in how these tools have reduced coding ceremony, improved testability, and allowed my very small teams to effectively do continuous design by giving us much more architectural reversibility.  I ran out of time and energy on this post, but I’ll follow up next week with more on event sourcing, what I like about RavenDb, and how we’ve used all of this in our projects.

Continuous Design is better with a Document Database

I gave a talk earlier this month at Agile Vancouver called “Architectural Reversibility,” largely about how we can create better designs if we are able to do design incrementally throughout the lifetime of a project instead of having to do it all upfront.  My point of view on this topic is that we’re far more likely to succeed if we’re able to recover from the inevitable errors in architecture, design, or requirements — or better yet, if we’re able to delay commitment to elements of our technical architecture until we know more later on in the project.  Furthermore, I said that you should be cognizant of this when selecting technologies.  One of my slides showed this progression of data access/persistence technologies from my own development career that went something like this:

  1. Stored procedures (sproc) for every single bit of data access
  2. Object Relational Mapper & Relational Database
  3. Document Database

Let’s say that I need to add a property to an entity in my existing system.  Using the same numbering scheme as above, I would have to:

  1. Change the DDL defining the proper table.  Update every sproc that returns that field and any that might need to search on that field.  Go update all the places in the code that use the data returned by that table.
  2. Change the DDL defining the proper table or write a data migration.  Change the relevant class in the code (even with Ruby ActiveRecord you may still touch the class to add validation rules).  Change the ORM mapping to add this field and verify the persistence of the new field all the way to the database.
  3. Add a new property to the proper class and make sure that it serializes.

Adding or changing the shape of the data in the 90’s style stored procedure model was tedious.  Back then you had to try much harder to get things right on the first try.  Using an ORM was much better, especially if you used conventions to drive the ORM mapping or even to generate the database schema from your classes.  But using a document database, where you just serialize objects to a json structure with no schema forcing you to effectively do double data entry for both the database and the object model?  That’s the best possible solution for really being able to do continuous design, because there’s very minimal friction in changing your object model (at least before you deploy for the first time, anyway).
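
Just to make that concrete, here’s a minimal sketch of option 3 using RavenDb’s client API.  The Invoice class and its properties are purely illustrative, and I’m assuming a RavenDb server running at the default local address:

Invoice.cs
using System;
using Raven.Client.Document;

public class Invoice
{
    public string Id { get; set; }          // RavenDb assigns "invoices/1" style ids
    public decimal Amount { get; set; }
    public DateTime DueDate { get; set; }   // the newly added property
}

public class Program
{
    public static void Main()
    {
        // Assumes a RavenDb server at localhost:8080
        using (var store = new DocumentStore { Url = "http://localhost:8080" }.Initialize())
        using (var session = store.OpenSession())
        {
            // The new property just serializes into the document's json --
            // no DDL change, no mapping change, no sproc edits
            session.Store(new Invoice { Amount = 100m, DueDate = DateTime.Today });
            session.SaveChanges();
        }
    }
}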

To summarize, document databases absolutely rock for architectural reversibility and that’s a very, very big deal.

Automated testing

In my strong opinion, doing automated, end to end testing using the database is vastly easier and more effective with a document database than with a relational database.  I feel that this advantage is enough by itself to justify the usage of a document database.   Why do I think that?  Well first, let’s review the two mandatory parts of any repeatable automated test:

  1. Known inputs
  2. Expected outcomes

In order to be really successful with automated testing, I think you need to achieve a couple things:

  1. The tests have to run fast enough to provide timely feedback.
  2. It has to be mechanically cheap for a test author to put the system into its initial state.
  3. You cannot allow state to bleed between tests, because that makes them unreliable.
  4. And a Jeremy special:  data input for automated tests should be isolated by test, i.e. no shared test data!

Referential integrity has repeatedly been a huge source of friction in test automation.  I have frequently found myself adding junk data to a database for automated tests that was not remotely germane to the meaning of the test, just to get the database constraints to shut up.  Folks, that’s friction that you just won’t have with a document database.

Immediately after adopting RavenDb, we picked up the trick of using Raven’s in memory storage for testing and completely scrapping the full database between tests, virtually guaranteeing that our tests are isolated from each other.  You can certainly do something like this with relational databases, but in my experience it’s much more work and far slower no matter how you do it.  Being able to very quickly drop and rebuild a clean database in code is a killer feature for automated testing.
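
For the curious, the setup is tiny.  Here’s a rough sketch assuming the Raven.Client.Embedded package; the fixture class itself is hypothetical, but RunInMemory is the real switch:

InMemoryDatabaseFixture.cs
using System;
using Raven.Client.Embedded;

// Hypothetical test fixture: each test gets a brand new, in memory database
public class InMemoryDatabaseFixture : IDisposable
{
    public EmbeddableDocumentStore Store { get; private set; }

    public InMemoryDatabaseFixture()
    {
        // RunInMemory = true keeps all storage off disk, so "scrapping the
        // full database" is just disposing this object between tests
        Store = new EmbeddableDocumentStore { RunInMemory = true };
        Store.Initialize();
    }

    public void Dispose()
    {
        Store.Dispose();
    }
}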

Separating the read and write models

The first time I saw Greg Young present on CQRS in 2008 I thought to myself “that’s interesting, but keeping two separate models for the same thing sounds like a lot of busywork to me.”  In practice, I’m finding it to be more helpful than I thought because it has allowed my team to be able to focus on one problem at a time and jump into the work without having to understand everything at once.

We just started a project where we’ll be exchanging messages between our web application and an existing backend.  We don’t exactly have the messaging workflow locked down, but our immediate concern is getting feedback on the usability and workflow of the proposed user interface.  To that end we created a very simple “read” model that stores only the data our views need, in a shape that’s easy to consume on the page, with little concern for what the real, behavioral “write” side model will look like later on.  We’re even able to write end to end automated tests against our user interface by setting up flat “read” documents in the database.

In iteration 2, we’ll focus on the events and messages flowing through the system and flesh out the “write” model and how it responds and changes with events.  In both cases, we’re able to tightly focus on only one aspect of the system and test each in isolation.  Later on we’ll use either RavenDb’s built in mechanisms or a code based “denormalizer” to keep the write and read models synchronized.  I like this way of working because it allows me to focus on a subset of the application at a time without ever being overwhelmed by too many variables.
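
To make the denormalizer idea concrete, here’s a minimal sketch of the code based route.  Every class name below is purely illustrative; the only real API in play is RavenDb’s IDocumentSession:

OrderSummaryDenormalizer.cs
using Raven.Client;

// A flat "read" document shaped for exactly what one screen needs
public class OrderSummaryView
{
    public string Id { get; set; }
    public string CustomerName { get; set; }
    public string Status { get; set; }
}

// An event raised by the "write" side of the system
public class OrderStatusChanged
{
    public string OrderId { get; set; }
    public string NewStatus { get; set; }
}

// The hand-rolled denormalizer that keeps the read side synchronized
public class OrderSummaryDenormalizer
{
    private readonly IDocumentSession _session;

    public OrderSummaryDenormalizer(IDocumentSession session)
    {
        _session = session;
    }

    public void Handle(OrderStatusChanged message)
    {
        // Load the flat view document and update just what changed
        var view = _session.Load<OrderSummaryView>(message.OrderId);
        view.Status = message.NewStatus;
        _session.SaveChanges();
    }
}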

Honestly, I think I’d be a lot more hesitant to try this kind of architecture with a relational database where I’d have to lug around more stuff (DDL scripts, ORM mappings, data migration scripts, etc.) than I do today with a document database where the document json structure just flows out of the existing classes. RavenDb’s index feature does a lot to alleviate the tedious “left hand/right hand” coding that I worried about when I first learned about CQRS.

Eventual Consistency requires some care in testing

Jimmy Bogard recently blogged about the downsides of eventual consistency with a user interface.  We had some similar issues on a previous project.  Rather than repeat everything Jimmy said, I’ll simply add that you must be cognizant of eventual consistency during testing.  A typical testing pattern is going to be something like:

  1. Arrange — set up a test scenario
  2. Act — do something that is expected to change the state of the system
  3. Assert — check that the system is in the state that you expected

The problem with eventual consistency is that there’s an asynchronous process between writing data in step 2 and being able to read that new data back in step 3.  You absolutely have to account for this in both your automated tests and any manual testing.  My cheap solution with RavenDb is to swap out our low level RavenDb “persistor” in our IoC container with a testing implementation that just forces any reads to wait for all pending writes to finish first.
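
Here’s a rough sketch of that trick.  The IPersistor interface is a stand-in for our own low level abstraction, but WaitForNonStaleResults() is RavenDb’s real query customization for exactly this situation:

SynchronousPersistor.cs
using System.Linq;
using Raven.Client;

// Stand-in for our application's own low level persistence abstraction
public interface IPersistor
{
    IQueryable<T> Query<T>();
}

// Testing implementation registered in the IoC container during tests:
// every read blocks until RavenDb's pending index work is finished,
// so the Assert step never sees stale data
public class SynchronousPersistor : IPersistor
{
    private readonly IDocumentSession _session;

    public SynchronousPersistor(IDocumentSession session)
    {
        _session = session;
    }

    public IQueryable<T> Query<T>()
    {
        return _session.Query<T>()
                       .Customize(x => x.WaitForNonStaleResults());
    }
}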

More importantly, I’m going to spend quite some time with our testers making sure that they have insight and visibility into this behavior so that nobody ends up pulling their hair out.

Finally…

I’m not a deep expert on these tools and techniques, but I’m seeing some things that I like so far.  At this point, I’d strongly prefer to avoid working on projects involving a relational database ever again.  As for RavenDb, it’s made a strong first impression on me and I’m looking forward to seeing where it goes from here.  I will commit to fleshing out a quick start recipe for integrating RavenDb with a drop in “Bottle” for FubuMVC as our de facto recommendation for new FubuMVC projects.

Next time…

It’s Friday afternoon, I have to hit publish before the end of the day for an elimination bet, and I haven’t seen the inside of the gym all week, so I’m quitting here.  In part 2 I’d like to share why I think persistence is much easier with a document database, how we’re able to just not worry about a database at all early on, and my thoughts on developing with event sourcing.  Until next time, adieu.

Jeremy’s Only Rule of Testing

Years ago I wrote a series of blog posts called “Jeremy’s Laws of Test Driven Development” (1, 2, 3, and 4) describing what I thought were some important coding and design rules for being more successful with TDD.  I still believe in the thinking behind all those silly “laws,” but now I would say that all of that writing is a manifestation of a lower level first cause in successful software development — namely the extreme importance of quality feedback in your software efforts.

Consider this thought:  every single line of code you write, every thought you have about the user experience, the business rules, and the design you intend to use, and every assumption you’re making about the system’s usage is potentially wrong — but often wrong in subtle, hard to notice ways.  My experience is that my projects have gone much better when my team and I are able to work in tight cycle times with solid feedback mechanisms that constantly nudge us towards better results.

With that in mind, I’ve boiled down my old personal rules for using TDD into a single, lower level rule to maximize the effectiveness of the feedback my team gets from testing:

Test with the finest grained mechanism that tells you something important

Since both the quantity and quality of your testing feedback matters, here’s a pair of examples from my new job that illustrate how this rule can guide your approach.

Scenario #1:  Use a tighter feedback loop

A couple weeks ago, I watched one of my new colleagues troubleshooting an issue with one of our phone helpdesk applications.  The call waiting elevator music wasn’t playing or switching off at the right time, and you know how annoying that can be.  My colleague had to kick off the process by first making a call with the world’s lamest looking 90’s era cellphone and then stepping through the code manually until he was able to find the faulty logic in our system.  The problem turned out to be in the coordination logic written by my company and not in the 3rd party phone control software.

The fault definitely lies with the design of that code, but my colleague and I were also violating my little testing rule because we were forced to use an unnecessarily slow and cumbersome feedback cycle.  What if, instead, the code had been structured in such a way that we could write a narrowly focused unit test against nothing but the logic that decides when to turn the call waiting music on and off?  That very narrowly focused, very fast running unit test could have told my colleague something valuable, namely that the if/then coordination logic was all wrong — all without having to look terminally uncool using the cheap 1990’s looking cell phone.  Add in the number of times we had to repeat the process to track down the problem and then verify that the fix was correct, and the finer grained tests look even better.
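
To make that concrete, here’s the kind of test I wish we could have written.  The classes below are hypothetical and the real rule was surely more involved, but notice that the decision logic is pure code with no phone anywhere in sight:

CallWaitingMusicLogicTester.cs
using NUnit.Framework;

// Hypothetical extraction of just the coordination logic into a pure class
public class CallWaitingMusicLogic
{
    public bool ShouldPlayMusic(int callersOnHold, bool agentAvailable)
    {
        return callersOnHold > 0 && !agentAvailable;
    }
}

[TestFixture]
public class CallWaitingMusicLogicTester
{
    [Test]
    public void music_plays_only_when_somebody_is_actually_waiting()
    {
        var logic = new CallWaitingMusicLogic();

        // Runs in milliseconds, no cellphone required
        Assert.IsTrue(logic.ShouldPlayMusic(callersOnHold: 1, agentAvailable: false));
        Assert.IsFalse(logic.ShouldPlayMusic(callersOnHold: 0, agentAvailable: false));
        Assert.IsFalse(logic.ShouldPlayMusic(callersOnHold: 1, agentAvailable: true));
    }
}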

Scenario #2:  Sometimes a unit test is useless

I had a conversation the other day with a different colleague who asked me whether he’d be able to write a unit test in Jasmine for the code we’ll need to write that configures event handling and options in a SlickGrid table embedded in our application.  Applying my rule again, this proposed testing mechanism is a very tight feedback loop, but the test just doesn’t tell me anything useful.  I can assert all day long that I’m calling methods and setting properties on the SlickGrid JavaScript object, but that doesn’t tell me whether or not the grid behaves the way we want when the application is running.  In this case, we have to go to a coarser grained integration test that works against the user interface.

Making testing more useful

What’s the purpose of testing in your daily job?  Is it to certify that the software works exactly the way it’s supposed to?  What if instead we shifted our thinking about testing to focus on removing flaws and risk from our software project?  That might seem like a subtle restating of the same goal, but it can drastically change how your team or organization approaches software testing.

If your goal is to verify that the system works correctly, you’re probably more likely to focus on black box testing of the system in realistic scenarios and environments because that’s the only real way to know that the system really does work.  In that approach you probably have some formal separation between the developers and the testing team — again to guarantee that you have a completely independent appraisal of the code.

On the other hand, if you’re using testing as a way to remove defects and risk, I think you’re much more likely to follow a testing philosophy similar to my rule about tighter feedback loops, which I think inevitably leads to an emphasis on white box testing solutions and fine-grained unit testing backed up with some minimal black box testing.  If you’re not familiar with the term “white box testing,” it means taking advantage of a detailed knowledge of the system internals in your testing.  I’m sure that it can be done otherwise, but I wouldn’t even begin to try to use white box testing without a very deep synergy and a high degree of collaboration between developers and testers.  In this approach, I think you’d be foolish to keep your developers and testers formally separated.

… and lastly, a brief aside about mocking

I once wrote that you shouldn’t mock chatty interfaces or interfaces outside of your own codebase.  Taking the two examples above, doing an assertion that a message was sent to “TurnOffCallWaiting()” or “TurnOnCallWaiting()” is useful in my opinion.  I certainly have to test the real code behind the “TurnOn/Off()” methods, but I will happily use interaction testing against this kind of goal-oriented interface.

Moving to my second scenario, doing mock object assertions that I fiddled with a lot of fine-grained “beforeBeginCellEdit” and “invalidateRow()” methods when all I really care about is that the data in a row of an html table was updated?  Not so much.

If you do need to interact with any kind of chatty, low level API — especially if it’s in a 3rd party library or tool — I think you’re much better off wrapping a gateway interface around that API, expressed in the semantics of your goals for that API, like “TurnOffCallWaiting()”.
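
As a parting sketch, here’s roughly what I mean.  The vendor API class below is a made up stand-in for whatever chatty 3rd party library you happen to be stuck with:

PhoneSystemGateway.cs
// Stand-in for the real, chatty vendor API
public class VendorPhoneApi
{
    public void SetAudioChannel(int channel) { }
    public void StartStream(string name) { }
    public void StopStream(string name) { }
}

// The gateway is expressed in the semantics of *our* goals
public interface IPhoneSystemGateway
{
    void TurnOnCallWaiting();
    void TurnOffCallWaiting();
}

// Exactly one class knows about the vendor's fiddly details, and the
// rest of the codebase mocks the goal-oriented interface instead
public class VendorPhoneSystemGateway : IPhoneSystemGateway
{
    private readonly VendorPhoneApi _api;

    public VendorPhoneSystemGateway(VendorPhoneApi api)
    {
        _api = api;
    }

    public void TurnOnCallWaiting()
    {
        // All the fine-grained vendor calls live here and only here
        _api.SetAudioChannel(2);
        _api.StartStream("hold-music");
    }

    public void TurnOffCallWaiting()
    {
        _api.StopStream("hold-music");
    }
}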