StructureMap 3.0 is very nearly done (no, seriously)

StructureMap 3.0, the next version of the original IoC/DI Container for .Net, is almost done, and now is a great time to speak up about any improvements and/or changes you’d like to have in SM3.  You can see a list of previous updates (and a shameful pattern of stopping and starting on my part) here.  To be honest, my primary goal at this moment — and why I’m able to work on this during day job hours this week — is to improve the performance of our FubuMVC and FubuTransportation applications, with a secondary goal of improving StructureMap’s diagnostic ability to explain what’s happening when things go wrong.

Big Changes and Improvements:

  • The exception messages provide contextual information about what StructureMap was trying to do when things went wrong
  • The nested container implementation is vastly improved, much faster, and doesn’t have the massive singleton behavior bug from 2.6.*
  • All old [Obsolete] 2.5 registration syntax has been removed, and there’s been a major effort to enforce consistency throughout the registration APIs
  • The original StructureMap.dll has been broken up into a couple pieces.  The main assembly will be targeting PCL compliance thanks to the diligent efforts of Frank Quednau, and that means that Xml configuration and anything to do with ASP.Net has been devolved into separate assemblies and eventually into different Nuget packages.  This means that StructureMap will theoretically support WP8 and other versions of .Net for the very first time.  God help me.
  • The strong naming has been removed.  My thought is to distribute separate Nuget packages with unsigned versions for sane folks and signed versions for enterprise-y folks
  • Lifecycle (scope) can be set individually on each Instance (stupid limitation left over from the very early days)
  • Constructor selection can be specified per Instance
  • Improved diagnostics, both at runtime and for the container configuration (still in progress)
  • Improved runtime performance, especially for deep object graphs with inline dependencies (i.e., FubuMVC behavior chains)
  • The interception model has been completely redesigned
  • The ancient attribute model for StructureMap configuration has been mostly removed
  • The “Profile” model has been much improved
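To make the per-Instance improvements concrete, here’s a hedged sketch of roughly what the new registration could look like (the method names shown are illustrative of the direction, not a final API, and the types are invented for the example):

```csharp
// Illustrative sketch only -- the final SM3 registration API may differ.
public class AppRegistry : Registry
{
    public AppRegistry()
    {
        // Lifecycle (scope) set on each Instance individually,
        // not only at the PluginType level
        For<IWidget>().Use<AWidget>().Singleton();
        For<IWidget>().Add<BWidget>().Transient();

        // Constructor selection specified per Instance
        For<IService>().Use<RemoteService>()
            .SelectConstructor(() => new RemoteService(null));
    }
}
```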

What’s Next?

You can take the pre-release builds of StructureMap 3.0 out for a spin at any time from the fubu TeamCity Nuget feed at http://build.fubu-project.org/guestAuth/app/nuget/v1/FeedService.svc.  A public push could come as early as February 1st, 2014, but I’m not pushing to the public Nuget feed until the stuff in the next paragraph is done.  My thought is that the initial release will be the core StructureMap package, StructureMap.AutoMocking, and StructureMap.AutoFactory.  The new Xml configuration package and a new ASP.Net support package will come later when and if there’s a demand for that.

The issue list is getting shorter and more specific, so I’m hopeful that development is drawing to a close.  I’m adding a lot of new explanatory acceptance tests as I write the new documentation (with FubuDocs!).  Frank is going to push through the PCL compliance and that’ll inevitably lead to some new complexity in how we build and create the Nugets in our CI builds.

I’m also going to take the new bits out for a spin with a new FubuMVC application and use that to test out what the new exception messages and diagnostics look like.  The forthcoming FubuMVC.StructureMap3 package will embed new diagnostic capabilities.

Early next week, I’m going to try to use StructureMap 3 in a bigger application at Extend Health with an eye toward measuring the new performance versus 2.6.3.

Introducing FubuCsProjFile for Project & Solution File Manipulation

tl;dr:  FubuCsProjFile is a new library from the fubu community for manipulating Visual Studio.Net project files and a new composable templating engine.

The FubuMVC community was busy last year building all new functionality for build automation, documentation generation, and project templating.  What we haven’t done yet is actually talk about what we were doing in any kind of way that might make it possible for other folks to kick the tires on all that stuff.  This blog post and the heavily under construction website at http://fubuworld.com are an attempt to change that.

For a couple years we’ve had a couple one-off pieces of code to manipulate csproj files with raw Xml manipulation copied across some of our tooling.  When we started to get serious about rebuilding the “fubu new” functionality, we knew that we first needed a more serious way to add, query, remove, and modify items in .csproj files and .sln files.  I looked around for prior art, but found little besides the MSBuild libraries themselves which — shockingly! — did not work in Mono (wouldn’t even compile as I recall).  Fortunately, Monodevelop has very robust MSBuild manipulation code with all kinds of care taken to avoid unnecessary merge problems by maintaining line breaks and file formatting.  Because it has a permissive license, I mostly copied the csproj manipulation code out of Monodevelop and wrapped a slightly prettier object model around their very low level API.

On top of the csproj file manipulation, I added a hack-y class for adding and removing projects from Visual Studio.Net solution files and a from-scratch templating engine we use heavily in our “fubu new” functionality.

We certainly don’t yet support every single thing you can do in a csproj file, but we’re already using FubuCsProjFile within Bottles to attach embedded resources, inside the forthcoming Ripple 3.0 release for querying and manipulating assembly references, and as part of the prototype functionality inside of the fubu.exe tool for generating Spark or Razor views.
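To give a flavor of the object model, here’s a rough sketch of the kind of usage FubuCsProjFile enables (the class and method names below are my assumptions for illustration; check the actual library for the real API):

```csharp
// Hypothetical usage sketch -- consult the FubuCsProjFile source for the real names.
var project = CsProjFile.LoadFrom("src/MyApp/MyApp.csproj");

// Add, query, remove, and modify items in the project file
project.Add<CodeFile>("Controllers/HomeController.cs");
project.Add(new AssemblyReference("StructureMap"));

// Writes the file back while preserving the original formatting and line breaks
project.Save();

// Solution file manipulation sits on top of the same model
var solution = Solution.LoadFrom("src/MyApp.sln");
solution.AddProject(project);
solution.Save();
```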

FubuCsProjFile is available on Nuget under the permissive Apache 2.0 license.  We have received some reports that FubuCsProjFile has some unit tests that break on Mono (“\” instead of “/”, Unix vs. Windows line breaks, the normal stuff).  That’ll get resolved soon-ish, but that just means that I can’t claim that it will work flawlessly on Mono/*nix right now.


Introducing FubuDocs for “Living Documentation”

TL;DR:  The FubuMVC community is finally getting its technical documentation act together with a new tool called FubuDocs.

The Wrong Way

About 5 years ago I released StructureMap 2.5 with the idea that it would permanently lock the public registration APIs into a new, shiny fluent interface that everyone would love using from then on.  As part of that release, I wrote comprehensive documentation with lots of embedded code samples painstakingly copied into the static html files and published a pure HTML website.  Then I started using StructureMap 2.5 in daily work, found out that I hated using the new fluent interface, and immediately changed the public APIs in subsequent releases to smooth out the usability problems.  Unsurprisingly, I never got around to updating the now defunct documentation code samples.

Moreover, the documentation that I did write wasn’t always helpful to users because the organization of content on the site did not make sense to them and they weren’t always able to find the right content.

Fast forward several years and the FubuMVC community has built a tremendous number of potentially useful libraries, features, and frameworks that nobody knows about mostly because I’m nearly allergic to writing documentation.  To give all our hard work an actual chance to be successful, Josh Arnold and I envisioned and built a new tool for creating and publishing living documentation we fittingly called FubuDocs (the FubuDocs documentation at this link is created and published with FubuDocs itself).

FubuDocs Highlights

  • The documentation lives side by side with the real code
  • We “slurp” sample code directly out of the real code and automated tests so the sample code cannot get out of synch with the current API, which avoids the headaches I had earlier with the StructureMap documentation.
  • Heavily inspired by readthedocs.org, FubuDocs determines the navigation structure and navigation page elements for you based on the files in your documentation project
  • You can run a FubuDocs project website interactively using the fubudocs tool distributed as a gem.
  • The fubudocs interactive mode exposes a topic manager tool you can use to extend, reorder, and modify the documentation outline.
  • You can use a combination of Markdown syntax and custom html elements to author content
  • Exports the final content to static HTML (we are just publishing to GitHub Pages).
  • It’s “skinnable” — in theory, works on my box, nobody else has tried that yet


In a later post, I’ll talk about how we have automated the publishing and versioning of technical documentation within our continuous integration infrastructure.

Presentations in NDC London and Skillsmatter in December

Back in the old days I used to get aggravated at folks who asked to blog on CodeBetter and then did nothing but post about their upcoming conference talks, but apparently I’m that guy now.

Anyway, I’m going to be in London the first week of December for NDC London and a night at Skillsmatter.  At NDC I’m giving a talk on my organization’s experiences with automated testing and some of the technical strategies we use to get better results and more reliable tests against very enterprise-y systems.  Don’t be fooled by the word “testing” in the title, this is a very technical talk with no hint of non-coding Agile Coach “all you need is good communication” naiveté and very little process mumbo jumbo.

I’m also going to be playing straight man to Rob Ashton and Rob Conery’s snark filled shenanigans in a debate over testability on the .Net platform versus Node.js.  While many folks have already written me off (the .Net side), just remember that the Harlem Globetrotters would be no fun without the Washington Generals around.

Most exciting for me, I get to speak Thursday night, Dec. 4th, at Skillsmatter on several of the OSS projects I work on and with.  I’ll…

  • Discuss FubuMVC’s approach to modularity and how my organization exploits this to cleanly isolate feature development in large applications.
  • Demonstrate the much improved “fubu new” story for mix and match generation of full code trees
  • Briefly explain why I think RavenDb could be one of the best things to ever happen to .Net development and how we’re using it inside FubuMVC applications and our test automation harness.
  • Show how we’ve used Katana to create an efficient development server and an option for embedding FubuMVC in any .Net process.
  • Explain why in the world we went to the effort of building Ripple (http://fubuworld.com/ripple) to smooth out our early issues with using Nuget for complex dependency management.
  • Talk about some of our new tools and tricks for distributed development including the new FubuTransportation service bus. I’ll also show how we’re using multi-AppDomain support with our Bottles modularity framework to make debugging and testing easier for distributed development.
  • The 3.0 release of StructureMap is close to being released, and while IoC containers are a dime a dozen now, I’d like to share some of the hard lessons I’ve learned about usability, non-insane exception messages, performance, and useful diagnostics over the past decade of developing and supporting StructureMap.
  • And just in case you thought xUnit tools were a completely solved problem, I’d love to talk about why I’m so enthusiastic about the new Fixie testing tool (https://github.com/plioi/fixie)

This is my first time to spend any kind of significant time in London and I’m looking forward to catching up with old friends on your side of the pond and the inevitable bouts of “man, I didn’t recognize you from your twitter avatar.”


FubuMVC at MonkeySpace

I will be joining a very impressive group of speakers at this year’s MonkeySpace conference in Chicago to give a couple talks related to FubuMVC:

Exploring the FubuMVC and Bottles Ecosystem

I think that the combination of FubuMVC with Bottles represents the very best modularity solution in all of .Net and that it’s competitive with anything else out there.  In this talk I’m going to try to back up that claim with a quick demonstration of rapidly building out your application infrastructure with the existing ecosystem of drop in FubuMVC plugins (bottles). I’ll pull back the curtains and talk about the architectural decisions that made all the modularity possible and what we learned along the way.

Dependency Management in .Net OSS Development

The Fubu project ecosystem is big and growing.  For the past couple years we’ve used a combination of Nuget and TeamCity to quickly push build products and dependencies from upstream projects to downstream consumers. We ran into a lot of technical problems and limitations with just about everything we’ve ever tried to do. In this talk I’ll show you the new ripple tool (ripple is sort of to Nuget as Bundler is to gems) we’ve built and adopted to smooth out consuming and publishing Nugets across the 60 odd fubu-related repositories.  I’ll also show some concrete examples of how standardization has smoothed out the process.

For myself, I’m looking forward to Sebastien’s ReST talk, seeing what’s going on with OWIN, and making sure that every poor Microsoft attendee who crosses my path knows exactly how much pain strong naming + Nuget causes us.

I’ll be looking forward to meeting new people at MonkeySpace and catching up with friends I haven’t seen in quite a while (and getting out of the Texas summer heat for a couple days).

See you there.

Introducing FubuMVC.CodeSnippets for living documentation

TL;DR:  FubuMVC and its related projects are finally getting some documentation, and the FubuMVC.CodeSnippets library is a big part of the “how” we’re trying to make the docs easier to write and maintain

Once upon a time there was a man who worked on an open source tool named “StructureMap.”  This man spent an inordinate amount of time on his 2.5 release, crafting a comprehensive set of documentation in static html as part of that release.  Upon making the long awaited release, some unpleasant things happened:

  1. Many people didn’t like or just couldn’t derive any value out of the documentation website because of the way it was organized
  2. This man quickly realized from his own usage that many of the new APIs were awkward to use and he immediately added alternative APIs to make StructureMap 2.5+ usable
  3. It wasn’t easy to edit the big pile of html and copy/pasted code samples, making the effort even more painful — so the docs and the actual API wildly diverged and didn’t help the poor man handle user questions

Since I’d strongly prefer not to be that guy ever again, we’re putting some effort toward using “living documentation” techniques for FubuMVC, Storyteller, and StructureMap 3 to make it as easy as possible to keep the documentation in synch with the various frameworks as they evolve.  As part of that goal, we’re using the FubuMVC.CodeSnippets (check out the link, there’s real documentation developed with FubuDocs) library to “slurp” sample code snippets right out of the live code during the automated builds.  This way we can simply reuse unit test code and bits of example code running in the CI.  If the real code changes, the sample code and the unit tests would have to change too or the CI build breaks.

In a nutshell, the idea behind FubuMVC.CodeSnippets is to just add some comments into your code marking the boundaries of a named “snippet” like so:

// SAMPLE: snippet-name

// C# code in the middle.

// ENDSAMPLE

In a FubuMVC view, you can just say “I want to display the snippet named ‘snippet-name’” and the view helper for code snippets will add the raw code in a <pre> tag and use prettyprint to color code the code.
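The “slurping” itself is conceptually just a scan for those marker comments. Here’s a minimal, self-contained sketch of the idea (my own illustration, not the actual FubuMVC.CodeSnippets implementation):

```csharp
using System;
using System.Collections.Generic;

public static class SnippetScanner
{
    // Collects the lines between "// SAMPLE: name" and "// ENDSAMPLE" markers
    public static Dictionary<string, string> FindSnippets(IEnumerable<string> lines)
    {
        var snippets = new Dictionary<string, string>();
        string currentName = null;
        var body = new List<string>();

        foreach (var line in lines)
        {
            var trimmed = line.Trim();
            if (trimmed.StartsWith("// SAMPLE:"))
            {
                currentName = trimmed.Substring("// SAMPLE:".Length).Trim();
                body.Clear();
            }
            else if (trimmed == "// ENDSAMPLE" && currentName != null)
            {
                snippets[currentName] = string.Join(Environment.NewLine, body);
                currentName = null;
            }
            else if (currentName != null)
            {
                body.Add(line);
            }
        }

        return snippets;
    }
}
```

A build task can then feed each file in the source tree through FindSnippets and hand the named samples to the view helper.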


Would I use RavenDb again?

EDIT on 2/12/2016: This is almost a 3 year old post, but still gets quite a few reads. For an update, I’m part of a project called Marten that is seeking to use Postgresql as a document database that we intend to use as a replacement for RavenDb in our architecture. While I’m still a fan of most of the RavenDb development experience, the reliability, performance, and resource utilization in production has been lacking. At this point, I would not recommend adopting RavenDb for new projects.

I’m mostly finished with a fairly complicated project that used RavenDb and all is not quite well.  All too frequently in the past month I’ve had to answer the question “was it a mistake to use RavenDb?” and the question even more bruising to Jeremy’s ego: “should we scrap RavenDb and rebuild this on a different architecture?”  Long story short, we made it work and I think we’ve got an architecture that can allow us to scale later, but the past month was miserable and RavenDb and our usage of RavenDb was the main culprit.

Some Context

Our system is a problem resolution system for an automated data exchange between our company and our clients.  The data exchange has long suffered from data quality issues and hence we were tasked with building an online system to ameliorate the current manual-heavy process for resolving the data issues.  We communicate with the upstream system by receiving and sending flat files dropped into a folder (boo!).  The files can be very large, and the shape of the data is conceptually different than how our application displays and processes events in our system.  As part of processing the data we receive we have to do a fuzzy comparison to the existing data for each logical document because we don’t have any correlation identifier from the upstream system (this was obviously a severe flaw in the process, but I don’t have much control over this issue).  The challenge for us with RavenDb was that we would have to process large bursts of data that involved both heavy reads and writes.

On the read side to support the web UI, the data was very hierarchical and using a document database was a huge advantage in my opinion.

First, some Good Stuff

  • RavenDb has to be the easiest persistence strategy in all of software development to get up and running on day one.  Granted, you’ll have to change settings for production later, but you can spin up a new project using RavenDb as an embedded database and start writing an application with persistence in nothing flat.  I’ve told some of my ex-.Net/now Rails friends that I think I can spin up a FubuMVC app that uses RavenDb for persistence faster than they can with Rails and ActiveRecord.  The combination of a document database and statically typed document classes is also dramatically lower friction, in my opinion, than using statically typed domain entities with NHibernate or EF.
  • I love, love, love being able to dump and rebuild a clean database from scratch in automated testing scenarios
  • I’m still very high on document databases, especially on the read side of an application.  RavenDb might have fallen down for us in terms of writes, but there were several places where storing a hierarchical document is just so much easier than dealing with relational database joins across multiple tables
  • No DB migrations necessary
  • Being able to drop down to Lucene queries helped us considerably in the UI
  • I like the paging support in RavenDb
  • RavenDb’s ability to batch up reads was a big advantage when we were optimizing our application.  I really like the lazy request feature and the IDocumentSession.Load(array of ids) functions.
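For reference, the batched and lazy read features from that last bullet look roughly like this with the RavenDb client API (the document types, ids, and the `store` variable are invented for illustration):

```csharp
// Assumes "store" is an already-configured IDocumentStore
using (var session = store.OpenSession())
{
    // One round trip for several documents by id
    var people = session.Load<Person>("people/1", "people/2");

    // Lazy requests: queue up reads, then send them in a single request
    var lazyPerson = session.Advanced.Lazily.Load<Person>("people/3");
    var lazyOrders = session.Query<Order>()
        .Where(o => o.PersonId == "people/3")
        .Lazily();

    session.Advanced.Eagerly.ExecuteAllPendingLazyOperations();
}
```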

Memory Utilization

We had several memory usage problems that we ultimately attributed to RavenDb and its out of the box settings.  In the first case, we had to turn off all of the 2nd level caching because it never seemed to release objects, or at least not before our application fell over from OutOfMemoryExceptions.  In our case, the 2nd level cache would not have provided much value anyway except for a handful of little entities, so we just turned it off across the board.  I think I would recommend that you only use caching with a whitelist of documents.

Also be aware that the implementations of IDocumentSession seem to be very much optimized for short transactions with limited activity at any one time.  Unfortunately, we were almost a batch-driven system, and our logical transactions became quite large and potentially involved a lot of reads against contextual information.  After examining our application with a memory profiler, we determined that IDocumentSession was hanging on to data we had only read.  We solved that issue by explicitly calling Evict() to remove objects from an IDocumentSession’s cache.
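The eviction fix looks like this with the session API (the entity type, id, and `store` variable are invented for illustration):

```csharp
// Assumes "store" is an already-configured IDocumentStore
using (var session = store.OpenSession())
{
    var reference = session.Load<ReferenceData>("references/1");

    // ... use the contextual data while processing the batch ...

    // Release it from the session's identity map so the unit of work
    // doesn't keep holding data we only ever read
    session.Advanced.Evict(reference);
}
```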

Don’t Abstract RavenDb Too Much

To be blunt, I really don’t agree with many of Ayende’s opinions about software development, but in regards to abstractions for RavenDb you have to play by his rules.  We have a fubu project named FubuPersistence that adds common persistence capabilities like multi-tenancy and soft deletes on top of RavenDb in an easy to use way.  That’s great and all, but we had to throw a lot of that goodness away because you so frequently have to get down to the metal with RavenDb to either tighten up performance or avoid stale data.  We were able to happily spin up a database on the fly for testing scenarios, so you might look to do that more often than trying to swap out RavenDb for mocks, stubs, or 100% in-memory repositories.  Those tests are still slower than what you’d get with mocks or stubs, but you don’t have any choice when you start having to muck with RavenDb’s low level APIs.

Bulk Inserts

I think RavenDb is weak in terms of dealing with large batches of updates or inserts.  We tried using the BulkInsert functionality, and while it was a definite improvement in performance, we found it to be buggy and probably just immature (it is a recent feature).  We first hit problems with map/reduce operations not finishing after processing a batch.  We updated to a later version of RavenDb (2330), then had to retreat back to our original version (2230) with problems using Windows authentication in combination with the BulkInsert feature.  We saw the same issues with the edge version of RavenDb as well.  We also noticed that BulkInsert did not seem to honor the batch size settings and had several QA bugs under load because of this.  We eventually solved the BulkInsert problems by sending batches of 200 documents for processing through our service bus and putting retry semantics around the BulkInsert to get around occasional hiccups.
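Our eventual workaround was conceptually this simple. A hedged sketch (our real batching and retry lived in the service bus rather than an inline loop like this, and `store`/`documents` are assumptions for the example):

```csharp
// Naive sketch: send documents in batches of 200 with retry around BulkInsert
const int batchSize = 200;
for (var i = 0; i < documents.Count; i += batchSize)
{
    var batch = documents.Skip(i).Take(batchSize).ToList();
    var attempts = 0;
    while (true)
    {
        try
        {
            using (var bulk = store.BulkInsert())
            {
                foreach (var doc in batch) bulk.Store(doc);
            }
            break; // this batch succeeded
        }
        catch (Exception)
        {
            if (++attempts >= 3) throw; // give up after a few hiccups
            Thread.Sleep(500);          // brief pause, then retry the whole batch
        }
    }
}
```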

The Eventual Consistency Thing

If you’re not familiar with Eventual Consistency and its implications, you shouldn’t even dream of putting a system based on RavenDb into production.  The key with RavenDb is that query/command separation is pretty well built in.  Writes are transactional, and reads by the document id will always give you the latest information, but other queries execute against indexes that are built in background threads as a result of writes.  What this means to you is a chance of receiving stale results from queries against anything but a document id.  There’s real rationale behind this decision, but it’s still a major complication in your life with RavenDb.

With our lack of correlation identifiers from upstream, we were forced to issue a lot of queries against “natural key” data and we frequently ran into trouble with stale indexes in certain circumstances.  Depending on circumstances, we fixed or prevented these issues by:

  • Introducing a static index instead of relying on dynamic indexes.  I think I’d push you to try to use a static index wherever possible.
  • Judiciously using the WaitForNonStaleResults****** methods.  Be careful with this one though, because it can have negative repercussions as well
  • In a few cases we introduced an in-memory cache for certain documents.  You *might* be able to utilize the 2nd level cache instead
  • In another case or two, we switched from using surrogate keys to using natural keys because you always get the latest results when loading by the document id.  User and login documents are the examples of this that I remember offhand.

The stale index problem is far more common in automated testing scenarios, so don’t panic when it happens.
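In code, those mitigations look roughly like this with the 2.x client API (the index class, document types, and ids are placeholders for the example):

```csharp
// Query a static index instead of relying on a dynamic one
var matches = session.Query<Person, People_ByNaturalKey>()
    // Judiciously (and carefully) wait out the indexing when you must
    .Customize(x => x.WaitForNonStaleResultsAsOfNow(TimeSpan.FromSeconds(5)))
    .Where(p => p.LastName == "Aaron" && p.FirstName == "Hank")
    .ToList();

// Loads by document id always see the latest writes, so natural-key
// document ids sidestep staleness entirely
var user = session.Load<User>("users/hank.aaron");
```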

Conclusion

I’m still very high on RavenDb’s future potential, but there’s a significant learning curve you need to be aware of.  The most important thing to know about RavenDb in my opinion is that you can’t just use it, you’re going to have to spend some energy and time learning how it works and what some of the knobs and levers are because it doesn’t just work.  On one hand, RavenDb has several features and capabilities that an RDBMS doesn’t and you’ll want to exploit those abilities.  On the other hand, I do not believe that you can get away with using RavenDb with all of its default settings on a project with larger data sets.

Honestly, I think the single biggest problem on this project was not doing the heavy load testing earlier instead of at the last moment, but everybody involved with the project has already hung their heads in shame over that one and vowed never to do that again.  Doing something challenging and doing something challenging right up against a deadline are two very different things.  It is my opinion that while we did struggle with RavenDb, we would have had at least some struggle to optimize the performance if we’d built with an RDBMS, and the user interface would have been much more challenging.

Knowing what I know now, I think it’s 50/50 that I would use RavenDb for a similar project again.  If they get their story fixed for bigger transactions though, I’m all in.

Big Update on StructureMap 3.0 Progress

I can finally claim some very substantial progress on StructureMap 3.0 today.  For a background on the goals and big changes for the 3.0 release, see Kicking off StructureMap 3 from last year and some additions from last month when I started again.  As of today, StructureMap 3.0 development is in the master branch in GitHub.  If you need to get at StructureMap 2.6 level code, use the TwoSix branch.

What’s been done?

  1. I removed the strong naming.
  2. All the old [Obsolete] API methods have been removed
  3. The registration API has been greatly streamlined and there’s much more consistency internally now
  4. The nested container implementation has been completely redone.  It’s much simpler, should be much faster because it’s doing much less on setup, and the old lifecycle confusion between the parent and nested container problems have been fixed.
  5. The “Profile” functionality has been completely redesigned and rebuilt.  It’s also much more capable now than it was before.
  6. The container spinup time *should* be much better because there’s so much less going on and a lot more decision making is done in a lazy way with memoization along the way.  Lazy<T> FTW!
  7. There are many more runtime “figure out what I could do” type possibilities now
  8. You can apply lifecycle scoping Instance by Instance instead of only at the PluginType level.  That’s been a big gripe for years.
  9. The Xml configuration has been heavily streamlined
  10. The old [PluginFamily] / [Pluggable] attributes have been completely ripped out
  11. Internally, the old PipelineGraph, InstanceFactory, ProfileManager architecture is all gone.  The new PipelineGraph implementations just wrap one or more PluginGraph objects, so there’s vastly less data structure shuffling going on internally.

What’s left to do?

I’ve transcribed my own notes about outstanding work (minus the documentation) to the GitHub issues page.  There are a few items that are going to need some serious forethought, but I think the biggest architectural changes are already done and that list is starting to be more of a punchlist.  I would dearly love any kind of help, design input, additions, or feedback on the outstanding work.  If you’re inclined to get involved and tackle some of the issues, I tried to label the issues for the effort level.

If you think of the issues as picking a sword fight, the tags line up like this:

  1. “Easy Fix” – Facing a sheepherder who probably stole that heron mark blade he’s carrying
  2. “Medium Effort” – Fighting a Trolloc
  3. “Architectural Level Change” – Fade.  I will likely need to be involved with any of these

Fairly soon, I’ll be making a call for folks to try out a prerelease version of StructureMap 3 in their existing applications.  As part of that effort, I’d really like to get some feedback about the observed performance and see if we can beat on it enough to find any memory leak issues.

If you or someone you know is a multi-threading guru, I’d probably be interested in talking through some things with you in the codebase.

Docs?  Someday?  Maybe?

Hopefully someday soon.  The FubuMVC core team will be relaunching a completely new website sometime in the next couple years with our own implementation of a readthedocs style infrastructure.  I’m planning on making the new StructureMap documentation part of that website.  Documentation will be in git where it’ll be easy to take in pull requests for additions and corrections, and you’ll be able to use either Html or Markdown for the content.  We’ve already got a working mechanism to “slurp” code samples live out of a source code tree and put them into the web pages with formatting via pretty print to achieve “living” documentation this time around.


Last Thoughts

I haven’t paid attention to any of the “IoC Container Performance Shootout!” type blog posts in a long time, but StructureMap used to routinely come in well ahead of the other full-featured IoC containers in terms of performance (tools like Funq shouldn’t be considered apples to apples with StructureMap/Windsor/Ninject/Autofac/whatever; if you don’t support auto-wiring, rich lifecycle support, and maybe even interception, I say you don’t count as full-featured).  However, as I’ve torn into the StructureMap codebase with an eye towards better performance for the first time in years, I’ve found a scary amount of performance-killing cruft code.  My final thought is that as bad as the StructureMap code was (and trust me, it was), if it’s really faster than the other IoC containers, then what does that say about their code internals at that time?  😉

Big Proposed Changes for StructureMap 3

Just trying to round up more feedback as I go, here’s a handful of discussions I’ve started on the big proposed changes for StructureMap 3:

Please feel free to chime in here, twitter, or the list on any of these topics or any other thing you want for StructureMap 3.


Thanks,

Jeremy

A Simple Example of a Table Driven Executable Specification

My shop is starting to go down the path of executable specifications (using Storyteller2 as the tooling, but that’s not what this post is about).  As an engineering practice, executable specifications* involves specifying the expected behavior of a user story with concrete examples of exactly how the system should behave before coding.  Those examples will hopefully become automated tests that live on as regression tests.

What are we hoping to achieve?

  • Remove ambiguity from the requirements with concrete examples.  Ambiguity and misunderstandings from prose based requirements and analysis have consistently been a huge time waste and source of errors throughout my career.
  • Faster feedback in development.  It’s awfully nice to just run the executable specs in a local branch before pushing anything to the testers
  • Find flaws in domain logic or screen behavior faster, and this has been the biggest gain for us so far
  • Creating living documentation about the expected behavior of the system by making the specifications human readable
  • Building up a suite of regression tests to make later development in the system more efficient and safer

Quick Example

While executable specifications are certainly a very challenging practice from the technical side of things, in the past week or so I’m aware of 3-4 scenarios where the act of writing the specification tests has flushed out problems with our domain logic or screen behavior a lot faster than we could have done otherwise.

Part of our application logic involves fuzzy matching of people in our system against some, ahem, not quite trustworthy data from external partners. Our domain expert explained that the matching logic he wanted was to match a person’s social security number, birth date, first name, and last name — but the name matching should be case insensitive and it’s valid to match on the initial of the first name.  Since this logic can be expressed as a set of inputs and a single output with a great number of permutations, I chose to express this specification as a table with Storyteller (conceptually identical to the old ColumnFixture in FitNesse).  The final version of the spec is shown below:

[Image: the final, approved person matching specification table]

The image above is our final, approved version of this functionality that now lives as both documentation and a regression test.  Before that though, I wrote the spec and got our domain expert to look at it, and wouldn’t you know it, I had misunderstood a couple assumptions and he gave me very concrete feedback about exactly what the spec should have been.

To make this just a little bit more concrete, our Storyteller test harness connects the table inputs to the system under test with this little bit of adapter code:

The code behind the executable spec
    public class PersonFixture : Fixture
    {
        public PersonFixture()
        {
            Title = "Person Matching Logic";
        }

        [ExposeAsTable("Person Matching Examples")]
        [return: AliasAs("Matches")]
        public bool PersonMatches(
            string Description,
            [Default("555-55-5555")] SocialSecurityNumber SSN1,
            [Default("Hank")] string FirstName1,
            [Default("Aaron")] string LastName1,
            [Default("01/01/1974")] DateCandidate BirthDate1,
            [Default("555-55-5555")] SocialSecurityNumber SSN2,
            [Default("Hank")] string FirstName2,
            [Default("Aaron")] string LastName2,
            [Default("01/01/1974")] DateCandidate BirthDate2)
        {
            var person1 = new Person
            {
                SSN = SSN1,
                FirstName = FirstName1,
                LastName = LastName1,
                BirthDate = BirthDate1
            };
            var person2 = new Person
            {
                SSN = SSN2,
                FirstName = FirstName2,
                LastName = LastName2,
                BirthDate = BirthDate2
            };
            return person1.Equals(person2);
        }
    }

* Jeremy, is this really just Behavior Driven Development (BDD)?  Or the older idea of Acceptance Test Driven Development (ATDD)?  This is some folks’ definition of BDD, but BDD is so overloaded and means so many different things to different people that I hate using the term.  ATDD never took off, and “executable specifications” just sounds cooler to me, so that’s what I’m going to call it.