My shop is starting to go down the path of executable specifications (using Storyteller2 as the tooling, but that’s not what this post is about). As an engineering practice, executable specifications* means specifying the expected behavior of a user story with concrete examples of exactly how the system should behave before coding begins. Those examples will hopefully become automated tests that live on as regression tests.
What are we hoping to achieve?
- Remove ambiguity from the requirements with concrete examples. Ambiguity and misunderstandings in prose-based requirements and analysis have consistently been a huge time waste and source of errors throughout my career.
- Get faster feedback in development. It’s awfully nice to just run the executable specs in a local branch before pushing anything to the testers.
- Find flaws in domain logic or screen behavior faster. This has been the biggest gain for us so far.
- Create living documentation about the expected behavior of the system by making the specifications human-readable.
- Build up a suite of regression tests to make later development in the system more efficient and safer.
While executable specifications are certainly a challenging practice on the technical side, in just the past week or so I can point to three or four scenarios where the act of writing the specification tests flushed out problems with our domain logic or screen behavior much faster than we could have found them otherwise.
Part of our application logic involves fuzzy matching people in our system against some, ahem, not entirely trustworthy data from external partners. Our domain expert explained the matching logic he wanted: match on a person’s social security number, birth date, first name, and last name — but the name matching should be case-insensitive, and it’s valid to match on just the initial of the first name. Since this logic can be expressed as a fixed set of inputs and a single output with a great number of permutations, I chose to express this specification as a table with Storyteller (conceptually identical to the old ColumnFixture in FitNesse). The final version of the spec is shown below (click the image to get a more readable version):
The image above is our final, approved version of this functionality, and it now lives as both documentation and a regression test. Before that, though, I wrote the spec and got our domain expert to look at it, and wouldn’t you know it, I had misunderstood a couple of assumptions and he gave me very concrete feedback about exactly what the spec should have been.
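To give a feel for the rules the domain expert described, here’s a minimal sketch of the matching logic in plain Python. Our real implementation is .NET code sitting behind the Storyteller table, so the type, field names, and the exact handling of single-letter first names below are all my assumptions, not our production code:

```python
from dataclasses import dataclass

@dataclass
class Person:
    ssn: str
    birth_date: str  # ISO yyyy-mm-dd, for simplicity in this sketch
    first_name: str
    last_name: str

def is_match(ours: Person, theirs: Person) -> bool:
    """Sketch of the fuzzy-match rule from the post: SSN and birth date
    must match exactly; last name matches case-insensitively; first name
    matches case-insensitively, and matching on just the initial counts."""
    if ours.ssn != theirs.ssn or ours.birth_date != theirs.birth_date:
        return False
    if ours.last_name.lower() != theirs.last_name.lower():
        return False
    a, b = ours.first_name.lower(), theirs.first_name.lower()
    if not a or not b:
        return False
    if a == b:
        return True
    # Assumption: an "initial match" means one side supplied only an initial
    return (len(a) == 1 or len(b) == 1) and a[0] == b[0]
```

Each row of the Storyteller table corresponds to one call like `is_match(ours, theirs)` with an expected true/false result.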
To make this just a little bit more concrete, our Storyteller test harness connects the table inputs to the system under test with this little bit of adapter code:
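To show the shape of what such an adapter does — without reproducing Storyteller’s actual API — here is a hypothetical table-driven harness in plain Python. Everything here (the row format, the `check_rows` helper, the stand-in system under test) is invented for illustration; Storyteller’s real fixture code looks nothing like this syntactically, but it plays the same role of feeding each table row’s inputs to the system and comparing the result against the expected output:

```python
def check_rows(rows, system_under_test):
    """Run every spec-table row against the system; return the failures."""
    failures = []
    for row in rows:
        inputs = {k: v for k, v in row.items() if k != "expected"}
        actual = system_under_test(**inputs)
        if actual != row["expected"]:
            failures.append((inputs, row["expected"], actual))
    return failures

# A trivial stand-in system under test: the first-name part of the match rule
def first_name_matches(first_name, candidate):
    a, b = first_name.lower(), candidate.lower()
    return a == b or ((len(a) == 1 or len(b) == 1) and a[:1] == b[:1])

# Rows mirror lines of the spec table: inputs plus an expected result
rows = [
    {"first_name": "Jeremy", "candidate": "jeremy", "expected": True},
    {"first_name": "Jeremy", "candidate": "J",      "expected": True},
    {"first_name": "Jeremy", "candidate": "John",   "expected": False},
]
```

An empty result from `check_rows(rows, first_name_matches)` means every row of the spec passed.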
* Jeremy, is this really just Behavior Driven Development (BDD)? Or the older idea of Acceptance Test Driven Development (ATDD)? This is some folks’ definition of BDD, but BDD is so overloaded and means so many different things to different people that I hate using the term. ATDD never took off, and “executable specifications” just sounds cooler to me, so that’s what I’m going to call it.