tl;dr: The open source Storyteller 3 is an all new version of an old tool that my shop (and others) use for customer facing acceptance tests, large scale test automation, and “living documentation” generation for code-centric systems.
A week from today I’m giving a talk at .Net Unboxed on Storyteller 3, an open source tool largely built by myself and my colleagues for creating, running, and managing Executable Specifications against .Net projects, based on what we feel are best practices distilled from over a decade of working with automated integration testing. As I’ll argue in my talk and subsequent blog posts, I believe Storyteller 3 is the most complete approach in the .Net ecosystem for reliably and economically writing large scale automated integration tests.
My company and a couple of other early adopters have been using Storyteller for daily work since June, and the feedback has been pleasantly positive so far. Now is as good a time as any to make a public beta release for the express purpose of getting more feedback, so we can continue to improve the tool prior to an official 3.0 release in January.
If you’re interested in kicking the tires on Storyteller, the latest beta as of now is 22.214.171.1249-alpha available on Nuget.org. For help getting started, see our tutorial and getting started pages.
- Shockingly for something I work on, the documentation site for Storyteller is fairly comprehensive – but please tell us about anything you think is missing or confusing
- Storyteller allows you to express automated tests in language that can be easily read and reviewed by your business partners, testers, and analysts
- Storyteller 3 includes a completely new user interface for interactively authoring and executing specifications written as a self-hosted web application with vastly better usability than the previous incarnations (admittedly a very low bar)
- Built-in and extensible instrumentation and performance diagnostics that give our users quite a bit of insight into how their systems are behaving and why complicated tests may be failing
- It’s not sexy at all, but Storyteller’s secondary feature is tooling inspired by readthedocs to efficiently author and maintain “living documentation” for code-centric systems. The Storyteller documentation site and the new StructureMap 3/4 website were both authored with Storyteller (you don’t have to use the same theme that I did for the documentation, I’m just lazy and copied it).
The tool has improved a lot since then, but the talk I gave at work in March previewing Storyteller 3 at least discusses the goals and philosophy behind the tool and Storyteller’s approach to acceptance and integration testing.
A Brief History
I had a great time at Codemash this year catching up with old friends. While there, I was pleasantly surprised to be asked several times about the state of Storyteller, an OSS project others had originally built in 2008 as a replacement for FitNesse as our primary means of expressing and executing automated, customer facing acceptance tests. Frankly, I always thought that Storyteller 1 and the incrementally better Storyteller 2 were failures in terms of usability, and I was so burnt out on working with the tool that I had largely given up on it and ignored it for years.
Unfortunately, my shop has a large investment in Storyteller tests, and our largest and most active project was suffering from heinously slow and unreliable Storyteller regression suites that probably caused more harm than good given their support costs. After a big town hall meeting to decide whether to scrap Storyteller and replace it with something else, we instead decided to try to improve Storyteller so we could avoid rewriting all of our tests. The result is an effective rewrite of Storyteller with an all new client. While we tried hard to preserve backward compatibility with the previous version’s public API’s, the .Net engine is also a near rewrite, done in order to squeeze out as much performance and responsiveness as we could.
The official 3.0 release will happen in early January to give us a chance to gather more early user feedback and get a few more improvements in place. You can see the currently open issue list on GitHub. The biggest outstanding items on our roadmap are:
- Modernize the client technology to React.js v0.14 and introduce Redux and possibly RxJS as a precursor to any big improvements to the user interface, and to improve the performance of the user interface with big specification suites
- A “step through” mode in the interactive specification running, so users can step through a specification the way they would in a debugger
- The big one: allow users to author the actual specification language in the user interface editor, with some mechanics to attach that language to the actual test support code later
9 thoughts on “Storyteller 3: Executable Specifications and Living Documentation for .Net”
So my main question about Storyteller is how we can use it with a CI tool like TeamCity. Our FIT tests (not FitNesse) currently run in a custom console application that spits out TeamCity-specific indications of which tests pass and fail, with the resulting HTML outputs as artifacts so I can see exactly what happened to the failing tests. Can I do the same with Storyteller?
Storyteller wouldn’t be very useful if it didn’t have CI support ;) See the “st run” command described in http://storyteller.github.io/documentation/ci/. It’s just a command line call that drops a single HTML file with all the results (a filterable summary table plus drill down into individual specs). We run that from our TeamCity builds and just keep the HTML file as an artifact. You can add a tab to the CI results for that file, or happily open it from the artifacts links.
So do you get any indication that the tests pass or fail without viewing the HTML document? For the FIT tests, I basically wrote my own FIT test runner using the FIT library as a console application that outputs the test tags TeamCity integrates with. That way I can see (and be informed by TeamCity of) the failing tests as build failures (exactly like what happens when NUnit tests fail), so I can see which tests are failing even while things are still running. I can also, at a glance, look at my entire bank of builds and see what is working and what is failing.
The build would pass or fail based on the exit code of “st run”, and yes, there is a flag to spit out the TeamCity console output that it uses to report test progress and failures.
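The exchange above can be sketched as a minimal CI shell step. This is a sketch under stated assumptions, not a definitive invocation: the `st` function below is a stub standing in for the real Storyteller runner (on a real build agent you would call the packaged `st.exe run` with your spec project path, per the CI docs linked above), and `src/MySpecs` and the service-message text are invented for illustration. Only the pattern matters: the runner's exit code drives build pass/fail, and the HTML results file is kept as an artifact.

```shell
# Stub standing in for the real Storyteller runner so this sketch runs anywhere.
# On a build agent this would be the packaged st.exe -- see the Storyteller CI
# docs for the real command and flags; the name and output here are invented.
st() {
  # TeamCity-style service message (illustrative), then a failing exit code
  echo "##teamcity[testFailed name='SampleSpec']"
  return 1
}

# The CI server only needs the exit code to pass or fail the build;
# the HTML results file would be kept as a build artifact for drill-down.
if st run src/MySpecs; then
  echo "SPECS PASSED"
else
  echo "SPECS FAILED - see the results HTML artifact"
fi
```

Because the runner reports failure through its exit code, any build server (TeamCity, Jenkins, etc.) can fail the build without parsing the HTML at all; the console service messages are only needed for live, per-test progress reporting.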