
Storyteller 4.0 is Out!

Storyteller is a long-running project for authoring human-readable, executable specifications for .Net projects. The new 4.0 release is meant to make Storyteller easier for non-technical folks to use and consume, and to improve developers’ ability to troubleshoot specification failures.

After about 5 months of effort, I was finally able to cut the 4.0 Nugets for Storyteller this morning, along with the very latest documentation updates. If you’re completely new to Storyteller, check out our getting started page or this webcast. If you’re coming from Storyteller 3.0, just know that you will need to first convert your specifications to the new 4.0 format. The Storyteller Fixture API had no breaking changes, but the bootstrapping steps are a little bit different to accommodate the dotnet CLI.

You can see the entire list of changes here, but the big highlights of this release are:

  • CoreCLR Support! Storyteller 4.0 can be used on either .Net 4.6 projects or projects that target the CoreCLR, making Storyteller a cross-platform tool. You can read more about my experiences migrating Storyteller to the CoreCLR here.
  • Embraces the dotnet CLI. I love the new dotnet CLI and wish we’d had it years ago. There is a new “dotnet storyteller” CLI extensibility package that takes the place of the old ST.exe console tool from 3.0 and should be easier to set up for new users.
  • Markdown Everywhere! Storyteller 4.0 changed the specification format to a Markdown-plus format, added a new capability to design and generate Fixtures with markdown, and you can happily use markdown text as prose within specifications to improve your ability to communicate intentions.
  • Stepthrough Mode. Integration tests can be very tricky to debug when they fail. To ease the load, Storyteller 4.0 adds the new Stepthrough mode that allows you to manually walk through all the steps of a Storyteller specification so you can examine the current state of the system under test as an aid in troubleshooting.
  • Asynchronous Grammars. It’s increasingly an async-first kind of world, so Storyteller follows suit to make it easier for you to test asynchronous code (see the sketch after this list).
  • Performance Assertions. Storyteller already tracks some performance data about your system as specifications run, so why not extend that to applying assertions about expected performance that can fail specifications on your continuous integration builds?
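To make the async grammar idea concrete, here’s a minimal sketch of the shape an asynchronous grammar can take: a Fixture method that returns a Task so the engine can await it before moving on to the next step. The UserFixture and UserService names below are invented for illustration, and the details may differ from the real Fixture API:

using System.Threading.Tasks;
using StoryTeller;

public class UserFixture : Fixture
{
    // A Task-returning grammar method; the engine awaits the Task
    // before executing the next step of the specification
    public async Task CreateUser(string name)
    {
        await new UserService().CreateAsync(name);
    }
}

// Stand-in for whatever asynchronous code your system under test exposes
public class UserService
{
    public Task CreateAsync(string name)
    {
        return Task.Delay(10);
    }
}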

 

Other Things Coming Soon(ish)

  • A helper library for using Storyteller with ASP.Net Core applications, with some help from Alba. I’m hoping to recreate for our newer ASP.Net Core projects some of the diagnostics integration we have today between Storyteller and our FubuMVC applications at work.
  • A separate package of Selenium helpers for Storyteller
  • An extension specifically for testing relational database code
  • A 4.1 release with the features I didn’t get around to in 4.0;)

 

How is Storyteller Different than Gherkin Tools?

First off, can we just pretend for a minute that Gherkin/Cucumber tools like SpecFlow may not be the absolute last word for automating human readable, executable specifications?

By this point, I think most folks associate any kind of acceptance test driven development or truly business-facing Behavior Driven Development with the Gherkin approach — and it’s been undeniably successful. Storyteller, on the other hand, was much more influenced by FitNesse and could accurately be described as a much improved evolution of the old FIT model.

SpecFlow is the obvious comparison for Storyteller and by far the most commonly used tool in the .Net space. The bottom line for me is that Storyteller is far more robust technically in how you can approach the automated testing side of the workflow. SpecFlow might handle the business/testing-to-development workflow a little better (though I’d dispute even that with the release of Storyteller 4.0), but Storyteller has much, much more functionality for instrumenting, troubleshooting, and enforcing performance requirements of your specifications. I strongly believe that Storyteller allows you to tackle much more complex automated testing scenarios than other options.

Here is a more detailed list about how Storyteller differs from SpecFlow:

  • Storyteller is FOSS. So on one hand, you don’t have to purchase any kind of license to use it, but you’ll be dependent upon the Storyteller community for support.
  • Instead of parsing human written text and trying to correlate that to the right calls in the code, Storyteller specifications are mostly captured as the input and expected output. Storyteller specifications are then “projected” into human readable HTML displays.
  • Storyteller is much more table-centric than Gherkin, with quite a bit of functionality for set-based assertions and test data input.
  • Storyteller has a much more formal mechanism for governing the lifecycle of your system under test with the specification harness rather than depending on an application being available through other means. I believe that this makes Storyteller much more effective at development time as you cycle through code changes when you work through specifications.
  • Storyteller does not enforce the “Given/When/Then” verbiage in your specifications and you have much more freedom to construct the specification language to your preferences.
  • Storyteller has a user interface for editing specifications and executing specifications interactively (all React.js based now). The 4.0 version makes it much easier to edit the specification files directly, but the tool is still helpful for execution and troubleshooting.
  • We do not yet have direct Visual Studio.Net integration like SpecFlow (and I’m somewhat happy to let them have that one;)), but we will develop a dotnet test adapter for Storyteller when the dust settles on the VS2017/csproj churn.
  • Storyteller has a lot of functionality for instrumenting your specifications that’s been indispensable for troubleshooting specification failures and even performance problems. The built in performance tracking has consistently been one of our most popular features since it was introduced in 3.0.

 

An Experience Report of Moving a Complicated Codebase to the CoreCLR

TL;DR – I like the CoreCLR, project.json project system, and the new dotnet CLI so far, but there are a lot of differences in API that could easily burn you when you try to port existing .Net code to the CoreCLR.

As I wrote about a couple weeks ago, I’ve been working to port my Storyteller project to the CoreCLR en route to it being cross platform and generally modernized. As of earlier this week I think I can declare that it’s (mostly) running correctly in the new world order. Moreover, I’ve been able to dogfood Storyteller’s documentation generation feature on Mac OSX today without much trouble so far.

As semi-promised, here’s my experience report of moving an existing codebase over to targeting the CoreCLR, the usage of the project.json project system, and the new dotnet CLI.

 

Random Differences

  • AppDomain is gone, and you’ll have to use a combination of AppContext or DependencyContext to replace some of the information you’ve gotten from AppDomain.CurrentDomain about the running process. This is probably an opportunity for polyfills (see the sketch after this list).
  • The Thread class is very different, and a couple of methods (Yield(), Abort()) were removed. This will probably eventually force me to drop down to P/Invoke in Storyteller.
  • A lot of convenience methods that were probably just syntactic sugar anyway have been removed. I’ve found differences in the Xml support and with Streams. Again, I’ve gotten around this by adding extension methods to the Netstandard/CoreCLR code to add back some of those conveniences.
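To make the first bullet concrete, here’s a small sketch of the kind of polyfill I have in mind, trading AppDomain.CurrentDomain.BaseDirectory for AppContext.BaseDirectory on the CoreCLR. The NET46 compilation symbol and the DirectoryFinder name are just assumptions for the example:

using System;

public static class DirectoryFinder
{
    // Returns the base directory of the running process on either runtime
    public static string ApplicationBaseDirectory()
    {
#if NET46
        // Classic .Net
        return AppDomain.CurrentDomain.BaseDirectory;
#else
        // The CoreCLR replacement for the AppDomain information
        return AppContext.BaseDirectory;
#endif
    }
}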

 

Project.json May be Just a Flash in the Pan, but it’s a Good One

Having one tiny file that controls how a library is compiled, its Nuget dependencies, and how that library is packed up into a Nuget package has been a huge win. Better yet, I really appreciate how the project.json system handles transitive dependencies so you don’t have to do so much bookkeeping within your downstream projects. At this point I even prefer the project.json + dotnet restore combination over the Nuget workflow we have been using with Paket (which I still think was much better than out-of-the-box Nuget).

I’m really enjoying the wildcard file inclusions in project.json so you’re not constantly dealing with the aggravation of merging the Xml-based csproj files. It’s a little bit embarrassing that it’s taken Microsoft so long to address that problem, but I’ll take it now.

I really hope that the new, trimmed down csproj file format is as usable as project.json. Honestly, assuming that their promised automatic conversion works as advertised, I’d recommend going to project.json as an interim solution rather than waiting.

 

I Love the Dotnet CLI

I think the new dotnet CLI is going to be a huge win for .Net development and it’s maybe my favorite part of .Net’s new world order. I love being able to so quickly restore packages, build, run tests, and even pack up Nuget files without having to invest time in writing out build scripts to piece it all together.

I’ve long been a fan of using Rake for automating build scripts and I’ve resisted the calls to move to an arbitrarily different Make clone. With some of my simpler OSS projects, I’m completely forgoing build scripts in favor of just depending on the dotnet CLI commands. For example, I have a small project called Oakton that uses the dotnet CLI, and its entire CI build script is just:

rmdir artifacts
dotnet restore src
dotnet test src/Oakton.Testing
dotnet pack src/Oakton --output artifacts --version-suffix %build.number%

In Storyteller itself, I removed the Rake script altogether in favor of just a small shell script that delegates to both NPM and dotnet to do everything that needs to be done.

I’m also a fan of the “dotnet test” command, especially when you want to quickly run the tests for just one .Net framework version. I don’t know if it’s the test adapter or the CoreCLR itself being highly optimized, but I’ve been seeing a dramatic improvement in test execution time since switching over to the dotnet CLI. In Marten, I think it somehow cut the test execution time of the main suite by 60-70%.

The best source I’ve found on the dotnet CLI has been Scott Hanselman’s blog posts on DotNetCore.

 

AppDomain No Longer Exists (For Now)

AppDomains getting ripped out of the CoreCLR (yes, I know they’re supposed to come back in Netstandard 2.0, but who knows when that’ll be) was the single biggest problem I had moving Storyteller over to the CoreCLR. I outlined the biggest problem in a previous post on how testing tools generally use AppDomains to isolate the system under test from the test harness itself.

I ended up having Storyteller spawn a separate process to run the system under test so that users can rebuild that system without having to first shut down the Storyteller specification editor tool. The first step was to replace the little bit of Remoting I had been using between AppDomains with a different communication scheme that just shot JSON back and forth over sockets. Fortunately, I had already been doing something very similar through a remoting proxy, so it wasn’t that bad of a change.

The next step was to convert Storyteller testing projects from a class library to an executable that could be invoked to start up the system under test and begin listening on a supplied port for the JSON messages described in the previous paragraph. This was a lot of work, but it might end up being more usable. Instead of depending on something else to have pre-compiled the system under test, Storyteller can start up the system under test with a spawned call to “dotnet run” that does any necessary compilation for you. It also makes it pretty easy to direct Storyteller to run the system under test under different .Net versions.
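Stripped of all the Storyteller specifics, the listening side of that scheme looks something like this sketch. It isn’t Storyteller’s actual bootstrapping code, just the shape of an executable that takes a port number and reads line-delimited JSON messages from the test harness:

using System.IO;
using System.Net;
using System.Net.Sockets;

public static class Program
{
    public static void Main(string[] args)
    {
        // The Storyteller runner supplies the port when it spawns "dotnet run"
        var port = int.Parse(args[0]);

        var listener = new TcpListener(IPAddress.Loopback, port);
        listener.Start();

        using (var client = listener.AcceptTcpClient())
        using (var reader = new StreamReader(client.GetStream()))
        {
            // Each line is a JSON message from the test harness
            string json;
            while ((json = reader.ReadLine()) != null)
            {
                // deserialize and dispatch to the system under test here
            }
        }
    }
}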

Of course, the old System.Diagnostics.Process class behaves differently in the CoreCLR (and across platforms too), and that’s still causing me some grief.

 

Reflection Got Scrambled in the CoreCLR

So, that sucked…

Infrastructure or testing tools like Storyteller frequently need to use a lot of reflection. Unfortunately, the System.Reflection namespace in the CoreCLR has a very different API than classic .Net, and that has consistently been a cause of friction as I’ve ported code to the CoreCLR. The challenge is even worse if you’re trying to target both classic .Net and the CoreCLR.

Here’s an example: in classic .Net I can check whether a Type is an enumeration type with “type.IsEnum”. In the CoreCLR, it’s “type.GetTypeInfo().IsEnum”. Not that big a change, but basically anything you need to do against a Type is now on the paired TypeInfo, and you have to bounce through Type.GetTypeInfo() to get there.

One way or another, if you want to multi-target both CoreCLR and .Net classic, you’ll be picking up the idea of “polyfills” you see all over the Javascript world. The “GetTypeInfo()” method doesn’t exist in .Net 4.6, so you might do:

// Polyfill so that .Net 4.6 code can make the same
// type.GetTypeInfo() calls as the CoreCLR code
public static Type GetTypeInfo(this Type type)
{
    return type;
}

to make your .Net 4.6 code look like the CoreCLR equivalent. Or in some cases, I’ve just built polyfill extension methods in the CoreCLR to make it look like the older .Net 4.6 API:

#if !NET45
// Polyfill to give the CoreCLR code the older .Net 4.6 signature
public static IEnumerable<Type> GetInterfaces(this Type type)
{
    return type.GetTypeInfo().GetInterfaces();
}
#endif

Finally, you’re going to have to dip into conditional compilation fairly often like this sample from StructureMap’s codebase:

public static class AssemblyLoader
{
    public static Assembly ByName(string assemblyName)
    {
#if NET45
        return Assembly.Load(assemblyName);
#else
        // The Assembly.Load(string) overload doesn't exist in the CoreCLR
        return Assembly.Load(new AssemblyName(assemblyName));
#endif
    }
}

There are other random changes as well that I’ve bumped into, especially around generics and Assembly loading. Again, not something that the average codebase is going to get into, but if you try to do any kind of type scanning over the file system to auto-discover assemblies at runtime, expect some churn when you go to the CoreCLR.

If you’re thinking about porting some existing .Net code to the CoreCLR and it uses a lot of reflection, be aware that that’s potentially going to be a lot of work to convert over.

Definitely see Porting a .Net Framework Library to .Net Core from Michael Whelan.

Storyteller 3.0 Official Release — and on to 4.0

I was able to finally cut an official Storyteller 3.0 release to Nuget at the end of last week with a few fixes. Storyteller is a tool for writing and running executable specifications (BDD, if you absolutely have to call it that), end to end automated testing, and a separate function for generating technical documentation websites that I’ve used for StructureMap, Marten, and Storyteller itself.

Some links with a whole lot more information:

I’ve done two conference talks about Storyteller in the last year, but neither of the videos was ever posted. I’ve had rotten luck with this lately :( I *might* be doing a talk about the building of the Storyteller client with React.js at CodeMash.

 

Onward to 4.0!

With Marten settling down after its 1.0 release, I’ve been able to devote much more time to Storyteller and the 4.0 release is well underway. While there are worlds of little GitHub issues to knock down, the big themes for 4.0 are to:

  1. Get Storyteller running cross platform using the CoreCLR. I blogged about my plans for pulling this off a couple weeks ago, and as of last week I’m able to run Storyteller locally on Mac OSX. I’ll be writing a follow up experience report post soon, but the summary is “so, that sucked.”
  2. Fully embrace the dotnet CLI and create a dotnet test adapter for Storyteller to make it far easier for developers to run and debug Storyteller specifications
  3. The “step through” model of executing specifications when troubleshooting failing specs. This is partially done now, but there’s plenty to do on the UI client side before it’s really usable.
  4. Extend the reach of Storyteller by allowing users to author the specification language on the fly in the specification editor without having to first write C# code. My hope is that this feature will make Storyteller much more effective as a specifications tool. This had been a goal of the 3.0 release, but I never got to do it.
  5. Enhance the performance testing and diagnostic support in Storyteller today. Maybe the most popular feature of Storyteller is how it captures, correlates, and displays information about the system performance during spec execution. I’d like to take this farther with possible support for gathering benchmark data and allowing teams to express non-functional requirements about performance that would cause specifications to fail if they’re too slow.
  6. Potentially, be able to use the existing Storyteller specification editor and runner against completely different testing engines written for other languages or platforms

More coming this fall…

Moving Storyteller to the CoreCLR and going Cross Platform

This is half me thinking out loud and half experience report with the new .Net world order. If you want to know more about what Storyteller is, there’s an online webinar here or my blog post about the Storyteller 3 reboot and vision.

Storyteller 3 is an OSS acceptance test automation tool that we use at work for executable specifications and end to end regression testing. Storyteller doesn’t have a huge number of users, but the early feedback from the community has been mostly positive, and it gets plenty of pull requests that have helped quite a bit with usability. Now that my Marten work is settling down, I’ve been able to start concentrating on Storyteller again.

My current focus for the moment is making Storyteller work on the CoreCLR as a precursor to being truly cross platform. Going a little farther than that, I’m proposing some changes to its architecture that I think will make it relatively painless to use the existing user interface and test runners with completely different, underlying test engines (I’m definitely thinking about a Node.js based runner and doing a port of the testing engine to Scala or maybe even Swift or Kotlin way down the road as a learning exercise).

My first step has been to chip away at Storyteller’s codebase by slowly replacing dependencies that aren’t supported on the CoreCLR (this work is in the project.json branch on Github):

Current State:

  • Targets .Net 4.6
  • Self-hosted w/ Nowin
  • FubuMVC for the web application
  • Fleck for web sockets support
  • Tests execute in a separate AppDomain with all communication done via sending Json messages through .Net Remoting
  • FubuCore for the command line parsing
  • Uses Fixie for unit testing
  • RhinoMocks for mocking
  • Csproj/MSBuild for compiling, Paket for Nuget management
  • A single Nuget for the testing engine library and the test running/documentation generation executable
  • No Visual Studio or VS Code integration

Proposed End State:

  • Targets .Net 4.6 and the CoreCLR
  • Self-hosted with Kestrel
  • Raw ASP.Net Core middleware for the web application
  • Kestrel/ASP.Net Core for Websockets
  • Tests will execute in a separate process, and the communication between processes will all be done with sockets
  • Using Oakton for the command line parsing
  • Uses xUnit for unit testing
  • NSubstitute for mocking
  • The dotnet CLI for CI builds and all Nuget management, project.json for all the projects
  • A Nuget for the .Net testing engine library, a second one for the command line tooling for specification running and editing, and a third for the documentation generation
  • A fourth Nuget for integrating Storyteller with dotnet test
  • A VS Code plugin?

Some thoughts on the work so far:

  • Kestrel and the bits of ASP.Net Core I’m using have been pretty easy to get up and going. The Websockets support doesn’t feel terribly discoverable, but it was easy to find good enough examples and get it going. I was a little irritated with the ASP.Net team for effectively ditching the community-driven OWIN specification (yes, I know that ASP.Net Core supports OWIN, but it’s an emulation) for their own middleware signature. However, I think that what they did do is probably going to be much more discoverable and usable for the average user. I will miss referring to OWIN as the “mystery meat” API.
  • I actually like the new dotnet CLI and I’m looking forward to it stabilizing a bit. I think that it does a lot to improve the .Net development experience. It’s an upside down world when an alt.net founder like me is defending a new Microsoft tool that isn’t universally popular with more mainstream .Net folks.
  • I still like Fixie and I hope that project continues to move forward, but xUnit is the only game in town for the dotnet CLI and CoreCLR.
  • Converting the projects to the new project.json format was relatively harmless compared to the nightmare I had doing the same with StructureMap, but I’m not yet targeting the CoreCLR.
  • I’ve always been gun shy about attempting any kind of Visual Studio.Net integration, but from a cursory look around xUnit’s dotnet runner code, I’m thinking that a “dotnet test” adapter for Storyteller is very feasible.
  • The new Storyteller specification editing user interface is a React.js-based SPA. I’m thinking that that architecture should make it fairly simple to build a VS Code extension for Storyteller.

 

The Killer Problem: AppDomains are Gone (for now)

Storyteller, like most tools for automating tests against .Net, relies on AppDomains to isolate the application under test from the test harness, so that you can happily rebuild your application and rerun tests without having to completely drop and restart the testing tool. Other than .Net Remoting not being the most developer-friendly thing in the world, that’s worked out fairly well in Storyteller 3 (it had been a mess in Storyteller 1 & 2).

There’s just one little problem: AppDomains and Remoting are gone from the CoreCLR until at least next year (and I’m not wanting to count on them coming back). It would be perfect if you were able to unload the new AssemblyLoadContext, but as far as I know that’s not happening any time soon.

At this point, I’m thinking that Storyteller will work by running tests in a completely separate process to be launched and shut down by the Storyteller test running executable. To make that work, users will have to make their Storyteller specification project be an executable that bootstraps their system under test and then pass an ISystem object and the raw command line parameters into some kind of Storyteller runner.
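In rough terms, I’m picturing a user-facing entry point along these lines. ISystem is Storyteller’s real extension point, but the single member I’ve given it here, the MyApplicationSystem class, and the StorytellerRunner class are all made-up stand-ins for whatever the real runner API ends up being:

using System;

// Hypothetical shape of the extension point for the system under test
public interface ISystem : IDisposable
{
    void Start();
}

public class MyApplicationSystem : ISystem
{
    public void Start() { /* bootstrap the application under test */ }
    public void Dispose() { /* tear it back down */ }
}

// Stand-in for "some kind of Storyteller runner"
public static class StorytellerRunner
{
    public static int Run(ISystem system, string[] args)
    {
        // would start the system and listen for the runner's json messages
        system.Start();
        return 0;
    }
}

public static class Program
{
    public static int Main(string[] args)
    {
        // The specification project is now an executable that hands its
        // ISystem and the raw command line over to Storyteller
        return StorytellerRunner.Run(new MyApplicationSystem(), args);
    }
}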

I’ve been experimenting with using raw sockets for the cross process communication and so far, so good. I’m just shooting JSON strings back and forth. I thought about using HTTP between the processes, but that felt too heavy. I also considered using our LightningQueues project in its “ZeroMQ” mode, but again I opted for the lighter weight option. The other advantage for me is that the “dotnet test” adapter communication is done with JSON over sockets as well.

I think this strategy of running separate processes would make Storyteller a little more complicated to set up compared to the existing “just make a class library and Storyteller will find your custom ISystem if one exists” strategy. My big hope is that the combination of depending on a separate command line launched process and shooting JSON across sockets will make it much easier to bring on alternative test running engines that would be usable with the existing Storyteller user interface tooling.


Talking React & Redux at NDC Oslo Next Week

I’ll be at NDC Oslo next week to give a talk entitled An Experience Report from Building a Large React.js/Redux Application. While I’ll definitely pull some examples from some of our ongoing React.js projects at work, most of the talk will be about the development of the new React.js-based web application within the open source Storyteller 3 application.

Honestly, just to help myself start getting more serious about preparation, I’m thinking that I’ll try to cover:

  1. How the uni-directional flow idea works in a React.js application
  2. The newer things in ES2015 that I think make React components easier to write and read. That might be old news to many people, but it’s still worth talking about
  3. Using React.js for dynamically determined layouts and programmatic form generation
  4. The transition from an ad hoc Flux-like architecture built around Postal.js and hand-rolled data stores to a more systematic approach with Redux.
  5. Utilizing the combination of pure function components in React.js with the react-redux bridge package and what that means for testability.
  6. The very positive impact of Redux in your ability to automate behavioral testing of your screens, especially when the application has to constantly respond to frequent messages from the server
  7. Integrating websocket communication to the Redux store, including a discussion of batching, debouncing, and occasional snapshots to keep the server and client state synchronized without crushing the browser from too many screen updates.
  8. Designing the shape of your Redux state to avoid unnecessary screen updates.
  9. Composing the reducer functions within your Redux store — as in, how will Jeremy get out of the humongous switch statement of doom?
  10. Using Immutable.js within your Redux store and why you would or wouldn’t want to do that

By no means is this an exhaustive look at the React/Redux ecosystem (I’m not talking about Redux middleware, the Redux specific development tools, or GraphQL among other things), but it’s the places where I think I have something useful to say based on my experiences with Storyteller in the past 18 months.

The last time I was at NDC (2009) I gave a total of 8 talks in 4 days. While I still hope to get to do a Marten talk while I’m there, I’m looking forward to not being a burnt out zombie this time around with the much lighter speaking load;)

 

Reliable and “Debuggable” Automated Testing of Message Based Systems in a Crazy Async World

In my last post on Automated Testing of Message Based Systems, I talked about how we are able to collapse all the necessary pieces of distributed messaging systems down to a single process and why I think that makes for a much better test automation experience. Once you can reliably bootstrap, tear down, and generally control your distributed application from a test harness, you move on to a couple more problems: when is a test actually done and what the heck is going on inside my system during the test when messages are zipping back and forth?

 

Knowing when to Assert in the Asynchronous World

Since everything is asynchronous, how does the test harness know when it’s safe to start checking outcomes? Any automated test is more or less going to follow the basic “arrange, act, assert” structure. If the “act” part of the test involves asynchronous operations, your test harness has to know how to wait for the “act” to finish before doing the “assert” part of a test in order for the tests to be reliable enough to be useful inside of continuous integration builds.

For a very real scenario, consider a test that involves:

  1. A website application that receives an HTTP POST, then sends a message through the service bus for asynchronous processing
  2. A headless service that actually processes the message from step 1, and quite possibly sends related messages to the service bus that also need to finish processing before the test can make its assertions against the expected change of state.
  3. The headless service may be sending messages back to the website application

To solve this problem in our test automation, we came up with the “MessageHistory” concept in FubuMVC’s service bus support to help order test assertions after all message processing is complete. When activated, MessageHistory allows our test harnesses to know when all message processing has been completed in a test, even when multiple service bus applications are taking part in the messaging work.

When a FubuMVC application has the service bus feature enabled, you can activate the MessageHistory feature by first bootstrapping in the “Testing” mode like so:

public class ChainInvokerTransportRegistry 
    : FubuTransportRegistry<ChainInvokerSettings>
{
    public ChainInvokerTransportRegistry()
    {
        // This property opts the application into
        // the "testing" mode
        Mode = "testing";

        // Other configuration declaring how the application
        // is composed. 
    }
}

In a blog post last week, I talked about How we do Semantic Logging, specifically how we’re able to programmatically add listeners for strongly typed audit or debug messages. By setting the “Testing” mode, FubuMVC adds a new listener called MessageRecordListener to the logging that listens for messages related to the service bus message handling. The method below from MessageRecordListener opts the listener into any logging message that inherits from the MessageLogRecord class we use to mark messages related to service bus processing:

        public bool ListensFor(Type type)
        {
            return type.CanBeCastTo<MessageLogRecord>();
        }

For the purposes of the MessageHistory, we listen for:

  1. EnvelopeSent — history of a message that was sent via the service bus
  2. MessageSuccessful or MessageFailed — history of a message being completely handled
  3. ChainExecutionStarted — a message is starting to be executed internally
  4. ChainExecutionFinished — a message has been completely executed internally

All of these logging messages have the message correlation id, and by tracking the outstanding “activity started” messages against the “activity finished” messages, MessageHistory can “know” when the message processing has completed and it’s safe to start processing test assertions. Even if an automated test involves multiple applications, we can still get predictable results as long as every application is logging its information to the static MessageHistory class (I’m not showing it here, but we do have a mechanism to connect message activity back to MessageHistory when we use separate AppDomain’s in tests).
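If it helps to see the idea in code, the bookkeeping behind that tracking amounts to something like the sketch below. This is not the actual MessageHistory implementation, just the counting scheme it implies, with made-up member names:

using System.Collections.Concurrent;
using System.Linq;
using System.Threading;

public static class MessageTracker
{
    // Outstanding activity counts keyed by message correlation id
    private static readonly ConcurrentDictionary<string, int> _outstanding
        = new ConcurrentDictionary<string, int>();

    private static int _anySeen;

    public static void Clear()
    {
        _outstanding.Clear();
        Interlocked.Exchange(ref _anySeen, 0);
    }

    // Called for EnvelopeSent / ChainExecutionStarted style records
    public static void MarkStarted(string correlationId)
    {
        Interlocked.Exchange(ref _anySeen, 1);
        _outstanding.AddOrUpdate(correlationId, 1, (id, count) => count + 1);
    }

    // Called for MessageSuccessful / MessageFailed / ChainExecutionFinished
    public static void MarkFinished(string correlationId)
    {
        _outstanding.AddOrUpdate(correlationId, 0, (id, count) => count - 1);
    }

    // "Done" means at least one message was seen and nothing is still in flight
    public static bool IsComplete()
    {
        return _anySeen == 1 && _outstanding.Values.All(count => count <= 0);
    }
}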

Just to help connect the dots, the MessageRecordListener relays information about work that’s started or finished to MessageHistory with this method:

        public void DebugMessage(object message)
        {
            var log = message.As<MessageLogRecord>();
            var record = log.ToRecord();

            if (record.IsPollingJobRelated()) return;

            // Tells MessageHistory about the recorded
            // activity
            MessageHistory.Record(log);

            _session.Record(record);
        }

 

Inside of test harness code, the MessageHistory usage is like this:

MessageHistory.WaitForWorkToFinish(() => {
    // Do the "act" part of your test against a running
    // FubuMVC service bus application or applications
}).ShouldBeTrue();

This method does a couple things:

  1. Clears out any existing message history inside of MessageHistory so you’re starting from a blank slate
  2. Executes the .Net Action “continuation” you passed into the method as the first argument
  3. Polls until there has been at least one recorded “sent” message and all outstanding “sent” messages have been logged as completely handled, or until the configured timeout period has expired.
  4. Returns a boolean indicating whether MessageHistory finished successfully (true) or timed out (false).

For the pedants and the truly interested among us, the WaitForWorkToFinish() method is an example of using Continuation Passing Style (CPS) to correctly order the execution steps. I would argue that CPS is very useful in these kinds of scenarios where you have a set order of execution but some step in the middle or end can vary.
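If the CPS terminology is unfamiliar, here’s a toy version of that structure (not the real WaitForWorkToFinish() internals): the fixed setup and polling steps belong to the method, while the variable “act” step comes in as the continuation:

using System;
using System.Diagnostics;
using System.Threading;

public static class ContinuationExample
{
    public static bool ExecuteAndWait(Action continuation, Func<bool> isDone, TimeSpan timeout)
    {
        // fixed "before" step: clear any previous state here

        // the caller-supplied middle step (the "act" part of the test)
        continuation();

        // fixed "after" step: poll until done or timed out
        var stopwatch = Stopwatch.StartNew();
        while (stopwatch.Elapsed < timeout)
        {
            if (isDone()) return true;
            Thread.Sleep(100);
        }

        return false;
    }
}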

 

Visualizing What the Heck Just Happened

The next big challenge in testing message-based, service bus applications is being able to understand what is really happening inside the system when one of these big end to end tests fails. There’s asynchronous behavior and loosely coupled publish/subscribe mechanics. It’s clearly not the easiest problem domain to troubleshoot when things don’t work the way you expect.

We have partially solved this problem by tying the semantic log messages produced by FubuMVC’s service bus system into the results report of our automated tests. Specifically, we use the Storyteller 3 project (one of my other OSS projects, currently being criminally neglected because Marten keeps me so busy) as our end to end test harness. One of the powerful features in Storyteller 3 is the ability to publish and embed custom diagnostics into the HTML results report that Storyteller produces.

Building on the MessageRecordListener setup in the previous section, FubuMVC will log all of the service bus activity to an internal history. In our Storyteller test harness, we wipe out the existing state of the recorded logging messages before the test executes, then at the end of the specification run we gather all of the recorded logging messages for just that test run and inject some custom HTML into the test results.

We do two different visualizations. The first is a “threaded” message history arranged around the history of a single message: who published it, who handled it, and what became of it?

[Image: ThreadedMessageHistory]

The threaded history view helps to understand how a single message was processed from sender, to receiver, to execution. Any error steps will show up in the threaded history. So will retry attempts and any additional messages triggered by the topmost message.

We also present pretty much the same information in a tabular form that exposes the metadata for the message envelope wrapper at every point where some activity was recorded:

[Image: MessageActionHistory]

 

I’m using images for the blog post, but these reports are written into the Storyteller HTML results. These diagnostics have been invaluable to us in understanding how our message based systems actually behave. Having these diagnostics as part of the test results on the CI server has been very helpful in diagnosing failures in the CI builds that can be notoriously hard to debug.

Next time…

At some point I’ll blog about how we integrate FubuMVC’s HTTP diagnostics into the Storyteller results and maybe a different post about the performance tracking data that Storyteller exposes as part of the testing results. But don’t look for any of that too soon;)

Batch Queries with Marten

Marten v0.7 was published just under two weeks ago, and one of the shiny new features was the batched query model, with what I’ll call a trial balloon syntax that was shot down pretty fast in the Marten Gitter room (I wasn’t happy with it either). To remedy that, we pushed a new Nuget this morning (v0.7.1) that has a new, streamlined syntax for batched queries, and updated the batched query docs to match.

So here’s the problem it tries to solve: say you have an HTTP endpoint that needs to aggregate several different sources of document data into a single JSON message sent back to your web client (a common scenario in a large application at my work that will be converted to Marten shortly). To speed up that JSON endpoint, you’d like to be able to batch up those queries into a single call to the underlying Postgresql database, but still have an easy way to get at the results of each query later. This is where Marten’s batch query functionality comes in, as demonstrated below:

// Start a new IBatchQuery from an active session
var batch = theSession.CreateBatchQuery();

// Fetch a single document by its Id
var user1 = batch.Load<User>("username");

// Fetch multiple documents by their id's
var admins = batch.LoadMany<User>().ById("user2", "user3");

// User-supplied sql
var toms = batch.Query<User>("where first_name = ?", "Tom");

// Query with Linq
var jills = batch.Query<User>().Where(x => x.FirstName == "Jill").ToList();

// Any() queries
var anyBills = batch.Query<User>().Any(x => x.FirstName == "Bill");

// Count() queries
var countJims = batch.Query<User>().Count(x => x.FirstName == "Jim");

// The Batch querying supports First/FirstOrDefault/Single/SingleOrDefault() selectors:
var firstInternal = batch.Query<User>().OrderBy(x => x.LastName).First(x => x.Internal);

// Kick off the batch query
await batch.Execute();

// All of the query mechanisms of the BatchQuery return
// Tasks that are completed by the Execute() method above
var internalUser = await firstInternal;
Debug.WriteLine($"The first internal user is {internalUser.FirstName} {internalUser.LastName}");

Using the batch query is a four-step process:

  1. Start a new batch query by calling IDocumentSession.CreateBatchQuery()
  2. Define the queries you want to execute by calling the Query() methods on the batch query object. Each query operator returns a Task<T> object that you’ll use later to access the results after the query has completed (under the covers it’s just a TaskCompletionSource; see the sketch after this list).
  3. Execute the entire batch of queries and await the results
  4. Access the results of each query in the batch, either by using the await keyword or Task.Result.
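As a footnote to step 2, the TaskCompletionSource trick is simple enough to sketch in a few lines. This is not Marten’s actual internals, just the pattern: each query operator hands back a Task that Execute() completes after the single database round trip:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public class BatchQuerySketch
{
    private readonly List<Action> _completions = new List<Action>();

    // Each "query" registers how to read its result and gets a pending Task
    public Task<T> AddQuery<T>(Func<T> readResult)
    {
        var source = new TaskCompletionSource<T>();
        _completions.Add(() => source.SetResult(readResult()));
        return source.Task;
    }

    public Task Execute()
    {
        // In Marten, the one batched command would run against Postgresql
        // here; afterwards every pending Task is completed with its result
        foreach (var complete in _completions) complete();
        return Task.CompletedTask;
    }
}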

 

A Note on our Syntax vis a vis RavenDb

You might note that the Marten syntax is quite a bit different, both syntactically and conceptually, from RavenDb’s Lazy Query feature. While we originally started Marten with the idea that we’d stay very close to RavenDb’s API to make the migration effort less difficult, we’re starting to deviate as we see fit. In this particular case, I wanted the API to be more explicit about the contents and lifecycle of the batched query. In other cases, like the forthcoming “Include Query” feature, we will probably stay very close to RavenDb’s syntax unless we have better ideas or a strong reason to deviate from the existing art.

 

A Note on “Living” Documentation

I’ve received a lot of criticism over the years for having inadequate, missing, or misleading documentation for the OSS projects I’ve run. Starting with Storyteller 3.0 and StructureMap 4.0 last year, and now Marten this year, I’ve been having some success using Storyteller’s static website generation to author technical documentation in a way that makes it easy to keep code samples and content up to date with changes to the underlying tool. In the case of the batched query syntax from Marten above, the code samples are pulled directly from the acceptance tests for the feature. As soon as I made the changes to the code, I was able to update the documentation online to reflect the new syntax by running a quick script and pushing to the gh-pages branch of the Marten repository. All told, it took me under a minute to refresh the content online.