Spitballing the Future of Storyteller

I just made a little tactical release of Storyteller v5.3.5 (it took a couple “fix” releases to really knock things out) that at least addresses the critical NuGet dependency issues users were experiencing against the latest versions. While I was in the code, it was quickly obvious that the Storyteller codebase is in serious need of refurbishing and modernization. Hence this post, laying out some thinking for the next wave of Storyteller development.

Storyteller is an OSS behavior driven development / executable specifications / integration testing tool I’ve worked on and led in some form or another for years. The obvious comparison is to Cucumber-based tools (like SpecFlow in the .Net world), but Storyteller was much more influenced by FitNesse and Ward Cunningham’s original FIT library from way back. While Cucumber has sucked up almost all the oxygen in the room for tools like this, I think Storyteller’s model has some significant advantages over Cucumber tools for more data-intensive scenarios and does a lot more to help you tackle more complicated integration testing scenarios.

It’s never been super successful in terms of adoption or mindshare, but I and others do use it successfully. I personally enjoy working on Storyteller because it’s the only significant UI/UX work I get to do (my various day jobs have all wisely kept me out of any kind of UI design for the past decade).

Sometime later this year I’d like to turn my attention back to Storyteller to:

  1. Overhaul and modernize the user interface. There’s definitely some room for ease of use improvements. There are also some missing features that would help with the next bullet. Mostly though, it’s a way for me personally to level up on client side development, as it’s been a couple years now since I’ve done any substantial JavaScript work
  2. Improve the usability of Storyteller for non-developers and make Storyteller work better throughout the entire software development lifecycle. Storyteller today is very oriented toward technical developers and automated testing more than true Behavior Driven Development as a requirements communication and capture mechanism
  3. Find ways to parallelize specification execution. Storyteller specifications today have to run in single file (one at a time), and bigger suites of heavy integration tests have proven to be very slow to run in real usage (it’s not Storyteller itself so much as things like database access and any usage of Selenium). I’d like to explore some possible ways to parallelize execution in CI pipelines. More on this later in the post.

I’d love to attract some other contributors or folks that would be willing to play with early versions and give feedback. I’d especially love help and feedback on any new user interface work.

Storyteller’s Current Architecture

If you use Storyteller’s interactive user interface today in its latest incarnation (v5), there are three processes running including the browser. Here’s a quickie visual representation of the current architecture:

(Diagram: Storyteller’s current three-process architecture)

The Specification Project

The actual Storyteller specifications project is analogous to an xUnit.Net/NUnit/MSTest project in that it holds all the test fixture code and specifications (tests). In Storyteller v5, this project is a console application that delegates to Storyteller in its Program.Main() entry point method to perform any kind of application set up, tear down, and Storyteller configuration you may need.
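
A minimal sketch of that entry point is below. I’m going from memory on the exact Storyteller helper names, so treat StorytellerAgent here as approximate rather than gospel:

using StoryTeller;

internal class Program
{
    // The console app does nothing but hand control over to Storyteller,
    // which parses the command line and hosts the system under test.
    // The return value becomes the process exit code for CI builds.
    public static int Main(string[] args)
    {
        // This is also the place where any application set up, tear down,
        // and Storyteller configuration gets wired in
        return StorytellerAgent.Run(args);
    }
}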

This project probably has references to your actual application assemblies. In the case of an ASP.Net Core system, this might completely bootstrap and run your entire application. It also includes your Storyteller Fixture classes that sit in between the specifications and your application code.
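
To make that boundary concrete, here’s a rough sketch of the kind of Fixture code that sits in that gap. The Fixture base class and the [FormatAs] attribute are Storyteller constructs as I remember them; the pricing logic is obviously a made-up stand-in for real application code:

using StoryTeller;

public class OrderingFixture : Fixture
{
    // Stand-in for calling into your real application code; in an actual
    // project this might use services from the bootstrapped application
    private decimal PriceFor(string sku) => sku == "widget" ? 9.99m : 4.99m;

    // Storyteller turns public methods like this into grammars that the
    // specification editor can call, including with tabular data
    [FormatAs("The total for {quantity} of {sku} should be {returnValue}")]
    public decimal TotalFor(string sku, int quantity)
    {
        return PriceFor(sku) * quantity;
    }
}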

This executable is all you need to run Storyteller specifications from the command line as you would in continuous integration builds. This same application runs as a test execution “agent” in a buddy process to the dotnet storyteller process when you’re executing specifications from the interactive specification runner. The dotnet storyteller process starts and stops the Specification project process so you can rebuild the system under test without having to stop and restart the Storyteller UI.

Offhand, I don’t think that the Fixture model will change too much in a new version, and I think I’d like it to be a goal that your existing Fixture class code would compile immediately after an upgrade. I am very interested in doing a significant overhaul to how Storyteller bootstraps or shuts down the system under test and handles other environmental set up actions. So far I’ve identified a few opportunities for improvement, described in the sections below.

dotnet storyteller

dotnet storyteller is a separate executable that is added to your project today through the dotnet CLI tool extensibility. This tool runs a small ASP.Net Core application hosted locally and served via Kestrel (a “backend for frontend”). All of the client side assets (minus some on CDNs) are embedded resources within this executable. This process communicates with the browser front end through JSON messages sent via web sockets. The communication with the running specification “buddy” process is done through JSON messages sent over raw sockets.

Honestly, most of the evident improvements will be in the actual user interface, but there are a few things I’d like to do in the “Backend for Frontend” ASP.Net Core application:

  • Incorporate Jasper for the HTTP handling, the in-memory command processing, and the socket communication with the Specification process. Mostly this is a way to prove out Jasper, but it would also remove quite a bit of one-off code in Storyteller today.
  • Possibly convert this to being a global tool rather than using it as a local dotnet tool.
  • Maybe consider making this an Electron application? I’ve done enough research to know that’s feasible.

Storyteller Editor Browser Application

The actual Storyteller client running in the browser is a fairly complicated React/Redux application. After the initial requests for the HTML and client assets, all other communication between the browser application and the ASP.Net Core backend is through JSON messages sent via web sockets.

The current stack is:

  • React.js v14 — the UI was started in v11, and you can unfortunately see a lot of different styles of building React.js components as React itself was churning so hard
  • Redux + react-redux — I like this combination, especially on the receiving end of web sockets. This was all retrofitted in after the application work had started and I had some growing pains with this approach. I think the Redux work could be much simpler with a redo
  • Webpack — it works great when it’s all set up, but it’s a brutal tool to use for someone like me who only moonlights in the JS world once in a while
  • Immutable.js — It did what it was supposed to do, and I think it works pretty well in combination with Redux
  • Postal.js — I think it was a great event aggregator in its day, but most of what it originally did in a homegrown Flux implementation was later subsumed by Redux
  • react-bootstrap — Former colleagues of mine were active in this project at the time of Storyteller v3 development when the UI was rebuilt, and it was a natural fit
  • Mocha/Chai/Karma for testing

I very briefly looked at possibly incorporating Blazor as the user interface tooling, but its model just doesn’t appeal much to me and it won’t have a huge amount of mindshare in the near future. I guess that WPF would be more viable in the .Net Core 3.0 stack, but I’ve never cared for it. Right now I’m leaning very heavily toward continuing to develop the user interface as a browser based application primarily built around React + Redux.

My current thinking about a future web stack is:

  • React.js vLatest — I personally like React.js, and there’s plenty of usable code in the existing application, so I don’t feel it’s justified to switch to Vue.js or anything else at this point
  • Redux + react-redux — I looked at MobX and it’s interesting, but I actually like Redux and there’s existing code. I’m not buying into React’s Context API for everything
  • Parcel.js — I don’t think Storyteller needs all the functionality of Webpack, and I’ve been very impressed with what I’ve seen with Parcel.js so far in admittedly limited prototyping
  • Reactstrap or Material UI — I think that it would be easier to move to Reactstrap because the existing UI is built around Bootstrap and there are so many existing Bootstrap themes, but I like the editor support better in Material UI at a glance. I think I’m ever so slightly leaning toward Material UI and I’d love to hear from other folks.
  • Jest + Enzyme for testing. Time to modernize, and I’d really like to leave Karma behind. I think I’d happily continue to use Chai though. I’d really like to be more familiar with the most recent approaches to testing React components before I need to oversee real client side development at work
  • Typescript? I don’t think it’s absolutely necessary, but it would have helped the first time around in the more complicated data structures and the Redux reducers. Mostly this would be a chance for me to get into TypeScript I think.

Headless or Review Mode

Today you pretty well have to run your specification project to be working with Storyteller at all. That’s fine for developers who would naturally have all the prerequisites for the system under test on their box and be comfortable working with the code. It isn’t a great situation for the vast majority of testers, though, and it’s not at all workable for non-technical business experts.

For Storyteller v4, a couple former colleagues and I built some functionality where you could define all new grammar/fixture language for your project in a specialized markdown format, then be able to write specifications using the interactive editor using those stubbed grammars.

As an alternative to running fully connected against the “system under test”, I’d like Storyteller to also have a “headless mode” that would allow you to review and edit specifications, but not have to actually run the application. That architecture would look like this:

(Diagram: the headless Storyteller architecture)

In this mode, Storyteller could run without the actual specification project process, so you’d need absolutely nothing but a web browser and the dotnet storyteller application running. You’d have to previously export the definition of the Fixture/Grammar language for your system under test to Storyteller-flavored Fixture markdown files.

  • As an important follow up to that headless mode, I really want to embed “language designers” directly into the Storyteller UI that would allow users to make up fixture/grammar language usage on the fly.
  • Storyteller already has some exposed functionality to generate the C# code for missing grammars and fixtures. My hope is that the combination of the headless mode, being able to define the specification language any time within the editor, and finally being able to generate skeleton code to match that language will hugely improve a user’s ability to use Storyteller as a requirements elicitation tool in addition to being an integration testing tool.

 

Parallelized Specification Execution

Folks have wanted some ability to parallelize Storyteller specification execution seemingly forever. I’ve come up with a couple ideas so far:

  1. Use a model very similar to xUnit.Net where you can mark Fixture classes as completely stateless, or where the engine somehow “knows” which fixtures can run simultaneously. The engine would then know how to schedule and parallelize executions (see the hypothetical sketch after this list)
  2. In the Storyteller bootstrapping, have some way to effectively set up multiple specification runners in the specification project. You might have to do some extra work in your setup to opt into this model. Things like using different database schemas per runner or other stateful resources
  3. Docker all the things! For really big and slow specification suites, what if you could use something like an Azure Elastic Job to temporarily spin up a whole bunch of parallel Storyteller runners, farm work out to all of them, and combine the results in one place for an effective CI run? I’m assuming Docker would come into play as a way to spin up the Storyteller agents.
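
To make the first option a little more concrete, here is a purely hypothetical sketch of the kind of metadata I have in mind. None of these attribute or enum names exist in Storyteller today; this is just speculation about how Fixture authors might declare what is safe to parallelize:

using System;

// Hypothetical: how safe a Fixture would be to run in parallel
public enum ParallelismLevel
{
    // No shared state at all; specifications using only these
    // fixtures could be farmed out to any number of runners
    Stateless,

    // Shares state with other instances of the same Fixture,
    // but nothing else
    ExclusivePerFixture,

    // Must run by itself, which is effectively today's behavior
    Serial
}

// Hypothetical marker attribute the engine could read when scheduling
[AttributeUsage(AttributeTargets.Class)]
public class ParallelismAttribute : Attribute
{
    public ParallelismAttribute(ParallelismLevel level)
    {
        Level = level;
    }

    public ParallelismLevel Level { get; }
}

// Usage on a Fixture class might look like:
// [Parallelism(ParallelismLevel.Stateless)]
// public class CalculatorFixture : Fixture { ... }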

 

What next?

Honestly, I need to get busy with Marten’s backlog first, but I really wanted to get some of these ideas out and onto paper, so to speak. I’d really like to gather as much feedback as possible from Storyteller’s current userbase and maybe try to attract some new folks.

 

 


Lamar v3 is Released: Faster, smaller, quicker cold starts, internal type friendly

Lamar has had some churn lately, with me almost giving up on it, changing my mind, and, look at that, publishing a very significantly improved version to NuGet just now.

I just published Lamar v3.0 with some bug fixes and big internal improvements. This release is mostly significant because it eliminates Lamar’s previous reliance on Roslyn for runtime code generation in favor of a more common model that builds Expression trees and compiles dynamic lambdas — inevitably with the excellent FastExpressionCompiler for maximum performance and faster cold starts.
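
To show the basic mechanics of that model, here is a minimal, self-contained sketch of building an Expression tree for an object graph and compiling it to a cached delegate. The types are made up for illustration, Lamar’s real build plans are far richer, and I’m assuming FastExpressionCompiler would stand in for the stock Compile() call shown below:

using System;
using System.Linq.Expressions;

public interface IClock { }
public class SystemClock : IClock { }

public class Widget
{
    public Widget(IClock clock) { Clock = clock; }
    public IClock Clock { get; }
}

public static class BuildPlanSketch
{
    // Builds the equivalent of "() => new Widget(new SystemClock())" as an
    // Expression tree, then compiles it to a Func<object> that a container
    // could cache and invoke on every resolution of Widget
    public static Func<object> CompileWidgetBuilder()
    {
        var clock = Expression.New(typeof(SystemClock));
        var widget = Expression.New(
            typeof(Widget).GetConstructor(new[] { typeof(IClock) }),
            clock);

        var lambda = Expression.Lambda<Func<object>>(widget);

        // Lamar would presumably hand this lambda to FastExpressionCompiler;
        // the stock Compile() keeps this sketch dependency free
        return lambda.Compile();
    }
}

The payoff over the old Roslyn model is that compiling a little lambda like this doesn’t need the whole C# compiler sitting in memory, and, as the list below mentions, the compiled lambdas don’t care about type visibility the way generated-and-compiled source code does.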

All told, Lamar v3.0:

  • Has much faster “cold start” times than previous versions
  • Is faster across the board, but especially when internal types are involved. The very early user feedback is “holy moses that’s much faster”
  • I think that the compatibility problems with internal types being registered into the container are behind us, because the dynamic lambdas don’t have to care about type visibility
  • Will use way less memory without the Roslyn dependency tree in memory
  • Has far fewer dependencies, which some users had complained about

There are no API changes to Lamar itself, but there’s now a different set of Nuget libraries:

  1. LamarCompiler — only has Lamar’s old helper for executing the Roslyn runtime compilation, but this is no longer referenced by Lamar itself
  2. LamarCodeGeneration — the Frame / Variable model that Lamar uses to generate code and now Expression trees in memory. This is used by Lamar itself, could be used as a stand alone library, and has no direct coupling to LamarCompiler
  3. Lamar itself

A touch of history…

I’ve been absolutely exhausted for years with maintaining StructureMap and dealing with user questions and oddball problems. I also knew it had some pretty ugly performance issues and it had been a massive pain in the ass force fitting StructureMap’s long standing functionality into ASP.Net Core’s new compliance requirements (that appear to have been made up by the ASP.Net team out of whole cloth with ZERO input from the major OSS IoC tool authors). I also knew from some experimentation that I wasn’t going to be able to fix StructureMap very easily, so…

Lamar was an attempt by me to throw together a new IoC tool that would mostly mimic StructureMap‘s public API (everybody has opinions, but there are actually quite a few people who like StructureMap’s usability and it’s had ~6 million downloads) and a large subset of its functionality. At the same time, I did change its lifecycle semantics and behavior to be ASP.Net Core-compliant from the very get go to head off more problems with that.

Lamar’s original model was based around using dynamic code generation and compilation with Roslyn. Woohoo, it was easy to use and I was able to get a large subset of StructureMap’s old functionality working over Christmas break a couple years ago. Unfortunately, that model doesn’t play well at all with internal types and my limited workarounds were pretty clumsy. It turns out that lots and lots of people and common frameworks like to put internal types into IoC containers (looking at you ASP.Net Core). So here we are. Lamar switched over to using Expressions compiled into Funcs like basically every other IoC container I’m aware of, but hopefully it starts fulfilling my real goal for Lamar:

Get folks complaining about StructureMap off of that and make my life easier overall by reducing user problems.

Jasper v0.9.9 is Released!

Jasper is an open source project I’ve been furiously working on for the past couple years as what I have to admit is mostly a chance to resurrect the best parts of the earlier FubuMVC framework while solving its technical and usability shortcomings. Along the way, Jasper has evolved a bit away from its FubuMVC roots to be more consistent and compatible with ASP.Net Core. At this point, Jasper is a fancy command execution pipeline that lives inside of the ASP.Net Core ecosystem. Jasper can be used as any mix of a lightweight messaging framework, an in-memory service bus, or handling HTTP requests as an alternative ASP.Net Core framework.
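
To give a flavor of what that looks like in use, here’s a minimal sketch of the sort of handler Jasper discovers by naming convention. The message and handler are entirely made up and I’m going from memory on the exact conventions, but the important part is that there are no framework base classes or adapter interfaces in sight:

using System;

// A made-up command/message type
public class CreateInvoice
{
    public string CustomerId { get; set; }
    public decimal Amount { get; set; }
}

// Jasper finds classes with a "Handler" suffix and public Handle()
// methods by convention, so the application code stays plain old C#
public class CreateInvoiceHandler
{
    public void Handle(CreateInvoice command)
    {
        // Real work would go here: persist the invoice, publish events, etc.
        Console.WriteLine($"Creating invoice for {command.CustomerId}: {command.Amount}");
    }
}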

I was just able to push a new v0.9.9 release of Jasper that I want to effectively be the last alpha release. At this point, I think the public API surface is pretty well set and only a handful of features are left before making the big ol’ 1.0 release. I think the most important thing for Jasper now is to try to get folks to take it for a spin, or at least glance through the tutorials, and generally to get some feedback and visibility for the project.

Here’s some links to get you started:

There’ll hopefully be plenty of blog posts on Jasper in the next couple weeks, starting with how Jasper’s usability contrasts with MVC Core in the HTTP space or NServiceBus/MediatR/etc. for messaging and command execution.

The Road to 1.0

I’ve got to put Jasper down for a while to focus on some Lamar work and help out a lot more with Marten for probably a couple months before I do much more on Jasper, but I’d still love to get 1.0 out by at least the end of the summer.

I’m thinking out loud in this section, so everything is subject to change. Before I flip the switch to the big, giant 1.0, I think these things might need to happen:

  • There are some optimizations I want to make to the database backed message persistence I couldn’t quite get to for v0.9.9.
  • I’m very tempted just to wait until the netcoreapp3.0 wave of updates comes out. It’s a near guarantee that ASP.Net Core v3.0 will break some of Jasper’s internals, and it’d be very helpful to slim Jasper’s package dependency tree down if Jasper could depend on the new generic host model instead of IWebHostBuilder when 3.0 unifies that model somewhat.
  • Jasper heavily depends on the Task Parallel Library from Microsoft, but that seems to be somewhat deprecated now. I might look to rewire Jasper’s internals to use the newer System.Threading.Channels instead. I haven’t done any research into this one yet.
  • If the HTTP support is going to live on, it’ll need some kind of Swagger/Swashbuckle integration for the Jasper HTTP API support
  • I’d like to spend some time on supporting the Techempower Benchmarks for Jasper and some performance optimization. My goal here is to make Jasper the fastest HTTP application framework for .Net Core and be just barely slower than the raw ASP.Net Core benchmarks (not sure how feasible that is, but let me dream on).
  • Jasper is going to have to adjust to whatever becomes of Lamar. I don’t think this is going to change Jasper at development time, but might introduce a new production build step to optimize Jasper application’s “cold start” times for better hosting in Docker kind of worlds.

Lamar stays and how that unfolds

I wrote a blog post at the end of last week about Lamar when I just happened to be feeling discouraged about a couple things (it happens sometimes, and I swear that having to support IoC tools exposes me to more difficult people than every other project I work on combined). I got some rest this weekend, a bit of positive reinforcement from other folks, and actually thought through how to fix the issues. Long story short, I’m not giving up on anything and here’s what I think the very doable game plan is for Lamar (and the closely related Jasper project):

Lamar

  • Short term: Get a small bug fix release out soon that has some options to “force” all the compilation upfront in one dynamic assembly. That’s gonna hurt the cold start time, but should help the memory usage. We’ll also look to see if there are any places where Lamar is holding on unnecessarily to the Roslyn compilation objects, to ensure that they can be garbage collected
  • Medium term: Introduce an alternative compilation model based on Expressions compiled to Lambdas with FastExpressionCompiler. This model will kick in automatically whenever there’s one or more internal types in the “build plan”, and could be opted into globally through a container level switch. I didn’t want to do this originally because the model just isn’t very fun to work with.  After thinking it through quite a bit over the weekend, I think it won’t be bad at all to retrofit this alternative to Lamar’s existing Frame and Variable model. This will knock out the performance issues with internal types and address all the memory issues.
  • Long term: Probably split up LamarCompiler a little bit to remove the actual code compilation to significantly slim down Lamar’s dependency tree and move Lamar to a purely Expression based model. Introducing the Expression model will inevitably make the exception stacktrace messages coming out of Lamar explode, so there might have to be an effort to rewrite them to make them more user friendly (I had to do this in StructureMap 3 several years ago).

 

Jasper

I’m very close to pulling the trigger on a Jasper v1.0, with the understanding that it’s inevitable that there will be a Jasper v2.0 later in the year to incorporate .Net Core 3.0. I don’t think that Jasper will have any issues with memory usage related to Lamar because it uses Lamar very differently than MVC in any flavor. The changes to Lamar will impact Jasper though, so:

Short term: Jasper v1.0 with Lamar as it is.

Medium term: Jasper gets a model where you can happily use the runtime codegen and compilation during development time while things are churning, but for production usage you have the ability to just drop the code that would be generated to disk, have that compiled into your system in the first place, and let Jasper use those types. Ultra-fast cold start times in production, and no worries at all about Roslyn doing bad things with memory. I’ve already done successful proof of concept development on this one.

Long term: profit.

As an aside, I got quizzed quite a bit about why Jasper has to be specific to Lamar as its IoC container and can’t just support whatever tool folks want to use. The reason is that Jasper uses Lamar’s very specific code generation in its pipeline to avoid using an IoC container at runtime whatsoever and also to avoid forcing users to have to conform to all kinds of Jasper specific adapter interfaces. I could maybe force Jasper to still pull this off with the built in DI container or another IoC container with Jasper-centric adapters to expose all of its metadata in a way such that the codegen understands it, but just ick.

If you took Lamar’s runtime codegen away, I think Jasper inevitably looks like a near clone of NServiceBus or Brighter both in its usability and runtime pipeline and why does the world need that?

 

 

Lamar asks: Do I stay or do I go?

I happened to look into the Lamar issue list on GitHub late this afternoon at the end of a very long work day. What I saw was nothing to feel good about. A huge memory usage problem (probably related to Roslyn but I don’t know that for sure), a user who’s been very difficult in the past telling me he switched to another tool because it was easier to use and faster, bugs that unfortunately point to a huge flaw in Lamar’s basic premise, and folks really wanting a feature I’ve put off because it’s going to be a lot of work.

Lamar started its life as a subsystem of Jasper, the project that’s been my main side-project for the past couple years and my biggest ambition for several years before that. Jasper’s special sauce is the way that it uses Lamar to generate and compile code at runtime to glue together the runtime pipeline in the most efficient way possible. At the same time, it was killing me to support StructureMap and I wanted out of it, but didn’t want to leave a huge mess of folks with an abandoned project, so I turned Lamar into an IoC tool that was purposely meant to be a smooth transition for StructureMap users to an improved and more efficient IoC tool.

Right now, it’s becoming clear that the basic model of Lamar to use the runtime code generation isn’t working out. First because of what looks like some severe memory issues related to the runtime compilation, and second because it turns out that it’s very, very common for folks to use internal .Net types in IoC containers which pretty well defeats the entire code generation model. Lamar has workarounds for this to fall back to dynamic expressions for internal types, but it’s not optimized for that and Lamar’s performance degrades badly when folks use internal types.

As I see it, my choices with Lamar are to:

  • Retool it to use dynamic Expression compilation all the way down in the IoC engine usage like basically every other IoC tool including the older StructureMap library. That takes care of the performance issues most likely and knocks out the bugs due to internal types, but just sounds miserable.
  • Deal with the memory problems by trying to engage with the actual Roslyn team to figure out what’s going on and work around it. Which doesn’t solve the internal type issue.
  • Use pre-generated code built by Lamar at testing time and baked into the real application for production deployments. There’s a bit more than a proof of concept already baked in, but this still doesn’t fix the internal type problem

On the Jasper side I could:

  • Try to make it work with the built in IoC container but still try to do its “no IoC at runtime because it’s all codegen’d” model with a cutdown version of Lamar

Or, just call the whole thing off because while I would argue that Jasper shows a lot of promise technically and I still believe in its potential, it has zero adoption and I most likely won’t get to use it myself in any projects in the next year. If I give up on Jasper, I’m likely to do the same to Lamar. In that case, I’ll write off Lamar as a failed project concept, apologize profusely to all the folks that started to use it to replace StructureMap, and walk away.

Nobody needs to feel sorry for me here, so no need to try to cheer me up about any of this, but, I wouldn’t mind some feedback on whether it’s worth keeping Lamar around and what you think about the proposed improvements.

My integration testing challenges this week

EDIT 3/12: All these tests are fixed, and it wasn’t as bad as I’d thought it was going to be, but the extra logging and faster change code/run test under debugger cycle definitely helped.

This was meant to be a short post, or at least an easy to write post on my part, but it spilled out from integration testing and into some Storyteller mechanics, semi-advanced xUnit.Net usage, ASP.Net Core logging integration, and even a tepid defense of compositional architectures wrapped around an IoC container.

I’ve been working feverishly the past couple of months to push Jasper to a 1.0 release. As (knock on wood) the last big epic, I’ve been working on a large overhaul and redesign of the message persistence behind Jasper’s durable messaging. The existing code was fairly well covered by integration tests, so I felt confident that I could make the large scale changes and use the big bang integration tests to ensure the intended functionality still worked as designed.

I assume that y’all can guess how this has turned out. After a week of code changes and fixing any and all unit test and intermediate integration test failures, I got to the point where I was ready to run the big bang integration tests and, get this, they didn’t pass on the first attempt! I know, shocking right? Moreover, the tests involve all kinds of background processing and even multiple logical applications (think ASP.Net Core IWebHost objects) being started up and shut down during the tests, so it wasn’t exactly easy to spot the source of my test failures.

I thought it’d be worth talking about how I first stopped and invested in improving my test harness, both to make it faster to get the tests running under the debugger and to capture a lot more information about what’s going on in all the multithreading/asynchronous madness. But first…

First, an aside on Debugger Hell

You probably want to have some kind of debugger in your IDE of choice. You also want to be reasonably good using that tool, know how to use its features, and conversant in its supported keyboard shortcuts. You also want to try really hard to avoid needing to use your debugger too often because that’s just not an efficient way to get things done. Many folks in the early days of Agile development, including me, described debugger usage as a code or testing “smell.” And while “smell” is mostly used today as a pejorative to put down something that you just don’t like, it was originally just meant as a warning you should pay more attention to in your code or approach.

In the case of debugger usage, it might be telling you that your testing approach needs to be more fine grained or that you are missing crucial test coverage at lower levels. In my case, I’ll be looking for places where I’m missing smaller tests on elements of the bigger system and fill in those gaps before getting too worked up trying to solve the big integration test failures.

Storyteller Test Harness

For these big integration tests, I’m using Storyteller as my testing tool (think Cucumber,  but much more optimized for integration testing as opposed to being cute). With Storyteller, I’ve created a specification language that lets me script out message failover scenarios like the one shown below (which is currently failing as I write this):

(Screenshot: the JasperFailoverSpec specification)

In the specification above, I’m starting and stopping Jasper applications to prove out Jasper’s ability to fail over and recover pending work from one running node to another using its Marten message persistence option. At runtime, Jasper has a background “durability agent” constantly running that polls the database to determine if there is any unclaimed work to do or if any known application nodes are down (it does this with advisory locks through Postgresql, if anybody would ever be interested in a blog post about just that). Hopefully that’s enough description to make it clear that this isn’t a particularly easy scenario to test or code.
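
As a quick illustration of the advisory lock part, Postgresql lets an application take its own named, session-scoped locks with a plain function call, which makes “does any node already own this work?” checks cheap. Here’s a stripped-down sketch using Npgsql; the lock id is made up, and Jasper’s real durability agent does considerably more than this:

using Npgsql;

public static class AdvisoryLockDemo
{
    // Attempts to claim an application-defined lock id. Returns true if this
    // node now owns the lock, false if another node already holds it. The
    // lock is tied to the connection's session, so the caller has to keep
    // the connection open for as long as it wants to own the lock.
    public static bool TryClaimNodeOwnership(NpgsqlConnection openConnection, long lockId)
    {
        using (var cmd = new NpgsqlCommand("SELECT pg_try_advisory_lock(@id);", openConnection))
        {
            cmd.Parameters.AddWithValue("id", lockId);
            return (bool)cmd.ExecuteScalar();
        }
    }
}

pg_try_advisory_lock() returns immediately with true or false instead of blocking, which is exactly what you want from a polling agent.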

In my initial attempts to diagnose the failing Storyteller tests I bumped into a couple problems and sources of friction:

  1. It was slow and awkward mechanically to get the tests running under the debugger (that’s been a long standing problem with Storyteller and I’m finally happy with the approach shown later in this post)
  2. I could tell quickly that exceptions were being thrown and logged in the background processing, but I wasn’t capturing that log output in any kind of usable way

Since I’m pretty sure that the tests weren’t going to get resolved quickly and that I’d probably want to write even more of these damn tests, I believed that I first needed to invest in better visibility into what was happening inside the code and a much quicker way to cycle into the debugger. To that end, I took a little detour and worked on some Storyteller improvements that I’ve been meaning to do for quite a while.

Incorporating Logging into Storyteller Results

Jasper uses the ASP.Net Core logging abstractions for its own internal logging, but I didn’t have anything configured except for Debug and Console tracing to capture the logs being generated at runtime. Even with the console output, what I really wanted was all the log information correlated with both the individual test execution and which receiver or sender application the logging was from.

Fortunately, Storyteller has an extensibility model to capture custom logging and instrumentation directly into its test results. It turned out to be very simple to whip together an adapter for ASP.Net Core logging that captured the logging information in a way that can be exposed by Storyteller.

You can see the results in the image below. The table below is just showing all the logging messages received by ILogger within the “Receiver1” application during one test execution. The yellow row is an exception that was logged during the execution that I might not have been able to sense otherwise.

(Screenshot: ASP.Net Core log output captured in Storyteller’s results)

For the implementation, ASP.Net Core exposes the ILoggerProvider service such that you can happily plug in as many logging strategies as you want to an application in a combinatorial way. On the Storyteller side of things, you have the Report interface that lets you plug in custom logging that can expose HTML output into Storyteller’s results.

Implementing that crudely came out as a single class that implements both adapter interfaces (here’s a gist of the whole thing):

public class StorytellerAspNetCoreLogger : Report, ILoggerProvider

The actual logging just tracks all the calls to ILogger.Log() as little model objects in memory:

public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
{
    var logRecord = new LogRecord
    {
        Category = _categoryName,
        Level = logLevel.ToString(),
        Message = formatter(state, exception),
        ExceptionText = exception?.ToString()
    };

    // Just keep all the log records in an in memory list
    _parent.Records.Add(logRecord);
}
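
For completeness, the other half of the ILoggerProvider contract is just handing out an ILogger per category that writes back into that shared list. The sketch below uses made-up class names (RecordingLoggerProvider and friends) rather than the real combined class from the gist, which also implements Storyteller’s Report interface to render the records as HTML:

using System;
using System.Collections.Generic;
using Microsoft.Extensions.Logging;

public class RecordingLoggerProvider : ILoggerProvider
{
    // All captured log records across every category
    public readonly List<LogRecord> Records = new List<LogRecord>();

    // One lightweight logger per category, all writing to the shared list
    public ILogger CreateLogger(string categoryName)
    {
        return new RecordingLogger(this, categoryName);
    }

    public void Dispose()
    {
        // Nothing to clean up; the records live for the duration of the spec
    }

    public class RecordingLogger : ILogger
    {
        private readonly RecordingLoggerProvider _parent;
        private readonly string _categoryName;

        public RecordingLogger(RecordingLoggerProvider parent, string categoryName)
        {
            _parent = parent;
            _categoryName = categoryName;
        }

        // Scopes aren't needed for this sketch
        public IDisposable BeginScope<TState>(TState state) => null;

        public bool IsEnabled(LogLevel logLevel) => true;

        public void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
            Exception exception, Func<TState, Exception, string> formatter)
        {
            _parent.Records.Add(new LogRecord
            {
                Category = _categoryName,
                Level = logLevel.ToString(),
                Message = formatter(state, exception),
                ExceptionText = exception?.ToString()
            });
        }
    }
}

// The little model object being collected
public class LogRecord
{
    public string Category { get; set; }
    public string Level { get; set; }
    public string Message { get; set; }
    public string ExceptionText { get; set; }
}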

Fortunately enough, in the Storyteller Fixture code for the test harness I bootstrap the receiver and sender applications per test execution, so it’s really easy to just add the new StorytellerAspNetCoreLogger to both the Jasper applications and the Storyteller test engine:

var registry = new ReceiverApp();
registry.Services.AddSingleton<IMessageLogger>(_messageLogger);

var logger = new StorytellerAspNetCoreLogger(key);

// Tell Storyteller about the new logger so that it'll be
// rendered as part of Storyteller's results
Context.Reporting.Log(logger);

// This is bootstrapping a Jasper application through the 
// normal ASP.Net Core IWebHostBuilder
return JasperHost
    .CreateDefaultBuilder()
    .ConfigureLogging(x =>
    {
        x.SetMinimumLevel(LogLevel.Debug);
        x.AddDebug();
        x.AddConsole();
        
        // Add the logger to the new Jasper app
        // being built up
        x.AddProvider(logger);
    })

    .UseJasper(registry)
    .StartJasper();

And voila, the logging information is now part of the test results in a useful way so I can see a lot more information about what’s happening during the test execution.

It sucks that my code is throwing exceptions instead of just working, but at least I can see what the hell is going wrong now.

Get the debugger going quickly

To be honest, the friction of getting Storyteller tests running under a debugger has always been a drawback to Storyteller — especially compared to how fast that workflow is with tools like xUnit.Net that integrate seamlessly into your IDE. You’ve always been able to just attach your debugger to the running Storyteller process, but I’ve always found that to be clumsy and slow — especially when you’re trying to quickly cycle between attempted fixes and re-running the tests.

I made some attempts in Storyteller 5 to improve the situation (after we gave up on building a dotnet test adapter because that model is bonkers), but that still takes some set up time to make it work and even I have to always run to the documentation to remember how to do it. Sometime this weekend the solution for a quick xUnit.Net execution wrapper around Storyteller popped into my head and it honestly took about 15 minutes flat to get things working so that I could kick off individual Storyteller specifications from xUnit.Net as shown below:

(Screenshot: running Storyteller specifications from within xUnit.Net)

Maybe that’s not super exciting, but the end result is that I can rerun a specification after making changes with or without debugging with nothing but a simple keyboard shortcut in the IDE. That’s a dramatically faster feedback cycle than what I had to begin with.

Implementation wise, I just took advantage of xUnit.Net’s [MemberData] feature for parameterized tests and Storyteller’s StorytellerRunner class that was built to allow users to run specifications from their own code. After adding a new xUnit.Net test project and referencing the original Storyteller specification project named “StorytellerSpecs,” I added the code file shown below in its entirety:

// This only exists as a hook to dispose the static
// StorytellerRunner that is hosting the underlying
// system under test at the end of all the spec
// executions
public class StorytellerFixture : IDisposable
{
    public void Dispose()
    {
        Runner.SpecRunner.Dispose();
    }
}

public class Runner : IClassFixture<StorytellerFixture>
{
    internal static readonly StoryTeller.StorytellerRunner SpecRunner;

    static Runner()
    {
        // I'll admit this is ugly, but this establishes where the specification
        // files live in the real StorytellerSpecs project
        var directory = AppContext.BaseDirectory
            .ParentDirectory()
            .ParentDirectory()
            .ParentDirectory()
            .ParentDirectory()
            .AppendPath("StorytellerSpecs")
            .AppendPath("Specs");

        SpecRunner = new StoryTeller.StorytellerRunner(new SpecSystem(), directory);
    }

    // Discover all the known Storyteller specifications
    public static IEnumerable<object[]> GetFiles()
    {
        var specifications = SpecRunner.Hierarchy.Specifications.GetAll();
        return specifications.Select(x => new object[] {x.path}).ToArray();
    }

    // Use a touch of xUnit.Net magic to be able to kick off and
    // run any Storyteller specification through xUnit
    [Theory]
    [MemberData(nameof(GetFiles))]
    public void run_specification(string path)
    {
        var results = SpecRunner.Run(path);
        if (!results.Counts.WasSuccessful())
        {
            SpecRunner.OpenResultsInBrowser();
            throw new Exception(results.Counts.ToString());
        }
    }
}

And that’s that. Something I’ve wanted to have for ages and failed to build, done in 15 minutes because I happened to remember something similar we’d done at work and realized how absurdly easy xUnit.Net made this effort.

Summary

  • Sometimes it’s worthwhile to take a step back from trying to solve a problem through debugging and invest in better instrumentation or write some automation scripts to make the debugging cycles faster rather than just trying to force your way through the solution
  • Inspiration happens at random times
  • Listen to that niggling voice in your head sometimes that’s telling you that you should be doing things differently in your code or tests
  • Using a modular architecture that’s composed by an IoC container the way that ASP.Net Core does can sometimes be advantageous in integration testing scenarios. Case in point is how easy it was for me to toss in an all new logging provider that captured the log information directly into the test results for easier test failure resolution

And when I get around to it, these little Storyteller improvements will end up in Storyteller itself.

Jasper meets Azure Service Bus and gets better with Rabbit MQ

This is just a super quick update on Jasper so I can prove to the world that it’s actually moving forward. I’ll hopefully have a lot more on Jasper in the next couple weeks when I push the next big alpha.

I posted a new release of Jasper last week (v0.9.8.6) that has some better transport support for using Jasper commands with real messaging infrastructure:

I’d definitely love any feedback on the configuration of either transport option. I’m aiming to allow users to have access to every last bit of advanced features that either transport exposes, while also enabling users to quickly swap out transport mechanisms between executing locally, on test, and in production as necessary.

 

What’s Next

I really want to get to a 1.0 release soon because this has dragged on so much longer than I’d hoped. Right now I’m focused on improving the message persistence options and adding automatic message idempotency rules.