Moving Storyteller to the CoreCLR and going Cross Platform

This is half me thinking out loud and half experience report on the new .Net world order. If you want to know more about what Storyteller is, there’s an online webinar here or my blog post about the Storyteller 3 reboot and vision.

Storyteller 3 is an OSS acceptance test automation tool that we use at work for executable specifications and end to end regression testing. Storyteller doesn’t have a huge number of users, but the early feedback from the community has been mostly positive, and it gets plenty of pull requests that have helped quite a bit with usability. Now that my Marten work is settling down, I’ve been able to start concentrating on Storyteller again.

My current focus for the moment is making Storyteller work on the CoreCLR as a precursor to being truly cross platform. Going a little farther than that, I’m proposing some changes to its architecture that I think will make it relatively painless to use the existing user interface and test runners with completely different, underlying test engines (I’m definitely thinking about a Node.js based runner and doing a port of the testing engine to Scala or maybe even Swift or Kotlin way down the road as a learning exercise).

My first step has been to chip away at Storyteller’s codebase by slowly replacing dependencies that aren’t supported on the CoreCLR (this work is in the project.json branch on Github):

| Current State | Proposed End State |
| --- | --- |
| Targets .Net 4.6 | Targets .Net 4.6 and the CoreCLR |
| Self-hosted w/ Nowin | Self-hosted with Kestrel |
| FubuMVC for the web application | Raw ASP.Net Core middleware |
| Fleck for web sockets support | Kestrel/ASP.Net Core for Websockets |
| Tests execute in a separate AppDomain with all communication done via sending Json messages through .Net Remoting | Tests will execute in a separate process, and the communication between processes will all be done with sockets |
| FubuCore for the command line parsing | Oakton for the command line parsing |
| Uses Fixie for unit testing | Uses xUnit for unit testing |
| RhinoMocks for mocking | NSubstitute for mocking |
| Csproj/MSBuild for compiling, Paket for Nuget management | The dotnet CLI for CI builds and all Nuget management, project.json for all the projects |
| A single Nuget for the testing engine library and the test running/documentation generation executable | A nuget for the .Net testing engine library, a second one for the command line tooling for specification running and editing, and a third nuget for the documentation generation |
| No Visual Studio or VS Code integration | A fourth nuget for integrating Storyteller with dotnet test, and possibly a VS Code plugin? |

Some thoughts on the work so far:

  • Kestrel and the bits of ASP.Net Core I’m using have been pretty easy to get up and going. The Websockets support doesn’t feel terribly discoverable, but it was easy to find good enough examples and get it going (see the hosting sketch after this list). I was a little irritated with the ASP.Net team for effectively ditching the community-driven OWIN specification (yes, I know that ASP.Net Core supports OWIN, but it’s an emulation) in favor of their own middleware signature. However, I think that what they did is probably going to be much more discoverable and usable for the average user. I will miss referring to OWIN as the “mystery meat” API.
  • I actually like the new dotnet CLI and I’m looking forward to it stabilizing a bit. I think that it does a lot to improve the .Net development experience. It’s an upside down world when an alt.net founder like me is defending a new Microsoft tool that isn’t universally popular with more mainstream .Net folks.
  • I still like Fixie and I hope that project continues to move forward, but xUnit is the only game in town for the dotnet CLI and CoreCLR.
  • Converting the projects to the new project.json format was relatively harmless compared to the nightmare I had doing the same with StructureMap, but I’m not yet targeting the CoreCLR.
  • I’ve always been gun shy about attempting any kind of Visual Studio.Net integration, but from a cursory look around xUnit’s dotnet runner code, I’m thinking that a “dotnet test” adapter for Storyteller is very feasible.
  • The new Storyteller specification editing user interface is a React.js-based SPA. I’m thinking that architecture should make it fairly simple to build a VS Code extension for Storyteller.
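To make the hosting discussion in the first bullet a little more concrete, here’s a minimal sketch of self-hosting Kestrel with a raw ASP.Net Core middleware function and the built-in websocket support. This is my own illustration against the hosting APIs rather than Storyteller’s actual code; the `HandleSocket` helper and the port number are invented for the example.

```csharp
// A minimal sketch, not Storyteller's actual code: self-hosting Kestrel with
// a raw ASP.Net Core middleware function and the built-in websocket support.
// The HandleSocket helper and the port are invented for the example.
using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseUrls("http://localhost:5000")
            .Configure(app =>
            {
                app.UseWebSockets();

                // The "new" middleware signature: an HttpContext plus a
                // Func<Task> pointing at the next middleware in the chain
                app.Use(async (context, next) =>
                {
                    if (context.WebSockets.IsWebSocketRequest)
                    {
                        var socket = await context.WebSockets.AcceptWebSocketAsync();
                        await HandleSocket(socket);
                    }
                    else
                    {
                        await next();
                    }
                });
            })
            .Build();

        host.Run();
    }

    private static async Task HandleSocket(WebSocket socket)
    {
        var buffer = new byte[4096];
        while (socket.State == WebSocketState.Open)
        {
            var result = await socket.ReceiveAsync(
                new ArraySegment<byte>(buffer), CancellationToken.None);

            if (result.MessageType == WebSocketMessageType.Close) break;

            // The UI traffic is all json messages
            var json = Encoding.UTF8.GetString(buffer, 0, result.Count);
            // ...dispatch the incoming message...
        }
    }
}
```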


The Killer Problem: AppDomains are Gone (for now)

Storyteller, like most tools for automating tests against .Net, relies on AppDomains to isolate the application under test from the test harness, so that you can happily rebuild your application and rerun tests without having to completely drop and restart the testing tool. Other than .Net Remoting not being the most developer-friendly thing in the world, that’s worked out fairly well in Storyteller 3 (it had been a mess in Storyteller 1 & 2).
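For anyone who hasn’t leaned on this trick before, the classic pattern looks roughly like the sketch below. This is a generic illustration of AppDomain isolation, not Storyteller’s actual internals; the `TestProxy` type and the path are invented for the example.

```csharp
// A generic illustration of the AppDomain isolation trick, not Storyteller's
// internals. TestProxy and the ApplicationBase path are invented examples.
using System;

// Anything crossing the AppDomain boundary must be a MarshalByRefObject so
// that calls go through a .Net Remoting proxy
public class TestProxy : MarshalByRefObject
{
    public string Run(string specName)
    {
        // Executes inside the child AppDomain, against the app under test
        return "results for " + specName;
    }
}

public static class Runner
{
    public static void Main()
    {
        var setup = new AppDomainSetup
        {
            // Point the child domain at the build output of the app under test
            ApplicationBase = @"c:\code\MyApp\bin\Debug"
        };

        var domain = AppDomain.CreateDomain("storyteller-tests", null, setup);

        // CreateInstanceAndUnwrap hands back a Remoting proxy; every call on
        // it crosses the AppDomain boundary
        var proxy = (TestProxy)domain.CreateInstanceAndUnwrap(
            typeof(TestProxy).Assembly.FullName,
            typeof(TestProxy).FullName);

        Console.WriteLine(proxy.Run("some specification"));

        // The payoff: unload the domain so the user can rebuild and rerun
        // without restarting the testing tool
        AppDomain.Unload(domain);
    }
}
```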

There’s just one little problem: AppDomains and Remoting won’t be in the CoreCLR until at least next year (and I’m not wanting to count on them coming back). It would be perfect if you were able to unload the new AssemblyLoadContext, but as far as I know that’s not happening any time soon.

At this point, I’m thinking that Storyteller will work by running tests in a completely separate process to be launched and shut down by the Storyteller test running executable. To make that work, users will have to make their Storyteller specification project be an executable that bootstraps their system under test and then pass an ISystem object and the raw command line parameters into some kind of Storyteller runner.
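In rough terms, the user’s specification project might end up looking something like the sketch below. ISystem is Storyteller’s real extension point today, but the `StorytellerAgent` class, its `Run()` method, and `MyAppSystem` are hypothetical stand-ins for the “some kind of Storyteller runner” described above.

```csharp
// A sketch of the proposed bootstrapping. ISystem is Storyteller's real
// extension point, but StorytellerAgent/Run() and MyAppSystem are
// hypothetical stand-ins for the runner described above.
public class Program
{
    public static int Main(string[] args)
    {
        // Bootstrap the system under test however your application requires
        var system = new MyAppSystem(); // your ISystem implementation

        // Hand the ISystem and the raw command line arguments to the runner,
        // which connects back to the launching process and executes specs
        return StorytellerAgent.Run(system, args);
    }
}
```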

I’ve been experimenting with using raw sockets for the cross process communication and so far, so good. I’m just shooting json strings back and forth. I thought about using HTTP between the processes, but I came down to just feeling like that would be too heavy. I also considered using our LightningQueues project in its “ZeroMQ” mode, but again, I opted for lighter weight. The other advantage for me is that the “dotnet test” adapter communication is done by json over sockets as well.
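The transport itself doesn’t need to be anything fancier than the sketch below: the launching process listens on a local port, the test process connects back, and both sides write newline-delimited json strings. The `SocketTransport` class and the message handling here are invented for illustration, not Storyteller’s actual wire format.

```csharp
// A sketch of the socket transport idea: the launching process listens on a
// local port, the test process connects back, and both sides shoot
// newline-delimited json strings. SocketTransport is invented for illustration.
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

public static class SocketTransport
{
    // In the controller (launching) process: accept the agent's connection
    // and read json messages off the wire, one per line
    public static async Task ListenAsync(int port)
    {
        var listener = new TcpListener(IPAddress.Loopback, port);
        listener.Start();

        using (var client = await listener.AcceptTcpClientAsync())
        using (var reader = new StreamReader(client.GetStream(), Encoding.UTF8))
        {
            string json;
            while ((json = await reader.ReadLineAsync()) != null)
            {
                // ...deserialize and dispatch the incoming message...
                Console.WriteLine("received: " + json);
            }
        }
    }

    // In the test agent process: connect back to the controller and send a
    // json message
    public static async Task SendAsync(int port, string json)
    {
        using (var client = new TcpClient())
        {
            await client.ConnectAsync("127.0.0.1", port);

            using (var writer = new StreamWriter(client.GetStream(), Encoding.UTF8))
            {
                await writer.WriteLineAsync(json);
                await writer.FlushAsync();
            }
        }
    }
}
```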

I think this strategy of running separate processes would make Storyteller a little more complicated to set up compared to the existing “just make a class library and Storyteller will find your custom ISystem if one exists” strategy. My big hope is that the combination of depending on a separate command line launched process and shooting json across sockets will make it much easier to bring on alternative test running engines that would be usable with the existing Storyteller user interface tooling.


15 thoughts on “Moving Storyteller to the CoreCLR and going Cross Platform”

  1. Can you give more detail about “xUnit is the only game in town for the dotnet CLI and CoreCLR”? I’m using NUnit for Noda Time, using the beta dotnet-test-nunit and it’s fine from the command line, although I’m having a pretty ropy time running tests from VS (both CodeRush and Test Explorer).

    1. I think I stand corrected. I didn’t know that NUnit had any CoreCLR or dotnet cli support yet. I’m having decent luck using the very latest preview of TestDriven.Net to run and debug into unit tests.

      1. That tends to be the way of things – look in month X and there’s no support for Core, in month X+1 it’s in beta, and in X+2 it’s released. Frustrating in terms of evaluating the best option for any particular dimension, but a good thing in general 🙂

    1. It’s not a game changing library by any means. Oakton is just a somewhat improved, CoreCLR-ized version of the command line parsing I’ve used for years from FubuCore.

  2. ReSharper doesn’t work with .net core tests yet and the VS runner is barely usable, but dotnet test does work with the runner.

    Unfortunately Microsoft gives very little guidance on their “test” folder pattern too, but what we really need is a test framework adapter that works across all 4/5 frameworks! Or a few .net framework attributes and Assert extensions that can be plugged into.

    1. TestDriven.Net *is* working against the CoreCLR/project.json project files, and that’s enough to make me productive. If I didn’t have that, I wouldn’t be all that enthusiastic about working with any of the new stuff.

  3. > I thought about using HTTP between the processes, but I came down to just feeling like that would be too heavy.

    Once you want to start doing more complex communication between the client and server, such as error handling and recovery, passing metadata (like the version numbers of the server and client), etc., I think you’re going to regret that decision.

    HTTP isn’t really heavy; there’s just a tiny bit of header parsing “overhead” for each request and response. I say it’s worth it by a whole lot in the long run.
