Marten 1.1 Release Notes

Marten 1.1 was released just now (as in, hold your horses until Nuget gets done indexing it) with an assortment of bug fixes, performance & reliability improvements, and a couple of new convenience methods. As our teams have used Marten more at work, we’ve also had to make some adjustments for running Marten under reduced Postgresql security privileges and with the “AutoCreateSchemaObjects == None” mode. Finally, we had to add a couple new public members to existing API’s, so SemVer rules mean this had to be a minor point bump.

So what’s new or different? You can find the entire 1.1 issue and pull request list in GitHub. The highlights are described below:

Distinct() Support in Linq

From a pull request by John Campion, Marten now supports the Linq Distinct() keyword:

public void use_distinct(IQuerySession session)
{
    var surnames = session.Query<User>()
        .Select(x => x.LastName)
        .Distinct().ToList();
}

Better Connection and Transaction Hygiene

I’m a little embarrassed by this one, but at least we caught it before it did too much harm. Marten had been too aggressive about starting transactions in sessions, which had the effect of making Npgsql send extraneous ROLLBACK; messages to Postgresql to close out the empty transactions. In some failure cases, our team at work was seeing this cause a connection to hang. We made two fixes for this behavior:

First off, if IDocumentSession.SaveChanges() or SaveChangesAsync() is called when there are no outstanding changes queued up, Marten does absolutely nothing. No connection opened, no transaction started, just nothing.

Secondly, Marten now starts transactions lazily within an IDocumentSession. So instead of starting a transaction the first time a session opens a connection to Postgresql, Marten defers that until SaveChanges() or SaveChangesAsync() is called.

public void lazy_tx(IDocumentSession session)
{
    // Executing this query will *not* start
    // a new transaction
    var users = session.Query<User>()
        .Where(x => x.Internal)
        .ToList();

    session.Store(new User {UserName = "lebron"});

    // This starts a transaction against the open
    // connection before doing any writes
    session.SaveChanges();
}

Data Migration Improvements

From our work on moving document storage from RavenDb to Marten (and from other users doing similar migrations), we’ve bumped into a little bit of friction in Marten: the bulk inserts in either of the non-default modes left out the last modified data. That impacts either of these options:

public void bulk_inserts(IDocumentStore store, Target[] documents)
{
    store.BulkInsert(documents, BulkInsertMode.IgnoreDuplicates);

    // or

    store.BulkInsert(documents, BulkInsertMode.OverwriteExisting);
}

To make it easier to migrate data into document types that use a Hilo sequence for identity assignment, we added a convenience method to establish a new “floor” in the sequence to avoid conflicting with the existing data being brought over from another system.

public void reset_hilo(IDocumentStore store)
{
    // This resets the Hilo state in the database
    // for the IntDoc document type so that
    // all id's assigned will be greater than the floor
    // value (the floor value here is just an example)
    store.Advanced.ResetHiloSequenceFloor<IntDoc>(3000);
}

Do note that it’s possible and even likely that there will be gaps in the id sequence in the database when you do this.








An Experience Report of Moving a Complicated Codebase to the CoreCLR

TL;DR – I like the CoreCLR, project.json project system, and the new dotnet CLI so far, but there are a lot of API differences that could easily burn you when you try to port existing .Net code to the CoreCLR.

As I wrote about a couple weeks ago, I’ve been working to port my Storyteller project to the CoreCLR en route to it being cross platform and generally modernized. As of earlier this week I think I can declare that it’s (mostly) running correctly in the new world order. Moreover, I’ve been able to dogfood Storyteller’s documentation generation feature on Mac OSX today without much trouble so far.

As semi-promised, here’s my experience report of moving an existing codebase over to targeting the CoreCLR, the usage of the project.json project system, and the new dotnet CLI.


Random Differences

  • AppDomain is gone, and you’ll have to use a combination of AppContext and DependencyContext to replace some of the information you’ve gotten from AppDomain.CurrentDomain about the running process. This is probably an opportunity for polyfills (a minimal sketch follows this list)
  • The Thread class is very different and a couple methods (Yield(), Abort()) were removed. This will probably force me to eventually drop down to P/Invoke in Storyteller
  • A lot of convenience methods that were probably just syntactic sugar anyway have been removed. I’ve found differences with Xml support and Streams. Again, I’ve gotten around this by adding extension methods to the Netstandard/CoreCLR code to add some of these things back in
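To make that first bullet a little more concrete, here’s the kind of polyfill I have in mind. This is just a minimal sketch with a made up class name, not anything lifted from Storyteller’s codebase:

using System;

public static class RunningProcess
{
    // On the CoreCLR, AppContext.BaseDirectory stands in for the old
    // AppDomain.CurrentDomain.BaseDirectory
#if NET45
    public static string BaseDirectory => AppDomain.CurrentDomain.BaseDirectory;
#else
    public static string BaseDirectory => AppContext.BaseDirectory;
#endif
}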


Project.json May be Just a Flash in the Pan, but it’s a Good One

Having one tiny file that controls how a library is compiled, its Nuget dependencies, and how that library is later packed up into a Nuget package has been a huge win. Better yet, I really appreciate how the project.json system handles transitive dependencies so you don’t have to do so much bookkeeping within your downstream projects. I think at this point I even prefer the project.json + dotnet restore combination over the Nuget workflow we have been using with Paket (which I still think was much better than out of the box Nuget).

I’m really enjoying having the wildcard file inclusions in project.json so you don’t constantly have to deal with the aggravation of merging the Xml-based csproj files. It’s a little bit embarrassing that it’s taken Microsoft so long to address that problem, but I’ll take it now.

I really hope that the new, trimmed down csproj file format is as usable as project.json. Honestly, assuming that their promised automatic conversion works as advertised, I’d recommend going to project.json as an interim solution rather than waiting.


I Love the Dotnet CLI

I think the new dotnet CLI is going to be a huge win for .Net development and it’s maybe my favorite part of .Net’s new world order. I love being able to so quickly restore packages, build, run tests, and even pack up Nuget files without having to invest time in writing out build scripts to piece it all together.

I’ve long been a fan of using Rake for automating build scripts and I’ve resisted the calls to move to an arbitrarily different Make clone. With some of my simpler OSS projects, I’m completely forgoing build scripts in favor of just depending on the dotnet CLI commands. For example, I have a small project called Oakton using the dotnet CLI, and its entire CI build script is just:

rmdir artifacts
dotnet restore src
dotnet test src/Oakton.Testing
dotnet pack src/Oakton --output artifacts --version-suffix %build.number%

In Storyteller itself, I removed the Rake script altogether in favor of a small shell script that delegates to both NPM and dotnet to do everything that needs to be done.

I’m also a fan of the “dotnet test” command, especially when you want to quickly run the tests for just one .Net framework version. I don’t know if this is the test adapter or the CoreCLR itself being highly optimized, but I’ve been seeing a dramatic improvement in test execution time since switching over to the dotnet CLI. In Marten I think it cut the test execution time of the main testing suite down by 60-70% somehow.

The best source I’ve found on the dotnet CLI has been Scott Hanselman’s blog posts on DotNetCore.


AppDomain No Longer Exists (For Now)

AppDomain’s getting ripped out of the CoreCLR (yes, I know they’re supposed to come back in Netstandard 2.0, but who knows when that’ll be) was the single biggest problem I had moving Storyteller over to the CoreCLR. I outlined the biggest problem in a previous post on how testing tools generally use AppDomain’s to isolate the system under test from the test harness itself.

I ended up having Storyteller spawn a separate process to run the system under test in a way so that users can rebuild that system without having to first shut down the Storyteller specification editor tool. The first step was to replace the little bit of Remoting I had been using between AppDomain’s with a different communication scheme that just shot JSON back and forth over sockets. Fortunately, I had already been doing something very similar through a remoting proxy, so it wasn’t that bad of a change.

The next step was to change Storyteller testing projects from class libraries to executables that could be invoked to start up the system under test and listen on a supplied port for the JSON messages described in the previous paragraph. This was a lot of work, but it might end up being more usable. Instead of depending on something else to have pre-compiled the system under test, Storyteller can start up the system under test with a spawned call to “dotnet run” that does any necessary compilation for you. It also makes it pretty easy to direct Storyteller to run the system under test under different .Net versions.
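For a rough idea of what that spawning looks like, here’s a minimal sketch. The project path and the port flag are made up for illustration, and this is not Storyteller’s actual launching code:

using System.Diagnostics;

public static class SystemUnderTestLauncher
{
    public static Process Start(string projectDirectory, int port)
    {
        // "dotnet run" compiles the system under test if necessary and then runs it;
        // the --port argument is a hypothetical way of telling it where to listen
        var startInfo = new ProcessStartInfo
        {
            FileName = "dotnet",
            Arguments = "run -- --port " + port,
            WorkingDirectory = projectDirectory,
            UseShellExecute = false
        };

        return Process.Start(startInfo);
    }
}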

Of course, the old System.Diagnostics.Process class behaves differently in the CoreCLR (and across platforms too), and that’s still causing me some grief.


Reflection Got Scrambled in the CoreCLR

So, that sucked…

Infrastructure and testing tools like Storyteller frequently need to use a lot of reflection. Unfortunately, the System.Reflection namespace in the CoreCLR has a very different API than classic .Net, and that has consistently been a cause of friction as I’ve ported code to the CoreCLR. The challenge is even worse if you’re trying to target both classic .Net and the CoreCLR.

Here’s an example, in classic .Net I can check whether a Type is an enumeration type with “type.IsEnum.” In the CoreCLR, it’s “type.GetTypeInfo().IsEnum.” Not that big a change, but basically anything you need to do against a Type is now on the paired TypeInfo and you now have to bounce through Type.GetTypeInfo().

One way or another, if you want to multi-target both CoreCLR and .Net classic, you’ll be picking up the idea of “polyfills” you see all over the Javascript world. The “GetTypeInfo()” method doesn’t exist in .Net 4.6, so you might do:

public static Type GetTypeInfo(this Type type)
{
    return type;
}

to make your .Net 4.6 code look like the CoreCLR equivalent. Or in some cases, I’ve just built polyfill extension methods in the CoreCLR to make it look like the older .Net 4.6 API:

#if !NET45
        public static IEnumerable<Type> GetInterfaces(Type type)
        {
            return type.GetTypeInfo().GetInterfaces();
        }
#endif


Finally, you’re going to have to dip into conditional compilation fairly often like this sample from StructureMap’s codebase:

    public static class AssemblyLoader
    {
        public static Assembly ByName(string assemblyName)
        {
#if NET45
            // This method doesn't exist in the CoreCLR
            return Assembly.Load(assemblyName);
#else
            return Assembly.Load(new AssemblyName(assemblyName));
#endif
        }
    }

There are other random changes as well that I’ve bumped into, especially around generics and Assembly loading. Again, not something that the average codebase is going to get into, but if you try to do any kind of type scanning over the file system to auto-discover assemblies at runtime, expect some churn when you go to the CoreCLR.

If you’re thinking about porting some existing .Net code to the CoreCLR and it uses a lot of reflection, be aware that that’s potentially going to be a lot of work to convert over.

Definitely see Porting a .Net Framework Library to .Net Core from Michael Whelan.

Why you should give Marten a look before adopting an ORM like EF

Of course this post is displaying my bias, but hell, I wouldn’t have spent almost a year working so hard on Marten if I didn’t believe in the concept and its advantages over other .Net persistence tools. In no way am I actually criticizing Entity Framework or Dapper in this post, but I am trying to bluntly say that the days of ORM’s and even micro-ORM’s need to come to a close soon.

Last week I got to pull the trigger on Marten 1.0. This week I think it’s time to talk about why and when you would want to use Marten as your persistence strategy in .Net development. Most of the early interest and adopters of Marten have been disaffected users of RavenDb looking for a more reliable alternative who were already sold on the idea of a document database (and here’s me from a month ago on moving from RavenDb to Marten). That’s great and all, and it got our community jump started, but I think Marten deserves a longer look from developers who aren’t already using document databases like RavenDb or MongoDb today.

I’ve written about my thinking about how I choose persistence tools before. I’ve also spoken out many times about the never ending attempts to keep business logic code from being compromised by coupling it directly to your database storage. Finally, there’s the old discussions about “persistence ignorance” from years past.

Consistent with my beliefs and experience with software development, I feel like the document database approach for most of your application persistence makes the most sense for developer productivity, automated testing support, and ease of deploying application code changes. Yes, please do consider Marten as a replacement or alternative to MongoDb or RavenDb because it’s built on the more reliable and performant Postgresql foundation. But also, give Marten a serious look as an alternative to using the Entity Framework, NHibernate, or any of the plethora of micro-ORM projects out there.

Why you would choose Marten over EF

  • Much less mechanical work for mapping and configuration. Far less effort for schema management and database migrations.
  • Higher “reversibility” in your software architecture because it’s so much easier to change the structure of your documents with Marten versus an ORM of any kind.
  • Marten is built on Postgresql and gets to take advantage of their very advanced JSON features. Postgresql is highly performant, widely used, runs cross platform, and it’s FOSS.
  • Being able to use embedded Javascript functions inside of Postgresql gives Marten a potentially far better story for creating read side views of complex data than any kind of ORM access plus AutoMapper type tooling.
  • Marten is going to be able to handle hierarchical data and storing subclassed documents much easier than any ORM ever could. Moreover, Marten will easily outperform an ORM fetching any kind of aggregated, deep object because there’s just far less stuff that Marten has to do compared to an ORM like EF.
  • Marten allows you to mix and match persistence strategies. Document storage? Check. Event sourcing persistence? Got that too. Wanna use straight up SQL or even integrate directly with Dapper in Marten database transactions? We’ve got that one covered as well. Heck, if you wanted to do full text searching or do key/value storage, you can use Postgresql features to get that done all in one single bit of infrastructure.
  • Using value types is trivial in Marten. As long as your JSON serializer can handle your type, you’re good to go with no crazy gymnastics like ORM’s cause.
  • I think that Marten’s compiled query feature is hands down better than EF’s solution for reusing Linq query parsing, and that’s going to make a significant difference in application performance under load.

Why you would choose EF over Marten

  • You don’t have any experience with Postgresql
  • The full weight of Microsoft is behind EF whereas Marten is a community OSS project. My company is de facto sponsoring Marten and I have a track record for sticking with my OSS projects for a long time, but EF has a whole team behind it.
  • EF’s Linq support is going to be more complete than Marten’s — but we’re certainly trying on that front.
  • Entity Framework and relational databases are very well known and it’s easy to find folks who are familiar with them. Like it or not, that has to be part of your thinking about choosing any tool.
  • More documentation, samples, and a larger development community than any OSS tool in .Net could ever attract.
  • Your shop simply favors Sql Server or uses cloud hosting providers that only support Sql Server. We’re frequently getting the feedback from developers that they’d like to use Marten but that their database teams won’t allow Postgresql. I think that there will eventually be a version of Marten on Sql Server, but that’s dependent upon Sql Server getting better JSON support than what’s coming in MSSQL 2016.
  • You can still use straight up SQL to access your Entity Framework persisted data if that’s valuable. You can do that with Marten as well, but the Postgresql JSON operators aren’t the most approachable thing in the world.


About ORM’s

A decade ago I strongly espoused Object Relational Mapping (ORM) tools like NHibernate as the “best” way to build our application code in a way that made sense for the application logic while still being able to persist state to a relational database. Great, and that was arguably better than the morass of purely procedural code and wretched stored procedure data access that came before. After using ORM’s for five years, then another five years of predominantly using document databases, I don’t ever want to go back to ORM’s.

Why? Because ORM’s are a giant mess to use if your object model deviates much from the structure that makes sense in the database — and it usually does. I’m sick of wasting time trying to remember to make every member virtual for dynamic proxies, sick of having to worry about lazy vs eager loading, and sick of having to duplicate so much logical work to keep the application code synchronized with the database schema. I absolutely prefer the productivity we get from document database approaches where we just store a JSON representation of our document objects with minimal work for mapping.

While the classic analogy for ORM’s is Ted Neward’s seminal blog post on ORM’s are the Vietnam of Computer Science, the analogy I would use is that ORM’s are like Prius’s. I think they were only an overly complicated, intermediate solution until we came up with better approaches. Much the way that hybrid cars are a stepping stone to fuel cell or pure electric cars and Obamacare should be a temporary measure until we get our acts together and adopt a rational single payer healthcare system, ORM’s are just a less bad idea than what came before.



I helped write the EF Vote of No Confidence once upon a time, and while I to this day think that the initial version of the Entity Framework should have never been released, I don’t actually have any issue with the current versions of EF and how it implements object relational mapping. I’m just objecting to the whole concept of ORM’s now;)


What about Micro-ORM’s?

I occasionally see one of the main developers on Dapper pooh-pooh’ing IoC containers like StructureMap.* That’s okay though because I have the exact same “meh” attitude toward micro-ORM’s. I think they couple your application way too tightly to your database structure and they seem to let database concerns bleed into your application code more than I’d prefer. Again, the problem with ORM’s is the “R” more than how the “M” is done.

It’s a moot point though, because you can happily use Dapper in conjunction with Marten by just calling Dapper methods on the exposed connection property of Marten sessions — which was done explicitly to enable this interaction.
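Here’s roughly what that looks like. The SQL and the table name below are illustrative (Marten keeps each document type in its own table with a JSON “data” column), so treat this as a sketch rather than copy and paste code:

using System.Collections.Generic;
using System.Linq;
using Dapper;
using Marten;

public static class UserQueries
{
    public static IList<string> LoadUserNames(IDocumentSession session)
    {
        // Dapper's extension methods run against the Npgsql connection
        // that the Marten session already exposes
        return session.Connection
            .Query<string>("select data ->> 'UserName' from mt_doc_user")
            .ToList();
    }
}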



* I don’t feel strongly enough to even argue the pro-IoC container side of things. I’ve thought about kicking out a blog post titled “A tepid defense of IoC containers.” My own usage of StructureMap is pretty simple anyway.

Storyteller 3.0 Official Release — and on to 4.0

I was able to finally cut an official Storyteller 3.0 release to Nuget at the end of last week with a few fixes. Storyteller is a tool for writing and running executable specifications (BDD, if you absolutely have to call it that), end to end automated testing, and a separate function for generating technical documentation websites that I’ve used for StructureMap, Marten, and Storyteller itself.

Some links with a whole lot more information:

I’ve done two conference talks about Storyteller in the last year, but neither of the videos was ever posted. I’ve had rotten luck with this lately :( I *might* be doing a talk about the building of the Storyteller client with React.js at CodeMash.


Onward to 4.0!

With Marten settling down after its 1.0 release, I’ve been able to devote much more time to Storyteller and the 4.0 release is well underway. While there are worlds of little GitHub issues to knock down, the big themes for 4.0 are to:

  1. Get Storyteller running cross platform using the CoreCLR. I blogged about my plans for pulling this off a couple weeks ago, and as of last week I’m able to run Storyteller locally on Mac OSX. I’ll be writing a follow up experience report post soon, but the summary is “so, that sucked.”
  2. Fully embrace the dotnet CLI and create a dotnet test adapter for Storyteller to make it far easier for developers to run and debug Storyteller specifications
  3. Add a “step through” model of executing specifications when troubleshooting failing specs. This is partially done now, but there’s plenty to do on the UI client side before it’s really usable.
  4. Extend the reach of Storyteller by allowing users to author the specification language on the fly in the specification editor without having to first write C# code. My hope is that this feature will make Storyteller much more effective as a specifications tool. This had been a goal of the 3.0 release, but I never got to do it.
  5. Enhance the performance testing and diagnostic support in Storyteller today. Maybe the most popular feature of Storyteller is how it captures, correlates, and displays information about the system performance during spec execution. I’d like to take this farther with possible support for gathering benchmark data and allowing teams to express non-functional requirements about performance that would cause specifications to fail if they’re too slow.
  6. Potentially, be able to use the existing Storyteller specification editor and runner against completely different testing engines written for other languages or platforms

More coming this fall…

Marten 1.0 is Here!


Marten is an open source library hosted on GitHub that makes the rock solid Postgresql database usable within .Net systems as a fully fledged document database. On top of that, we also built event store functionality with support for read side projection generation. By adopting Marten and Postgresql on your project, you’ve got a one stop shop for document storage, event sourcing, classic relational database usage, and even key/value storage in one single database engine.

I was able to push the official Marten 1.0 release today to Nuget. As of now, I’m fully ready to stand behind Marten as production ready. We have some early adopters using Marten in production now, my own shop is well underway with an effort to migrate our largest application to Marten, and I know of a couple other teams developing against Marten today. Now that the big “1.0” version is out, we will be honoring semantic versioning rules from here on out for backwards compatibility. It’s almost a 100% certainty that there’s going to be another nuget release next week with some small additions and whatever bug fixes inevitably spill out of the 1.0 release.

This is the culmination of almost a solid year of effort from myself and many others. Marten is de facto sponsored as an OSS project by my employer and several of my colleagues have made contributions as well. I’ve said for quite a while now that Marten has been the most positive experience I’ve ever had developing an OSS project and I greatly appreciate the Marten community. By doing this project in public, I feel like my company has been able to get valuable feedback on usability, the necessary features we need to perform well, and a lot of critical bugs detected and fixed much faster than I think we could have done if we’d built this in private.

I’ll follow up this blog post and try to make the case for why and when a team should choose Marten over the other options for persistence. Marten has been driven from the very beginning as a replacement for RavenDb in our architecture at work, but I’ll also try to make the case for why your team could be more productive with the Marten/Postgresql combination than an ORM like EF or NHibernate and even micro-ORM’s. We do need to provide a lot more information and samples for using the event store functionality that I’ll try to blog out as we get that pulled together.

I’d like to thank…

  • My colleague Corey Kaylor for helping create the vision and doing so much work in our build automation. Thank Corey if you like having CoreCLR support or the compiled query feature
  • Nieve for doing a lot of work on Marten’s Linq support, among other things
  • Tim Cools for pushing the event store functionality and our identity map mechanics
  • Evgeniy Kulakov for getting the async functionality started
  • Daniel Marbach for doing so much work to help us improve Marten’s async support
  • Jens Pettersson for lots of feedback, suggestions, and help with Marten’s bulk insert story
  • Phillip Haydon for being our biggest advocate, providing a lot of guidance on Postgresql, and driving the design of many features
  • Khalid Abuhakmeh for doing the graphics on the website and helping with the event store
  • Brady Clifford and company for doing the heavy lifting to make Marten fit into our application architecture at work
  • A whole host of other contributors and folks who have raised or commented on issues for us. The level of community interest and feedback I’ve seen has been unprecedented to me in my 12 years of doing OSS work in .Net.


Some links




If you’re interested, or know somebody who would be, I am able to take on short term consulting projects for Marten as side work. 

Building Marten’s Async Daemon

A couple weeks ago I wrote a blog post on the new “Async Daemon” feature in Marten. This post is a bit that I cut out of that one, describing the challenges I faced and what I did to slide around the problems. For all the Marten users who have been asking me about writing their own subsystem to read and process events offline, you really want to read this post to understand why that’s much harder than you’d think and why you probably want to just help make the async daemon solid.

The first challenge for the async daemon was “knowing” when there are new events that need to be processed by async projections. When a projection runs, it needs to process the events in the same order that they were captured in. Since the async daemon was inevitably going to use some sort of polling (NOTIFY/LISTEN in Postgresql was not adequate by itself) to read events out of the event table, we needed a very efficient way to be able to page the event fetching without missing events.

We started Marten with the thought that we would try to accomplish that by having the event store enqueue the events in a rolling buffer table that some kind of offline process would poll and read, but we were talked out of that approach in discussions with a Postgresql consultant who was helping us at work. Moreover, as I worked through other use cases to rebuild projections from scratch or add new projections later, we realized that the rolling buffer table would never have worked for the async daemon.

We also experimented with using sequential Guid’s as the global identifier for events in the event store with the idea that we would be able to use that to key off of for the projections by always querying for “Id > [last event id encountered].” In my testing I was unable to get the sequential Guid algorithm to accurately order the event id’s, especially under a heavy parallel load.

In the end, we opted to make the event store table in Marten use a sequential long integer as its primary key, backed by a database SEQUENCE. That gave us a more reliable way to “know” which events were new for each individual projection. In testing I figured out pretty quickly that the async daemon was missing events when there were a lot of concurrent events streaming in, because event sequence id’s were being reserved by in flight transactions that had not yet committed. To counteract that problem, I ended up taking a two step approach:

  1. Limit the async daemon to only querying against events that were captured before some time threshold (3 seconds is the default) to avoid missing events that are still in flight
  2. When the async daemon fetches a new page of events, it checks that there are no gaps in the event sequence. If there are, it pauses a little bit and tries again until there are no gaps in the sequence or the subsequent fetch turns up the exact same data (leading the async daemon to believe that the missing events were rejected). A rough sketch of that gap check follows this list.
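To make the gap detection a little more concrete, here’s the idea in isolation. This is simplified and not the async daemon’s actual code:

using System.Collections.Generic;

public static class EventPaging
{
    // Returns true if the fetched page of sequence numbers skips over any value,
    // which would mean an in flight event has not been committed yet
    public static bool HasGaps(IReadOnlyList<long> pageOfSequenceNumbers, long lastEncountered)
    {
        var expected = lastEncountered + 1;
        foreach (var number in pageOfSequenceNumbers)
        {
            if (number != expected) return true;
            expected++;
        }

        return false;
    }
}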

Those two steps — as far as I can tell — have eliminated the problems I was seeing before about missing events in flight. It did completely ruin a family dinner at our favorite Thai restaurant when I couldn’t make myself stop thinking about how to slide around the problems in event ordering;)

The other killer problem was in trying to make the async daemon resilient in the face of potential connectivity problems and occasional projection failures without losing any results. I’ll try to blog about that in a later post.




Moving Storyteller to the CoreCLR and going Cross Platform

This is half me thinking out loud and half experience report on the new .Net world order. If you want to know more about what Storyteller is, there’s an online webinar here or my blog post about the Storyteller 3 reboot and vision.

Storyteller 3 is an OSS acceptance test automation tool that we use at work for executable specifications and end to end regression testing. Storyteller doesn’t have a huge number of users, but the early feedback from the community has been mostly positive and it gets plenty of pull requests that have helped quite a bit with usability. Now that my Marten work is settling down, I’ve been able to start concentrating on Storyteller again.

My current focus for the moment is making Storyteller work on the CoreCLR as a precursor to being truly cross platform. Going a little farther than that, I’m proposing some changes to its architecture that I think will make it relatively painless to use the existing user interface and test runners with completely different, underlying test engines (I’m definitely thinking about a Node.js based runner and doing a port of the testing engine to Scala or maybe even Swift or Kotlin way down the road as a learning exercise).

My first step has been to chip away at Storyteller’s codebase by slowly replacing dependencies that aren’t supported on the CoreCLR (this work is in the project.json branch on Github):

Current State → Proposed End State

  • Targets .Net 4.6 → Targets .Net 4.6 and the CoreCLR
  • Self-hosted w/ Nowin → Self-hosted with Kestrel
  • FubuMVC for the web application → Raw ASP.Net Core middleware
  • Fleck for web sockets support → Kestrel/ASP.Net Core for Websockets
  • Tests execute in a separate AppDomain with all communication done via sending Json messages through .Net Remoting → Tests will execute in a separate process, with all communication between processes done with sockets
  • FubuCore for the command line parsing → Oakton for the command line parsing
  • Fixie for unit testing → xUnit for unit testing
  • RhinoMocks for mocking → NSubstitute for mocking
  • Csproj/MSBuild for compiling, Paket for Nuget management → The dotnet CLI for CI builds and all Nuget management, project.json for all the projects
  • A single Nuget for the testing engine library and the test running/documentation generation executable → A nuget for the .Net testing engine library, a second for the command line tooling for specification running and editing, a third for documentation generation, and a fourth for integrating Storyteller with dotnet test
  • No Visual Studio or VS Code integration → A VS Code plugin?

Some thoughts on the work so far:

  • Kestrel and the bits of ASP.Net Core I’m using have been pretty easy to get up and going. The Websockets support doesn’t feel terribly discoverable, but it was easy to find good enough examples and get it going. I was a little irritated with the ASP.Net team for effectively ditching the community-driven OWIN specification (yes, I know that ASP.Net Core supports OWIN, but it’s an emulation) for their own middleware signature. However, I think that what they did do is probably going to be much more discoverable and usable for the average user. I will miss referring to OWIN as the “mystery meat” API. (A small comparison sketch of the two signatures follows this list.)
  • I actually like the new dotnet CLI and I’m looking forward to it stabilizing a bit. I think that it does a lot to improve the .Net development experience. It’s an upside down world when an OSS founder like me is defending a new Microsoft tool that isn’t universally popular with more mainstream .Net folks.
  • I still like Fixie and I hope that project continues to move forward, but xUnit is the only game in town for the dotnet CLI and CoreCLR.
  • Converting the projects to the new project.json format was relatively harmless compared to the nightmare I had doing the same with StructureMap, but I’m not yet targeting the CoreCLR.
  • I’ve always been gun shy about attempting any kind of Visual Studio.Net integration, but from a cursory look around xUnit’s dotnet runner code, I’m thinking that a “dotnet test” adapter for Storyteller is very feasible.
  • The new Storyteller specification editing user interface is a React.js-based SPA. I’m thinking that that architecture should make it fairly simple to build a VS Code extension for Storyteller.


The Killer Problem: AppDomain’s are Gone (for now)

Storyteller, like most tools for automating tests against .Net, relies on AppDomain’s to isolate the application under test from the test harness so that you can happily rebuild your application and rerun without having to completely drop and restart the testing tool. Great, and other than .Net Remoting not being the most developer-friendly thing in the world, that’s worked out fairly well in Storyteller 3 (it had been a mess in Storyteller 1 & 2).

There’s just one little problem: AppDomain’s and Remoting are gone from the CoreCLR until at least next year (and I’m not wanting to count on them coming back). It would be perfect if you were able to unload the new AssemblyLoadContext, but as far as I know that’s not happening any time soon.

At this point, I’m thinking that Storyteller will work by running tests in a completely separate process to be launched and shut down by the Storyteller test running executable. To make that work, users will have to make their Storyteller specification project be an executable that bootstraps their system under test and then pass an ISystem object and the raw command line parameters into some kind of Storyteller runner.

I’ve been experimenting with using raw sockets for the cross process communication and so far, so good. I’m just shooting json strings back and forth. I thought about using HTTP between the processes, but I came down to just feeling like that would be too heavy. I also considered using our LightningQueues project in its “ZeroMQ” mode, but again, I opted for lighter weight. The other advantage for me is that the “dotnet test” adapter communication is done by json over sockets as well.
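For flavor, sending a json message across a socket really is about this simple. This is a minimal sketch; the newline delimited framing is an assumption on my part, not Storyteller’s actual wire format:

using System.Net.Sockets;
using System.Text;

public static class JsonOverSockets
{
    public static void Send(string host, int port, string json)
    {
        using (var client = new TcpClient(host, port))
        using (var stream = client.GetStream())
        {
            // one json message per line keeps the framing dead simple
            var bytes = Encoding.UTF8.GetBytes(json + "\n");
            stream.Write(bytes, 0, bytes.Length);
        }
    }
}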

I think this strategy of running separate processes would make Storyteller a little more complicated to set up compared to the existing “just make a class library and Storyteller will find your custom ISystem if one exists” strategy.  My big hope is that the combination of depending on a separate command line launched process and shooting json across sockets will make it much easier to bring on alternative test running engines that would be usable with the existing Storyteller user interface tooling.