Hello Calavista

I’m excited to announce that today I’m joining Calavista here in Austin. I’ll be providing oversight and technical direction for full-stack, custom application development projects for our clients. I think I’ll mostly be working on the .Net platform, but I might get to be involved with other platforms here and there. After years of working remotely and mostly working on oddball, technical infrastructure tooling, I’m looking forward to simply getting back to project-based work again — and having some measure of say over the entire technical stack;)

Several folks have asked me what’s going to happen with the OSS projects I work on and if I’d be able to use any of them in this new role. I can tell you that I don’t plan to walk away from any of my active OSS projects, but I have no idea what the future holds or whether or not it will be appropriate to use many of them in my new role. Offhand, I think I’ll be using Alba and maybe Storyteller on the first round of clients. After that, who knows?

I do know that I’ll be using my OSS work as a way to learn some technologies that are new to me but likely to show up in client project work. Definitely look for Jasper to suddenly get plenty of Azure-friendly features and work much more tightly with MVC Core as part of that effort.



Retrospective on my OSS Career

Tomorrow is my last day at Willis Towers Watson (WTW) after 5+ years, and I felt like a bit of a retrospective was in order — partially to convince myself that I actually accomplished something in that time;) I did plenty of other things while I was there, but supporting and building OSS tools has been a major part of my official role.

It’s stupid long, but I think this is a decent timeline of the highlights and lowlights of my OSS career. I think I’m about to slow way down on OSS work after this year and in my new role, so I kind of want to reflect back a little bit on what I have done the past 15 years or so before turning the page.


I was in a miserable position as a non-coding architect at a Big IT shop, and I was scared to death that it meant my career was already over. I came up with a crazy scheme to build out an awesome new ORM for this new .Net thing as a way to prove to prospective employers that I really could write code. That work went nowhere, but I did actually manage to land a different job at ThoughtWorks.

From following the internal discussion lists there I learned about dependency injection and the very early PicoContainer IoC tool that had been built by some folks at TW. I realized that I could repurpose some of the code from the wreckage of my failed ORM project to be an IoC tool for .Net. Add some encouragement from TW colleagues, and StructureMap was born (and to this day, refuses to die).


The details aren’t important, but I went through some dark, then much happier but chaotic, personal times and work — especially OSS work — was an escape for me.

I made the big StructureMap 2.5 release, which Jimmy Bogard took to calling my Duke Nukem Forever release. It was supposed to be StructureMap’s “Python 3000” release, the one that fixed all the original structural problems and put it permanently on a stable base for the future.

Narrator’s Voice: Everybody including Jeremy hated the new StructureMap 2.5 configuration DSL, he immediately added better alternatives that weren’t documented, people yelled at him for years about the missing documentation, and StructureMap was certainly not “done.”

A couple of us at Dovetail started FubuMVC, an alternative web framework for ASP.Net, because we thought we were smarter than the real ASP.Net team working on MVC and had a different vision for how all of that should work.

We also rebooted my older Storyteller project, which I had worked on for years as improved tooling for the old FitNesse testing engine, and finally moved it toward being a full-blown replacement for FitNesse.



I did a big pre-compiler workshop at CodeMash 2012 that I thought went great and I was encouraged.

I joined WTW at the end of 2012 after my one and only (and last) startup experience turned sour. The corner of WTW where I work was at the time the biggest user of FubuMVC — the very large, very ambitious OSS web framework that was my own personal white whale for a long time. A couple of us believed that this would be a perfect opportunity to keep FubuMVC going as an OSS project because we genuinely believed that it would be successful enough that we would be able to make a living doing consulting for shops wanting to use it.

At the time, we were still building new features into FubuMVC that were almost immediately being put into real projects and it was a fun time.

Narrator’s Voice: It did not turn out the way Jeremy planned.


Some of my WTW colleagues and I did a big workshop at CodeMash 2013 on the big FubuMVC 1.0 release that flopped hard, and I failed to reach the obvious conclusion that the jig was up and it was time to move on. Having stayed up all night the night before reading A Memory of Light didn’t help either, but c’mon, we’d waited 20+ years to get to the end of that thing.

I poured a lot more energy that year into FubuMVC for command line tooling and a full-blown templating engine that made for some really cool conference talks, but didn’t really get used that much.

The big win for 2013 was building out an addon called FubuTransportation that used a lot of the basic infrastructure in FubuMVC to be a fairly robust service bus that still underpins quite a few systems at WTW today. That was my first real exposure to asynchronous messaging, and I take some significant pride in what we built.


I basically had to admit that FubuMVC had failed as an OSS project and I was admittedly kind of lost the rest of the year. I wrote a lot about why it failed and what I thought we had learned along the way. The biggest lesson to me was that I had never done a good job promoting, explaining, or documenting FubuMVC and its usage. Part of my theory here is that if we’d had more usage and visibility early, we could have more quickly identified and addressed usability issues. I swore that if I ever tried to do something like FubuMVC ever again that I’d do that part of the project much better.

It did give me a chance to swing back to StructureMap and finish the big 3.0 release that made some sweeping improvements to the internals, improved a serious performance problem that impacted our systems, killed off some old bugs, and fixed some pretty serious flaws. I genuinely believed that the 3.0 release would put it permanently on a stable base for the future.

Narrator’s Voice: Jeremy was wrong about this.

I also got to play around with an eventually abandoned project called pg-events that was meant to be a Node.js based event store backed by Postgresql. It didn’t go anywhere by itself, but it absolutely helped form the basis for what became the Marten event sourcing functionality that’s actually been a success.

Later that year we started seeing some significantly encouraging information about “Project K” that eventually became .Net Core. All of that made me and my main contributors much more bullish about .Net again, and in a fit of pure stubbornness, I started to envision how I would build a much better version of FubuMVC that took advantage of all the new stuff like Roslyn and fixed the technical flaws in FubuMVC. I started referring to that theoretical project as “Jasper” after my original hometown in Missouri.


My shop was in borderline insurrection over how our Storyteller integration testing was going. I gave a big talk outlining what I thought some of the challenges and problems were, including options to switch to SpecFlow or just use xUnit, but to my surprise, they chose a much improved Storyteller instead, which became the Storyteller 3.0 release.

I pretty well did a near ground-up rewrite of the testing engine focusing on performance and ease of use. For the user interface, I used this work as a chance to learn React.js, which we had just adopted at work as a replacement for Angular.js. I had a blast doing UI work for the first time in years and I’m genuinely pretty proud of how it all turned out. Storyteller 3 went into usage later that year and it’s mostly still going in our older projects that haven’t yet converted to .Net Core.

In late 2015, we knew that we needed to get our biggest system off of RavenDb before the next busy season. My boss at the time had a theory that we could exploit Postgresql’s new JSON support to act as a document database. I took on the work to go spike that out and see if we could build a library that could be swapped in for our existing RavenDb usage in that big application. The initial spiking went well, and off it went.

At one point my boss asked me what name I was using for the Postgresql-as-doc-db library, because he was concerned that I’d choose a bad name like “Jasper.Data” — which of course was exactly the project name I was temporarily using. I mumbled “no,” quickly googled what the natural predators of ravens are, and settled on the name “Marten.”

Because of the bitter taste that FubuMVC left behind, I swore that I would do a much better job at the softer things in an OSS project, and I tried to blog up a storm about the early work. The Marten concept seemed to resonate with quite a few folks and we had interest and early contributions almost right off the bat that did a lot to make the project go faster and be far more successful than I think it would have been otherwise.

I still wasn’t *completely* done with FubuMVC, and did a pretty large effort to consolidate all the remaining elements that we still used at work into a single library in the FubuMVC 3.0 release. I spent a lot of time streamlining the bootstrapping as well for quicker feedback cycles during integration testing. A lot of this work helped inform the internals of Jasper I’ll talk about later.

In late 2015 Kristian Hellang worked with me to make StructureMap work with the new ASP.Net Core DI abstractions and compliance specifications. While we were doing that, I also snuck in some work to overhaul StructureMap’s type scanning support based on years of user problems. With that work done, I pushed the StructureMap 4.0 release in the belief that I had now overhauled everything in StructureMap from the old 2.* architecture and that it was done for all time.

Narrator’s Voice: Jeremy was wrong. While the 4.* release was definitely an improvement, users still managed to find oddball problems and the performance definitely lagged behind newer IoC tools.


We used Marten in production, just in time for our busy season. It had about the expected level of issues pop up in its shakedown cruise, but I still felt pretty good about how it went. Adoption in the outside world steadily crept up and I got to do several podcasts that year about Marten.

Unfortunately, Marten caused quite a bit of conflict between myself and our centralized database team that ultimately contributed to me deciding to leave. I lost some enthusiasm for Marten because of this, and my activity within the Marten community declined because of it.


OSS wise, this year was going to be all about Jasper for me. FubuMVC was a web framework first that had messaging functionality bolted on later. Jasper, on the other hand, was going to be a much smaller tool to replace the older FubuMVC/FubuTransportation messaging in a way that would play nicely with the newer ASP.Net Core bits.

First though, I needed to clean my plate of all other outstanding work so I could concentrate on just Jasper:

  • Oakton was a bit of leftover fubu code we’d used for years for command line parsing that I converted to .Net Core and documented
  • Alba is also some leftover fubu code for declarative testing of HTTP handlers that I adapted for .Net Core, documented, and published to Nuget
  • Storyteller 4 moved Storyteller to ASP.Net Core. That turned out to be a huge chunk of work, but it was a great learning experience. I also added quite a few improvements for usage by non-developers that might not have paid off.
  • Storyteller 5 took a little better advantage of the new dotnet cli and made debugging specifications a lot easier
  • Marten 2.0 was a huge effort to reduce object allocations within Marten and improve runtime performance. It also cleaned up some internal trash and made it a lot easier going forward to add new types of database schema objects, which has definitely paid off

Finally, I got some time to bear down and start working on Jasper, which has really been my main passion project for the past 3-4 years. I had a bunch of new colleagues on our architecture team at work who were interested in Jasper, and I thought we made a huge amount of progress fast. I didn’t do as much work to publicize it as I did with Marten because I just didn’t have a good feeling about my company’s continued support for Jasper after what happened with Marten.


I decided I was absolutely fed up with supporting StructureMap and that it wasn’t viable to make the large scale changes it would take to fix the performance and other remaining issues. As an offramp for StructureMap users and as part of Jasper, I started the year by yanking some code out of Jasper into a new library I first called “BlueMilk” and now call Lamar, which is meant to be a smaller, much more performant replacement for StructureMap.

I’m also working full speed right now on Jasper with an expected 0.8 and 0.9 update coming in the next couple weeks. I don’t have a lot of interest yet, but I’m feeling very, very good about the technical direction, the usability, and the performance right now. I’m not 100% sure what WTW will do with it, but I’m committed to continuing with both Jasper and Lamar. I’m aiming for a 1.0 release of both by this fall.

And Beyond…

Several folks have asked me what will happen with Jasper or Marten when I start my new position next week. I honestly don’t know, but I fully intend to continue supporting Jasper, Marten, Storyteller, and Lamar. I don’t expect that I’ll have the bandwidth to write nearly as much OSS code as I have the past 5 years or so at WTW, but I wanted to slow down anyway.


What I’ve Learned

Heh, maybe not enough. I learned a lot of specific technical lessons and I’m a far better technical developer because of my OSS work. For more specifics though, I’d say:

  • Only do OSS work if you’re passionate about it and generally enjoy what you’re doing
  • My OSS work has absolutely had a positive impact on my career, just indirectly. My interview for my new position included a presentation on Marten. My previous position came about directly because of my OSS projects
  • There’s also an opportunity cost: you could have been learning something else valuable in the time you spent on OSS work
  • I’ve met a lot of cool people through my OSS work and I have relationships I wouldn’t have had otherwise
  • Make your documentation easy to update and easy to contribute to by other folks
  • Talk about what you’re doing early and often




Jasper meets RabbitMQ

For the most part, I’ve been focused on Jasper’s built-in socket-based and HTTP transports because they’re compatible with the infrastructure that our older, existing systems at work already use. For pretty easy-to-understand reasons, other folks in the company are very apprehensive about depending on something new that they’ve never heard of (even though its forebears have been running happily in production for 4-5 years, but I digress). To have an option using more industry-proven technology, I took a couple days and built out a new transport in Jasper that uses the well known RabbitMQ message broker to do the actual work of moving message data from service to service.

Getting Started with RabbitMQ within Jasper

There really isn’t that much to do. Assuming that you have RabbitMQ itself stood up (I run it locally on OS X for development, but hosting it in Docker is pretty easy too) and a Jasper application, you just need to install the Jasper.RabbitMQ Nuget into your application project and configure listeners or publishing rules via “rabbitmq” Uris as shown in the samples below.
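If you’re using the dotnet CLI, adding the extension is a single command (the package name here comes from the text above; adjust for your own project layout):

```shell
# Add the RabbitMQ transport extension to a Jasper application project
dotnet add package Jasper.RabbitMQ
```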

From there, you can direct Jasper to listen for messages from a RabbitMQ broker like this:

public class AppListeningToRabbitMQ : JasperRegistry
{
    public AppListeningToRabbitMQ()
    {
        // Port is optional if you're using the default RabbitMQ
        // port of 5672, but it's shown here for completeness.
        // The host and queue name here are illustrative
        Transports.ListenForMessagesFrom("rabbitmq://rabbitserver:5672/messages");
    }
}

For right now, this configuration implies that the messages received from this queue are coming from another Jasper application, or at least an application that is using Jasper’s idioms for envelope metadata through headers. I’ll discuss how to get around this restriction in a later section.

The Uri structure is specific to Jasper just to identify the connection to a certain RabbitMQ broker by host name, port number, and RabbitMQ specific information like queue names, exchanges, and exchange types.
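Since the scheme, host, port, and queue name all ride along in one string, the standard System.Uri class is a handy way to see how such a Uri breaks down. This is just an illustrative sketch; the “rabbitserver” host and “messages” queue below are made-up values, not anything Jasper ships with:

```csharp
using System;

class RabbitUriParts
{
    static void Main()
    {
        // A hypothetical "rabbitmq" Uri in the shape described above
        var uri = new Uri("rabbitmq://rabbitserver:5672/messages");

        Console.WriteLine(uri.Scheme);       // "rabbitmq" - tells Jasper which transport to delegate to
        Console.WriteLine(uri.Host);         // "rabbitserver" - the RabbitMQ broker's host name
        Console.WriteLine(uri.Port);         // 5672 - the broker's port
        Console.WriteLine(uri.Segments[^1]); // "messages" - the queue name
    }
}
```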

Likewise, message subscriptions or publishing rules are configured by the same Uri data, with an example publishing rule shown below (when Jasper sees the Uri scheme “rabbitmq,” it knows to delegate to the RabbitMQ transport internally):

public class AppPublishingToRabbitMQ : JasperRegistry
{
    public AppPublishingToRabbitMQ()
    {
        // Publish outgoing messages to this queue.
        // The host and queue name here are illustrative
        Publish.AllMessagesTo("rabbitmq://rabbitserver:5672/messages");
    }
}

Again, this simplistic example is assuming that there’s another Jasper application on the receiving side of the queue, or at least something that can understand the message metadata and serialized message data sent through the queue.

For the moment, I’ve just been testing with direct exchanges in RabbitMQ. It should be possible as-is to use fan-out exchanges with the current code, but I was unable to write a successful test locally proving that it really works. The usage of RabbitMQ right now assumes that Jasper is responsible for the message routing. I’m going to leave the work for a later day, but the “seams” are already in place where we could replace Jasper’s built-in message routing with RabbitMQ topic or header exchanges to offload the actual routing logic to RabbitMQ.

More things to know:

  • Jasper is using the RabbitMQ.Client library to interact with RabbitMQ. If you browse the Jasper documentation, you can see how Jasper.RabbitMQ exposes all the RabbitMQ.Client ConnectionFactory configuration so that you can use any necessary features or configuration for that library (like I don’t know, security?).
  • The RabbitMQ transport can be used either in a “fire and forget,” non-durable manner, or as a durable transport that’s still backed up by Jasper’s support for durable messaging and the outbox pattern I demonstrated in my last post that obviates the need for distributed transactions between your application database and RabbitMQ itself.
  • More below, but you can use the RabbitMQ transport to integrate with other applications that are not themselves Jasper applications or even .Net applications


What if Jasper isn’t on the other end?

I did a long, internal presentation on Jasper at work a couple weeks ago that I would charitably describe as a train wreck (Jasper itself did fine, but the presentation went soaring off the rails for me). One of the main axes of pushback about adopting Jasper was that it would lock us into the .Net platform. Besides the ability to just use HTTP services between applications, I tried to say that we’d still be able to do messaging between Jasper/.Net applications and, say, a Node.js application through RabbitMQ, with just a little bit of potential effort to translate the message metadata.

At the point of receiving a message from RabbitMQ or enqueuing a message to RabbitMQ, there’s a little bit of mapping that takes Jasper’s Envelope object, which contains the raw message and some tracking metadata, and maps it to the RabbitMQ.Client IBasicProperties model. If there’s a non-Jasper application on the other side of the queue, you may want to do some translation of the metadata using the IEnvelopeMapper interface or subclass the built in DefaultEnvelopeMapper as shown below:

public class CustomEnvelopeMapping : DefaultEnvelopeMapper
{
    public override Envelope ReadEnvelope(byte[] data, IBasicProperties props)
    {
        // Customize the mappings from RabbitMQ headers to
        // Jasper's Envelope values
        return base.ReadEnvelope(data, props);
    }

    public override void WriteFromEnvelope(Envelope envelope, IBasicProperties properties)
    {
        // Customize how Jasper Envelope objects are mapped to
        // the outgoing RabbitMQ message structure
        base.WriteFromEnvelope(envelope, properties);
    }
}

To use your custom envelope mapping, attach it by Uri again like this:

public class CustomizedRabbitMQApp : JasperRegistry
{
    public CustomizedRabbitMQApp()
    {
        Settings.Alter<RabbitMQSettings>(settings =>
        {
            // Retrieve the Jasper "agent" by the full Uri:
            var agent = settings.For("rabbitmq://server1/queue1");

            // Customize or change how Jasper maps Envelopes to and from
            // the RabbitMQ properties
            agent.EnvelopeMapping = new CustomEnvelopeMapping();
        });
    }
}

You may not need to do this at all: at a minimum, the raw message data, the application id, the message type, and the content type (think “application/json”, etc.) will map correctly to Jasper’s internal structure. You’d just miss out on some correlation data and the ability to do workflows like saga id tracking and request/reply semantics.

I edited this section a couple times to try to filter out most of the snark and irritation on my part, but I’m sure some of that comes shining through.


Why not just use RabbitMQ?

In an internal presentation at work a couple weeks ago, I tried to use a Venn diagram slide to illustrate why we would still use Jasper with RabbitMQ instead of “just using RabbitMQ”:

[Venn diagram slide]
Feel free to argue with the details of my Venn diagram up there, but the point I was trying to get across was that there were a lot of application concerns around messaging that simply aren’t part of what RabbitMQ or Kafka or Azure Service Bus does for you. I spent about 20 minutes on this, then demonstrated a lot of Jasper’s features for how it takes raw byte[] data and marshals that to the right command executor in your application with all the error handling, instrumentation, outbox, and middleware fixings you’d expect in a tool like Jasper. And at the end of all that, I got a huge dose of “why aren’t we just using Kafka?” Grrrrrr.

I intend to write a Jasper-flavored version of this question someday soon, but for now, I’ll recommend the post “Sure, You Can Just Use RabbitMQ.”

Jasper’s “Outbox” Pattern Support

Jasper supports the “outbox pattern,” a way to achieve consistency between the outgoing messages that you send out as part of a logical unit of work and the database changes in that same unit of work, without having to resort to two-phase, distributed transactions between your application’s backing database and whatever queueing technology you might be using. Why do you care? Because consistency is good, and distributed transactions suck, that’s why.

Before you read this, and especially if you’re a coworker freaking out because you think I’m trying to force you to use Postgresql: Jasper is not directly coupled to Postgresql, and we will shortly add similar support to what’s shown here for Sql Server message persistence with Dapper and possibly Entity Framework.

Let’s say you have an ASP.Net Core MVC controller action like this in a system that is using Marten for document persistence:

public async Task<IActionResult> CreateItem(
    [FromBody] CreateItemCommand command,
    [FromServices] IDocumentStore martenStore,
    [FromServices] IMessageContext context)
{
    var item = createItem(command);

    using (var session = martenStore.LightweightSession())
    {
        session.Store(item);
        await session.SaveChangesAsync();
    }

    var outgoing = new ItemCreated{Id = item.Id};
    await context.Send(outgoing);

    return Ok();
}

It’s all contrived, but it’s a relatively common pattern. The HTTP action:

  1. Receives a CreateItemCommand message from the client
  2. Creates a new Item document and persists that with a Marten document session
  3. Broadcasts an ItemCreated event to any known subscribers through Jasper’s IMessageContext service. For the sake of the example, let’s say that under the covers Jasper is publishing the message through RabbitMQ (because I just happened to push Jasper’s RabbitMQ support today).

Let’s say that in this case we need both the document persistence and the message being sent out to either succeed together or both fail together to keep your system and any downstream subscribers consistent. Now, let’s think about all the ways things can go wrong:

  1. If we keep the code the way it is, the transaction could succeed while the call to context.Send() fails, leaving us inconsistent
  2. If we sent the message before we persisted the document and the call to session.SaveChangesAsync() failed, we’d be inconsistent
  3. The system magically fails and shuts down in between the document getting saved and the outgoing message being completely enqueued — and that’s not that crazy if the system handles a lot of messages

We’ve got a couple options. We can try to use a distributed transaction between the underlying RabbitMQ queue and the Postgresql database, but those can be problematic and are definitely not super performant. We could also use some kind of compensating transaction to reestablish consistency, but that’s just more code to write.

Instead, let’s use Jasper’s support for the “outbox” pattern with Marten:

public async Task<IActionResult> CreateItem(
    [FromBody] CreateItemCommand command,
    [FromServices] IDocumentStore martenStore,
    [FromServices] IMessageContext context)
{
    var item = createItem(command);

    using (var session = martenStore.LightweightSession())
    {
        // Directs the message context to hold onto
        // outgoing messages, and persist them
        // as part of the given Marten document
        // session when it is committed
        await context.EnlistInTransaction(session);

        var outgoing = new ItemCreated{Id = item.Id};
        await context.Send(outgoing);

        session.Store(item);
        await session.SaveChangesAsync();
    }

    return Ok();
}

The key things to know here are:

  • The outgoing messages are persisted in the same Postgresql database as the Item document with a native database transaction.
  • The outgoing messages are not sent to RabbitMQ until the underlying database transaction in the call to session.SaveChangesAsync() succeeds
  • For the sake of performance, the message persistence goes up to Postgresql with all the document operations in one network round trip to the database.

For more context, here’s a sequence diagram explaining how it works under the covers using Marten’s IDocumentSessionListener:

[Sequence diagram: Handling a Message with Unit of Work Middleware]

So now, let’s talk about all the things that can go wrong and how the outbox usage makes it better:

  • The transaction fails. No messages will be sent out, so there’s no inconsistency.
  • The transaction succeeds, but the RabbitMQ broker is unreachable. It’s still okay. Jasper has the outgoing messages persisted, and the durable messaging support will continue to retry the outgoing messages when the broker is available again.
  • The transaction succeeds, but the application process is killed before the outgoing message is completely sent to RabbitMQ. Same as the bullet point above.


Outbox Pattern inside of Message Handlers

The outbox usage within a message handler for the same CreateItemCommand in its most explicit form might look like this:

public static async Task Handle(
    CreateItemCommand command,
    IDocumentStore store,
    IMessageContext context)
{
    var item = createItem(command);

    using (var session = store.LightweightSession())
    {
        await context.EnlistInTransaction(session);

        var outgoing = new ItemCreated{Id = item.Id};
        await context.Send(outgoing);

        session.Store(item);
        await session.SaveChangesAsync();
    }
}

Hopefully, that’s not terrible, but we can drastically simplify this code if you don’t mind some degree of “magic” using Jasper’s cascading message support and Marten transaction middleware:

[MartenTransaction]
public static ItemCreated Handle(
    CreateItemCommand command,
    IDocumentSession session)
{
    var item = createItem(command);
    session.Store(item);

    return new ItemCreated{Id = item.Id};
}

The usage of the [MartenTransaction] attribute directs Jasper to apply a transaction against the IDocumentSession usage and automatically enlists the IMessageContext for the message in that session. The outgoing ItemCreated message returned from the action is sent out through the same IMessageContext object.


Jasper Command Line App Support You Wish Your Framework Already Had

Jasper is a new messaging and command runner framework targeting Netstandard2 that my shop has been building as a replacement for part of the old FubuMVC framework. I wrote about the general vision and rationale here.

Earlier today I made a v0.7.0 release of Jasper and its related extensions. The pace of development has kicked back up because we’re getting ready to start doing load and chaos testing with our QA folks later this week and we’re already transitioning some smaller, low volume systems to Jasper. The highlights this time are:

  • A lot of optimization for the “cold start” time, especially if you’re using Jasper in combination with ASP.Net Core. I collapsed the ASP.Net Core support back to the core library, so this post is already obsolete.
  • The integration with ASP.Net Core is a lot tighter. For example, Jasper is now using the ASP.Net Core logging under its covers, the ASP.Net Core IHostedService, and just generally plays nicer when used in combination with ASP.Net Core applications.
  • Jasper now has some support for stateful sagas, but only with Marten-backed persistence. I’ll blog about this one soon, and there will be other saga persistence options coming fairly soon. Sql Server backed persistence at a bare minimum.
  • Finer grained control over how certain message types are published
  • Mild improvements to the Marten integration. Again, Jasper isn’t hard coupled to Marten and Postgresql, but it’s just been easy to prove out concepts with Marten first.
  • More command line usages that I’m showing in the rest of this post;)

Command Line Integration

First off, let’s say that you have a simple Jasper application that listens for incoming messages at a designated port configured with this class:

public class SubscriberApp : JasperRegistry
{
    public SubscriberApp()
    {
        // Listen for incoming messages via the
        // built in, socket transport in a
        // fire and forget way at port 2222
        Transports.LightweightListenerAt(2222);
    }
}

To run your Jasper application as a console application, you can use the Jasper.CommandLine library as a quick helper that also adds some diagnostic commands you may find helpful during both development and deployment time. Using your SubscriberApp class above, you can bootstrap your application in a console application like this:

class Program
{
    static int Main(string[] args)
    {
        return JasperAgent.Run<SubscriberApp>(args);
    }
}

Once that’s done, you can immediately run your application from the command line with dotnet run, which would give you some output like this:

Running service 'SubscriberApp'
Application Assembly: Subscriber, Version=, Culture=neutral, PublicKeyToken=null
Hosting environment: Production
Content root path: [the IHostedEnvironment.ContentRootPath value]
Hosted Service: Jasper.Messaging.MessagingActivator
Hosted Service: Jasper.Messaging.NodeRegistration
Listening for loopback messages
Listening for messages at [url]/messages
Listening for messages at [url]/messages/durable

Active sending agent to loopback://replies/
Active sending agent to loopback://retries/
Handles messages:
            [Message Type]: [Handler Type and Handler Method Name]

Now listening on: [listener Uri]
Application started. Press Ctrl+C to shut down.

Other than a little bit of contextual information, it’s about what you would get with the ASP.Net Core command line support. If you’re not familiar with the dotnet cli, you can pass command line arguments to your Program.Main() method by using double dashes to separate arguments that apply to dotnet run from the arguments that get passed into your main method. Using the Oakton library for parsing Linux-style command arguments and flags, your Jasper application can also respond to other commands and optional flags.

Knowing all that, this:

dotnet run -- -v

or this:

dotnet run -- --verbose

will run your application with console and debug loggers, and set the minimum log level in the ASP.Net Core logging to “Debug.”

Alternatively, you can also override the log level by:

dotnet run -- --log-level Information

or this:

dotnet run -- -l Trace

where the value is one of the values in the LogLevel enumeration.

To override the environment your application is running under, you can use this flag:

dotnet run -- --environment Development

or use the “-e” short version of that.

So what, what else do you got?

You can run a Jasper application, but there’s actually quite a bit more. If you type dotnet run -- ?, you can see the other available commands:


[Screenshot: the list of available commands shown by dotnet run -- ?]

The “export-json-schema” and “generate-message-types” commands are from an extension library that allows you to export JSON schema documents for the known message types or generate C# classes with the necessary Jasper message type identity from JSON schema documents. The command line support is extensible, allowing you to add prepackaged commands from add-on Nugets or even expose commands from your own application. I’m going to leave that to a later post or *gasp*, updated documentation.
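To give a flavor of what an extensible command looks like, here’s a rough sketch of a plain Oakton command. This is generic Oakton usage, not Jasper’s actual extension API, and the GreetingCommand and GreetingInput types are made up for the example:

```csharp
using System;
using Oakton;

// Hypothetical input class: public properties become positional
// arguments, and properties suffixed with "Flag" become optional flags
public class GreetingInput
{
    public string Name { get; set; }

    [FlagAlias("loud", 'l')]
    public bool LoudFlag { get; set; }
}

[Description("Writes a greeting to the console")]
public class GreetingCommand : OaktonCommand<GreetingInput>
{
    public override bool Execute(GreetingInput input)
    {
        var greeting = $"Hello, {input.Name}!";
        Console.WriteLine(input.LoudFlag ? greeting.ToUpperInvariant() : greeting);

        // Returning true signals a successful exit code
        return true;
    }
}
```

With something like that registered, you’d invoke it as dotnet run -- greeting World --loud.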

Preview the Generated Code

If you read my earlier post on Jasper’s Roslyn-Powered “Special Sauce,” you know that Jasper internally generates and compiles glue code to handle messages or HTTP requests. To help troubleshoot applications or just to understand the interplay between message handlers and any configured middleware, you can use this command to either list out the generated code or export it to a file:

dotnet run -- code -f export.cs


Check out the IoC Container Configuration

As a long time IoC tool author and user, I’m painfully aware that people run into issues with service registrations being incorrect or using erroneous lifecycles. To help ease those issues, Jasper allows you to see the service registrations of your entire application with this command:

dotnet run -- services

This is just displaying the output of the Lamar WhatDoIHave() report, similar to StructureMap’s WhatDoIHave() functionality.

Validate the System

As part of deployment or maybe even local development, you can choose to just start up the application, run all the registered environment checks, and verify that all the known message handlers and HTTP routes can be successfully compiled — with oodles of ugly red textual output if any check along the way fails. That’s done with dotnet run -- validate.


Manage Subscriptions

It’s admittedly kind of boring and I’m running out of time before I need to head home, but there is a dotnet run -- subscriptions command that you can use to manage message subscriptions at deployment or build time that’s documented here.


Next up:

I’ll put together a decent, business-facing example of Jasper’s stateful saga support.


The Marten Webinar is Online, and Answering Some Questions

JetBrains was gracious enough to let me record an introductory webinar last week on the Marten project that lets .Net developers successfully treat Postgresql as a fully ACID-compliant document database and event store. The recording was posted this morning and there’s a link to it right below:

Questions from the Webinar

These are some of the leftover questions we weren’t able to get to during the webinar:

Is there an ability to do faceted searches in Marten?

I don’t know why, but I just couldn’t make out the word “faceted” during the talk and this one slipped through.

In Marten itself, no, but we’re sitting on top of Postgresql, which does support faceted searching. In the longer run, we’ve talked about having some easy to use facilities that allow you to either “project” a Marten-persisted document type to a flat database view or to possibly write a “read side” table for reporting during document writes. My attitude on these kinds of features is to lean on Postgresql wherever possible to keep Marten’s already very large feature set and codebase from getting (more) bloated.

Have you any experience running Marten against Citus?

No, but I’d be very curious to see how that goes. I’ve purposely put off any kind of software based sharding with Marten in hopes that Citus just works. Volunteering? 😉

How did you convince your company to build Marten from the ground up instead of using an existing docdb?

Marten was originally conceived of and executed as a near drop in replacement for our existing document database (RavenDb) that was causing us quite a few production issues. At the time, we theorized that it would be easier to build an in place replacement for RavenDb than to convert a massive, existing project to some completely different database and persistence framework. We were a very OSS-friendly shop at the time, and Marten was actually my then manager’s concept.

Can you extract values from the json in explicit table fields?

Marten can only work against tables that it controls with an expected structure, so no, sorry.

Can we use map-reduce queries like in RavenDB? And is there async index creation, with map-reduce?

Indexes are just Postgresql indexes, even when calculated against a JSON search. We don’t directly support map-reduce, and I don’t actually think we’ll need to in the long run. See the section on faceted search above too.

Will you be posting the code used in the webinar somewhere?

Yep, it’s in GitHub here.

Successfully Running an xUnit Suite in Parallel

TL;DR: Don’t call Task.Wait() in your xUnit tests if you want things running faster and in parallel. In other words, async turtles all the way down. This is a requested post from my buddy Jim Holmes.

In my recent OSS efforts like Marten, Jasper, and Lamar, I have tended to lean much more heavily on top down integration tests than on lots of intermediate and low level unit tests. Putting the wisdom of that approach aside for another time, depending so much on integration testing has made the main testing suite in Jasper run too slowly for my comfort as the project has grown.

Like many xUnit users, the second I hit issues with test suites locking up or failing unpredictably, I lazily slap on the directive to prevent parallel test execution like so:

using Xunit;

[assembly: CollectionBehavior(CollectionBehavior.CollectionPerAssembly)]

As I said, the Jasper suite got too slow for productive, quick-twitch development, so I finally broke down and committed enough time to eliminate all the issues in the Jasper code and test suite that were stopping us from running the tests in parallel. It’s a huge, squashed commit, but you can see what I did here.

In a few places, I was using static members to record actions during integration tests as just a mechanically cheap way of asserting correct behavior. I had to move all of that to object instances that were scoped to the test run. Not that big of a deal.
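The gist of that change looks something like this. It’s a sketch with a hypothetical MessageRecorder type rather than Jasper’s actual test code:

```csharp
using System.Collections.Generic;

// Before: static state like this bleeds across parallel test runs
// public static class MessageTracker
// {
//     public static readonly List<object> Received = new List<object>();
// }

// After: an instance-scoped recorder created per test, so parallel
// tests each observe only their own messages
public class MessageRecorder
{
    private readonly List<object> _received = new List<object>();
    private readonly object _lock = new object();

    public void Record(object message)
    {
        lock (_lock)
        {
            _received.Add(message);
        }
    }

    public IReadOnlyList<object> Received
    {
        get
        {
            lock (_lock)
            {
                // Snapshot so callers can't see concurrent mutations
                return _received.ToArray();
            }
        }
    }
}
```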

The bigger problem by far was deadlock issues in bootstrapping a Jasper application, which really hurts when there are ~200 tests that each try to bootstrap a Jasper application as part of the test. To optimize the “cold start” time of Jasper, I heavily parallelize startup activities through Task objects. The synchronous version of bootstrapping eventually has to make a couple Task.GetAwaiter().GetResult() calls (once in Jasper, once in Lamar, which uses StructureMap’s old trick of parallelizing the type scanning), and that was prone to deadlocks.
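That failure mode is the classic sync-over-async trap. A minimal sketch of the anti-pattern, not Jasper’s actual bootstrapping code:

```csharp
using System.Threading.Tasks;

public static class Bootstrapper
{
    public static async Task<string> StartAsync()
    {
        // Stand-in for the parallelized startup work
        await Task.Delay(100);
        return "running";
    }

    // Synchronous wrapper that blocks the calling thread until the
    // async work finishes. Under a captured synchronization context
    // or a starved thread pool, this is where deadlocks bite
    public static string Start()
    {
        return StartAsync().GetAwaiter().GetResult();
    }
}
```

The cure was to make bootstrapping and teardown async all the way down (the ForAsync() and Shutdown() calls shown below) so the tests never block a thread waiting on a Task.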

The original pattern of many integration tests looked like this:

[Fact]
public void some_name()
{
    using (var runtime = JasperRuntime.For(_ =>
    {
        // some configuration
    }))
    {
        // do stuff and run assertions
    }
}

After re-plumbing the bootstrapping and adding purely asynchronous bootstrapping and teardown methods to both Jasper and the underlying Lamar IoC container, I mostly moved to this pattern instead:

[Fact]
public async Task some_name()
{
    var runtime = await JasperRuntime.ForAsync(_ =>
    {
        // some configuration
    });

    try
    {
        // do stuff and run assertions
    }
    finally
    {
        // shutdown the running app w/ 100% async API calls
        await runtime.Shutdown();
    }
}
Long story short, it sucked, but now the tests happily run in parallel, the suite runtime dropped dramatically, and I’ve been able to be much more productive.