Developing Error Handling Strategies for Asynchronous Messaging

I’m furiously working on what I hope is the last sprint toward a big new Jasper 2.0 release. Part of that work has been a big overhaul of the error handling strategies with an eye toward solving the real world problems I’ve personally experienced over the years doing asynchronous messaging in enterprise applications.

Whether you’re purposely using micro-services, integrating with 3rd party systems, or just talking to the team down the hall’s services, it’s almost inevitable that an enterprise system will have to communicate with something else, or at the very least need to do some kind of background processing within the same logical system. For all those reasons, it’s not unlikely that you’ll have to pull some kind of asynchronous messaging tooling into your system.

It’s also an imperfect world, and despite your best efforts your software systems will occasionally encounter exceptions at runtime. What you really need to do is to plan around potential failures in your application, especially around integration points. Fortunately, your asynchronous messaging toolkit should have a robust set of error handling capabilities baked in — and this is maybe the single most important reason to use asynchronous messaging toolkits like MassTransit, NServiceBus, or the Jasper project I’m involved with rather than trying to roll your own one-off message handling code or depending strictly on synchronous communication via web services.

In no particular order, I think you need to have at least these goals in mind:

  • Craft your exception handling in such a way that it will very seldom require manual intervention to recover work in the system.
  • Build in resiliency to transient errors like networking hiccups or database timeouts that are common when systems get overtaxed.
  • Limit your temporal coupling to external systems, whether within your organization or from 3rd party systems. The point here is that you want your system to be at least somewhat functional even if external dependencies are unavailable. My off the cuff recommendation is to try to isolate calls to an external dependency within a small, atomic command handler so that you have a “retry loop” directly around the interaction with that dependency (see the sketch after this list).
  • Prevent inconsistent state within your greater enterprise systems. I think of this as the “no leaky pipe” rule where you try to avoid any messages getting lost along the way, but it sometimes applies to the ordering of operations as well. To illustrate this further, consider the canonical example of recording a matching withdrawal and deposit between two banking accounts. If you process one operation, you have to do the other as well to avoid system inconsistencies. Asynchronous messaging arguably makes that a teeny bit harder by introducing eventual consistency into the mix rather than depending on two phase commits between systems — but who’s kidding who here, we’re probably all trying to avoid 2PC transactions like the plague.
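To make that last “retry loop” point concrete, here’s a minimal sketch of isolating an external call within its own small handler. The PostPayment message, ICotsPaymentGateway interface, and handler are hypothetical names purely for illustration:

// Hypothetical command message and gateway abstraction
public record PostPayment(Guid PaymentId, decimal Amount);

public interface ICotsPaymentGateway
{
    Task PostAsync(Guid paymentId, decimal amount);
}

// This handler does nothing but the external call, so an error handling
// rule for the exceptions it throws acts as a retry loop directly
// around the integration point itself
public class PostPaymentHandler
{
    public Task Handle(PostPayment command, ICotsPaymentGateway gateway)
        => gateway.PostAsync(command.PaymentId, command.Amount);
}

With the handler this granular, the retry policies shown through the rest of this post can target just the interaction with the external dependency without re-executing any other work.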

Quick Introduction to Jasper

I’m using the Jasper framework for all the error handling samples here. Just to explain the syntax in the code samples, Jasper is configured at bootstrapping time with the JasperOptions type as shown in this sample below:

using var host = await Host.CreateDefaultBuilder()
    .UseJasper(opts =>
    {
        opts.Handlers.OnException<TimeoutException>()
            // Just retry the message again on the
            // first failure
            .RetryOnce()

            // On the 2nd failure, put the message back into the
            // incoming queue to be retried later
            .Then.Requeue()

            // On the 3rd failure, retry the message again after a configurable
            // cool-off period. This schedules the message for later execution
            .Then.ScheduleRetry(15.Seconds())

            // On the 4th failure, move the message to the dead letter queue
            .Then.MoveToErrorQueue();

        // Or instead of dead lettering, you could discard the message
        // and pause all processing for a while:
        // .Then.Discard().AndPauseProcessing(5.Minutes());
    }).StartAsync();

The exception handling policies are “fall through”, meaning that you probably want to put more specific rules before more generic rules. The rules can also be configured either globally for all message types, or for specific message types. In most of the code snippets the variable opts will refer to the JasperOptions for the application.
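As a quick sketch of that ordering, reusing only rule syntax that appears elsewhere in this post:

// More specific rules first: retry SQL timeouts with short cooldowns
opts.Handlers.OnException<SqlException>()
    .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds());

// The generic catch-all comes last: anything else goes straight
// to the dead letter queue
opts.Handlers.OnException<Exception>()
    .MoveToErrorQueue();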

More in the brand new docs on error handling in Jasper.

Transient or concurrency errors that hopefully go away?

Let’s assume that you’ve done enough testing to remove most of the purely functional errors in your system. Once you’ve reached that point, the most common kind of error in my experience with system development is transient errors like:

  • Network timeouts
  • Database connectivity errors, which could be related to network issues or connection exhaustion
  • Concurrent access errors
  • Resource locking issues

For these types of errors, I think I’d recommend some sort of exponential backoff strategy that retries the message inline, but with a progressively longer pause between attempts like so:

// Retry the message again, but wait for the specified time
// The message will be dead lettered if it exhausts the delay
// attempts
opts.Handlers
    .OnException<SqlException>()
    .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds());

What you’re doing here is retrying the message a certain number of times, but with a pause to slow down processing in the system and give a distressed resource more time to stabilize before trying again. I’d also recommend this approach for certain types of concurrency exceptions where only one process at a time is allowed to work with a resource (a database row? a file? an event store stream?). This is especially helpful with optimistic concurrency strategies where you might just need to start processing over against the newly changed system state.

I’m leaving it out for the sake of brevity, but Jasper will also let you put a message back into the end of the incoming queue or even schedule the next attempt out of process for a later time.
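For completeness, here’s a minimal sketch of what those two options could look like, using only the Requeue(), Then, and ScheduleRetry() syntax already shown above:

opts.Handlers.OnException<SqlException>()
    // Put the message back at the end of the incoming queue
    .Requeue()

    // And on the next failure, schedule the retry out of process
    // for one minute later
    .Then.ScheduleRetry(1.Minutes());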

You shall not pass! (because a subsystem is down)

A few years ago I helped design a series of connected applications in a large banking ecosystem that ultimately transferred money from incoming payments in a flat file to a commercial, off the shelf (COTS) system. The COTS system exposed a web service endpoint we could use for our necessary integrations. Fortunately, we designed the system so that inputs to this service happened in a message handler fed by a messaging queue, so we could retry just the final call to the COTS web service in case of its common transient failures.

Great! Except that what also happened was that this COTS system could very easily be put into an invalid state where it could not handle any incoming transactions. With our then-naive “retry all errors up to 3 times, then move into a dead letter queue” strategy, literally hundreds of transactions would get retried those three times, spam the hell out of the error logs and production monitoring systems, and all end up in the dead letter queue, where a support person would have to manually move them back to the real queue after the COTS system was fixed.

This is obviously not a good situation. For future projects, Jasper will let you pause all incoming messages from a receiving endpoint (like a message queue) if a particular type of error is encountered like this:

using var host = await Host.CreateDefaultBuilder()
    .UseJasper(opts =>
    {
        // The failing message is requeued for later processing, then
        // the specific listener is paused for 10 minutes
        opts.Handlers.OnException<SystemIsCompletelyUnusableException>()
            .Requeue().AndPauseProcessing(10.Minutes());

    }).StartAsync();

Using the capability above, if all incoming requests that use an external web service come through a single queue and receiving endpoint, you can pause all processing of that queue when you detect an error implying that the external system is completely unusable, while still trying to restart listening later. All without user intervention.

Jasper would also enable you to chain additional actions after encountering that exception, like sending other messages or raising some kind of alert through email or text that the listening has been paused. At the very worst, you could use some kind of log monitoring tool to raise alerts when it sees the log message from Jasper about a listening endpoint being paused.

Dealing with a distressed resource

All of the other error handling strategies I’ve discussed so far have revolved around a single message. But what if you’re seeing a high percentage of exceptions across all messages for a single endpoint, which may imply that some kind of resource like a database is overloaded?

To that end, we could use a circuit breaker approach to temporarily pause message handling when a high number of exceptions are happening across incoming messages. This might help alleviate the load on the distressed subsystem and allow it to catch up before processing additional messages. That usage in Jasper is shown below:

using var host = await Host.CreateDefaultBuilder()
    .UseJasper(opts =>
    {
        opts.Handlers.OnException<InvalidOperationException>()
            .Discard();

        opts.ListenToRabbitQueue("incoming")
            .CircuitBreaker(cb =>
            {
                // Minimum number of messages encountered within the tracking period
                // before the circuit breaker will be evaluated
                cb.MinimumThreshold = 10;

                // The time to pause the message processing before trying to restart
                cb.PauseTime = 1.Minutes();

                // The tracking period for the statistics evaluation
                cb.TrackingPeriod = 5.Minutes();

                // If the failure percentage is higher than this number, trip
                // the circuit and stop processing
                cb.FailurePercentageThreshold = 10;

                // Optional allow list
                cb.Include<SqlException>(e => e.Message.Contains("Failure"));
                cb.Include<SocketException>();

                // Optional ignore list
                cb.Exclude<InvalidOperationException>();
            });
    }).StartAsync();

Nope, that message is bad, no soup for you!

Hey, sometimes you’re going to get an exception that implies that the incoming message is invalid and can never be processed. Maybe it applies to a domain object that no longer exists, maybe it’s a security violation. The point being the message can never be processed, so there’s no use in clogging up your system with useless retry attempts. Instead, you want that message shoved out of the way immediately. Jasper gives you two options:

using var host = await Host.CreateDefaultBuilder()
    .UseJasper(opts =>
    {
        // Bad message, get this thing out of here!
        opts.Handlers.OnException<InvalidMessageYouWillNeverBeAbleToProcessException>()
            .Discard();
        
        // Or keep it around in case someone cares about the details later
        opts.Handlers.OnException<InvalidMessageYouWillNeverBeAbleToProcessException>()
            .MoveToErrorQueue();

    }).StartAsync();

Related Topics

Now we come to the point of the post where I’m getting tired and want to get this finished, so it’s time to just mention some related concepts for later research.

For the sake of consistency within your distributed system, I think you almost have to be aware of the outbox pattern — and conveniently enough, Jasper has a robust implementation of that pattern. MassTransit also recently added a “real outbox.” I know that NServiceBus has an improved outbox planned, but I don’t have a link handy for that.

Again for consistency within your distributed system, I’d recommend you familiarize yourself with the concept of compensating actions, especially if you’re trying to use eventual consistency in your system and the secondary actions fail.

And lastly, I’m not the world’s foremost expert here, but you really want some kind of system monitoring that detects and alerts folks to distressed subsystems, circuit breakers tripping off, or dead letter queues growing quickly.

Command Line Support for Marten Projections

Marten 5.7 was published earlier this week with mostly bug fixes. The one big new piece of functionality was an improved version of the command line support for event store projections. Specifically, Marten added support for multi-tenancy through multiple databases and the ability to use separate document stores in one application as part of our V5 release earlier this year, but the projections command didn’t catch up and support all of that until now with Marten v5.7.0.

From a sample project in Marten we use to test this functionality, here’s part of the Marten setup that has a mix of asynchronous and inline projections, as well as uses the database per tenant strategy:

services.AddMarten(opts =>
{
    opts.AutoCreateSchemaObjects = AutoCreate.All;
    opts.DatabaseSchemaName = "cli";

    // Note this app uses multiple databases for multi-tenancy
    opts.MultiTenantedWithSingleServer(ConnectionSource.ConnectionString)
        .WithTenants("tenant1", "tenant2", "tenant3");

    // Register all event store projections ahead of time
    opts.Projections
        .Add(new TripAggregationWithCustomName(), ProjectionLifecycle.Async);
    
    opts.Projections
        .Add(new DayProjection(), ProjectionLifecycle.Async);
    
    opts.Projections
        .Add(new DistanceProjection(), ProjectionLifecycle.Async);

    opts.Projections
        .Add(new SimpleAggregate(), ProjectionLifecycle.Inline);

    // This is actually important to register "live" aggregations too for the code generation
    opts.Projections.SelfAggregate<SelfAggregatingTrip>(ProjectionLifecycle.Live);
}).AddAsyncDaemon(DaemonMode.Solo);

At this point, let’s introduce the Marten.CommandLine Nuget dependency to the system just to add Marten-related command line options directly to our application for typical database management utilities. Marten.CommandLine brings with it a dependency on Oakton, which we’ll use as the command line parser for our built in tooling. Using the now “old-fashioned” pre-.NET 6 manner of running a console application, I add Oakton to the system like this:

public static Task<int> Main(string[] args)
{
    // Use Oakton for running the command line
    return CreateHostBuilder(args).RunOaktonCommands(args);
}
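For comparison, the .NET 6 WebApplication style of bootstrapping accomplishes the same thing. This sketch just mirrors the Oakton calls used elsewhere in these posts:

var builder = WebApplication.CreateBuilder(args);

// Add Oakton command line handling to the application
builder.Host.ApplyOaktonExtensions();

// Marten and other service registrations would go here...

var app = builder.Build();

// Actually important to return the exit code from Oakton here!
return await app.RunOaktonCommands(args);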

When you use the dotnet command line options, just keep in mind that the “--” separator you see me using here separates options to the dotnet executable itself on the left from arguments being passed to the application itself on the right of the “--” separator.

Now, turning to the command line at the root of our project, I’m going to type out this command to see the Oakton options for our application:

dotnet run -- help

Which gives us this output:

If you’re wondering, the commands db-apply and marten-apply are synonyms; the marten-prefixed versions are there so as not to break older usages from before we introduced the newer, more generic “db” commands.

And next I’m going to see the usage for the projections command with dotnet run -- help projections, which gives me this output:

For the simplest usage, I’m just going to list off the known projections for the entire system with dotnet run -- projections --list:

Which will show us the four registered projections in the main IDocumentStore, and tells us that there are no registered projections in the separate IOtherStore.

Now, I’m just going to continuously run the asynchronous projections for the entire application — while another process is constantly pumping random events into the system so there’s always new work to do — with dotnet run -- projections, which will spit out this continuously updating table (with an assist from Spectre.Console):

What I hope you can tell here is that every asynchronous projection is actively running for each separate tenant database. The blue “High Water Mark” is telling us where the event store for each database currently is.

And finally, for the main reason why I tackled the projections command line overhaul last week, folks needed a way to rebuild projections for every database when using a database per tenant strategy.

While the new projections command will happily let you rebuild any combination of database, store, and projection name by flags or even an interactive mode, we can quickly trigger a full rebuild of all the asynchronous projections with dotnet run -- projections --rebuild, which is going to loop through every store and database like so:

For the moment, the rebuild works on all the projections for a single database at a time. I’m sure we’ll attempt some optimizations of the rebuilding process and try to understand how much more we can really parallelize, but for right now, our users have an out of the box way to rebuild projections across separate databases or separate stores.

This *might* be a YouTube video soon just to kick off my new channel for Marten/Jasper/Oakton/Alba/Lamar content.

A Vision for Stateful Resources at Development or Deployment Time

As is not atypical, I found a couple little issues with both Oakton and Jasper in the course of writing this post. To that end, if you want to use the functionality shown here yourself, just make sure you’re on at least Oakton 4.6.1 and Jasper 2.0-alpha-3.

I’ve spit out quite a bit of blogging content the past several weeks on both Marten and Jasper.

I’ve been showing some new integration between Jasper, Marten, and Rabbit MQ. This time out, I want to show the new “stateful resource” model in a third tool named Oakton to remove development and deployment time friction when using these tools on a software project. Oakton itself is a command line processing tool that, more importantly, can be used to quickly add command line utilities directly to your .Net executable.
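As a tiny illustration of that last point, adding a custom command to the application is just a matter of declaring a class. This is a minimal sketch with a hypothetical “hello” command:

using System;
using Oakton;

// Exposed as "dotnet run -- hello" from the command line
[Description("Say hello from within the application")]
public class HelloCommand : OaktonCommand<NetCoreInput>
{
    public override bool Execute(NetCoreInput input)
    {
        // NetCoreInput gives the command access to the application's
        // host builder if it needs services from the application itself
        Console.WriteLine("Hello from our issue tracking application!");

        // Returning true means the command succeeded
        return true;
    }
}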

Drawing from a sample project in the Jasper codebase, here’s the configuration for an issue tracking application that uses Jasper, Marten, and RabbitMQ with Jasper’s inbox/outbox integration using Postgresql:

using IntegrationTests;
using Jasper;
using Jasper.Persistence.Marten;
using Jasper.RabbitMQ;
using Marten;
using MartenAndRabbitIssueService;
using MartenAndRabbitMessages;
using Oakton;
using Oakton.Resources;

var builder = WebApplication.CreateBuilder(args);

builder.Host.ApplyOaktonExtensions();

builder.Host.UseJasper(opts =>
{
    // I'm setting this up to publish to the same process
    // just to see things work
    opts.PublishAllMessages()
        .ToRabbitExchange("issue_events", exchange => exchange.BindQueue("issue_events"))
        .UseDurableOutbox();

    opts.ListenToRabbitQueue("issue_events").UseInbox();

    opts.UseRabbitMq(factory =>
    {
        // Just connecting with defaults, but showing
        // how you *could* customize the connection to Rabbit MQ
        factory.HostName = "localhost";
        factory.Port = 5672;
    });
});

// This is actually important, this directs
// the app to build out all declared Postgresql and
// Rabbit MQ objects on start up if they do not already
// exist
builder.Services.AddResourceSetupOnStartup();

// Just pumping out a bunch of messages so we can see
// statistics
builder.Services.AddHostedService<Worker>();

builder.Services.AddMarten(opts =>
{
    // I think you would most likely pull the connection string from
    // configuration like this:
    // var martenConnectionString = builder.Configuration.GetConnectionString("marten");
    // opts.Connection(martenConnectionString);

    opts.Connection(Servers.PostgresConnectionString);
    opts.DatabaseSchemaName = "issues";

    // Just letting Marten know there's a document type
    // so we can see the tables and functions created on startup
    opts.RegisterDocumentType<Issue>();

    // I'm putting the inbox/outbox tables into a separate "issue_service" schema
}).IntegrateWithJasper("issue_service");

var app = builder.Build();

app.MapGet("/", () => "Hello World!");

// Actually important to return the exit code here!
return await app.RunOaktonCommands(args);

Just to describe what’s going on up above, the .NET code above is going to depend on:

  1. A Postgresql database with the necessary tables and functions that Marten needs to be able to persist issue data
  2. Additional tables in the Postgresql database for persisting the outgoing and incoming messages in the inbox/outbox usage
  3. A Rabbit MQ broker with the necessary exchanges, queues, and bindings for the issue application as it’s configured

In a perfect world, scratch that, in an acceptable world, a developer should be able to start from a fresh clone of this issue tracking codebase and be able to run the system and/or any integration tests locally almost immediately with very minimal friction along the way.

At this point, I’m a big fan of trying to run development infrastructure in Docker where it’s easy to spin things up on demand, and just as quickly shut it all down when you no longer need it. To that end, let’s just say we’ve got a docker-compose.yml file for both Postgresql and Rabbit MQ. Having that, I’ll type docker compose up -d from the command line to spin up both infrastructure elements.

Cool, but now I need to have the database schemas built out with Marten tables and the Jasper inbox/outbox tables, plus the Rabbit MQ queues for the application. This is where Oakton and its new “stateful resource” model come into play. Jasper’s Rabbit MQ plugin and the inbox/outbox storage both expose Oakton’s IStatefulResource interface for easy setup. Likewise, Marten has support for this model as well (in this case it’s just a very slim wrapper around Marten’s longstanding database schema management functionality).

If you’re not familiar with this, the double dash (“--”) argument in dotnet run tells .NET which arguments apply to the dotnet executable itself (“run”), while everything to the right of the “--” is passed along to the application itself.

Opening up the command line terminal of your preference to the root of the project, I type dotnet run -- help to see what options are available in our Jasper application through the usage of Oakton:

There’s a couple commands up there that will help us out with the database management, but I want to focus on the resources command. To that end, I’m going to type dotnet run -- resources list just to see what resources our issue tracker application has:

Just through the configuration up above, the various Jasper elements have registered “stateful resource” adapters with Oakton for the underlying Marten database, the inbox/outbox data (Envelope Storage above), and Rabbit MQ.

In the next case, I’m going to use dotnet run -- resources check to see if all our infrastructure is configured the way our application needs — and I’m going to do this without first starting the database or the message broker, so this should fail spectacularly!

Here’s the summary output:

If you were to scroll up a bit, you’d see a lot of exceptions thrown describing what’s wrong (helpfully color coded by Spectre.Console) including this one explaining that an expected Rabbit MQ queue is missing:

So that’s not good. No worries though, I’ll start up the docker containers, then go back to the command line and type:

dotnet run -- resources setup

And here’s some of the output:

Forget the command line…

If you’ll notice the single line of code `builder.Services.AddResourceSetupOnStartup();` in the bootstrapping code above, that’s adding a hosted service to our application from Oakton that will verify and apply all configured set up to the known Marten database, the inbox/outbox storage, and the required Rabbit MQ objects. No command line chicanery necessary. I’m hopeful that this will enable developers to be more productive by dealing with this kind of environmental setup directly inside the application itself rather than recreating the definition of what’s necessary in external scripts.

This was a fair amount of work, so I’d very much welcome any kind of feedback here.

Using Rabbit MQ with Jasper

I’ve spit out quite a bit of blogging content the past several weeks on both Marten and Jasper.

As a follow up today, I’d like to introduce Rabbit MQ integration with Jasper as the actual transport between processes. I’m again going to use a “Ping/Pong” sample of sending messages between two processes (you may want to refer to my previous ping/pong post). You can find the sample code for this post on GitHub.

Sending messages betwixt processes

The message types and Jasper handlers are basically identical to those in my last post if you want a reference. In the Pinger application, I’ve added a reference to the Jasper.RabbitMQ Nuget library, which adds transitive references to Jasper itself and the Rabbit MQ client library. In the application bootstrapping, I’ve got this to connect to Rabbit MQ, send Ping messages to a Rabbit MQ exchange named *pings*, and add a hosted service just to send a new Ping message once a second:

using System.Net.NetworkInformation;
using Jasper;
using Jasper.RabbitMQ;
using Oakton;
using Pinger;

return await Host.CreateDefaultBuilder(args)
    .UseJasper(opts =>
    {
        // Listen for messages coming into the pongs queue
        opts
            .ListenToRabbitQueue("pongs")

            // This won't be necessary by the time Jasper goes 2.0
            // but for now, I've got to help Jasper out a little bit
            .UseForReplies();

        // Publish messages to the pings queue
        opts.PublishMessage<Ping>().ToRabbitExchange("pings");

        // Configure Rabbit MQ connection properties programmatically
        // against a ConnectionFactory
        opts.UseRabbitMq(rabbit =>
        {
            // Using a local installation of Rabbit MQ
            // via a running Docker image
            rabbit.HostName = "localhost";
        })
            // Directs Jasper to build any declared queues, exchanges, or
            // bindings with the Rabbit MQ broker as part of bootstrapping time
            .AutoProvision();

        // This will send ping messages on a continuous
        // loop
        opts.Services.AddHostedService<PingerService>();
    }).RunOaktonCommands(args);
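The PingerService registered above isn’t shown in the snippet. Here’s a minimal sketch of what such a hosted service could look like; the Ping message shape and the IMessagePublisher usage are my assumptions for illustration rather than the exact code from the sample repository:

using Baseline.Dates;
using Jasper;
using Microsoft.Extensions.Hosting;

public class Ping
{
    public int Number { get; set; }
}

// Continuously sends a new Ping message once a second
public class PingerService : BackgroundService
{
    private readonly IMessagePublisher _publisher;

    public PingerService(IMessagePublisher publisher)
    {
        _publisher = publisher;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        var number = 0;
        while (!stoppingToken.IsCancellationRequested)
        {
            number++;

            // Routed to the "pings" exchange by the PublishMessage<Ping>()
            // rule in the bootstrapping above
            await _publisher.SendAsync(new Ping { Number = number });

            await Task.Delay(1.Seconds(), stoppingToken);
        }
    }
}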

On the Ponger side, I’ve got this setup:

using Baseline.Dates;
using Jasper;
using Jasper.RabbitMQ;
using Oakton;

return await Host.CreateDefaultBuilder(args)
    .UseJasper(opts =>
    {
        // Going to listen to a queue named "pings", but disregard any messages older than
        // 15 seconds
        opts.ListenToRabbitQueue("pings", queue => queue.TimeToLive(15.Seconds()));

        // Configure Rabbit MQ connections and optionally declare Rabbit MQ
        // objects through an extension method on JasperOptions.Endpoints
        opts.UseRabbitMq() // This is short hand to connect locally
            .DeclareExchange("pings", exchange =>
            {
                // Also declares the queue too
                exchange.BindQueue("pings");
            })
            .AutoProvision()

            // Option to blow away existing messages in
            // all queues on application startup
            .AutoPurgeOnStartup();
    })
    .RunOaktonCommands(args);
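Ponger’s message handler isn’t shown above either. Here’s a minimal sketch, assuming the same Ping/Pong message shapes and using Jasper’s convention of treating a returned message as a “cascading message” that gets sent back to the original sender:

public class PingHandler
{
    // Jasper discovers this handler by naming convention.
    // Returning the Pong makes it a cascading message that Jasper
    // routes back through the reply queue
    public Pong Handle(Ping ping)
    {
        Console.WriteLine($"Got ping #{ping.Number}");
        return new Pong { Number = ping.Number };
    }
}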

When running the applications side by side, I’ll get output like this from Pinger:

Got pong #55
info: Jasper.Runtime.JasperRuntime[104]
      Successfully processed message PongMessage#01818741-30c2-4ab7-9fa8-d7870d194754 from rabbitmq://queue/pings
info: Jasper.Runtime.JasperRuntime[204]
      Sending agent for rabbitmq://exchange/pings has resumed
Got pong #56
info: Jasper.Runtime.JasperRuntime[104]
      Successfully processed message PongMessage#01818741-34b3-4d34-910a-c9fc4c191bfc from rabbitmq://queue/pings
info: Jasper.Runtime.JasperRuntime[204]
      Sending agent for rabbitmq://exchange/pings has resumed

and this from Ponger:

Got ping #57
info: Jasper.Runtime.JasperRuntime[104]
      Successfully processed message PingMessage#01818741-389e-4ddd-96cf-fb8a76e4f5f1 from rabbitmq://queue/pongs
info: Jasper.Runtime.JasperRuntime[204]
      Sending agent for rabbitmq://queue/pongs has resumed
Got ping #58
info: Jasper.Runtime.JasperRuntime[104]
      Successfully processed message PingMessage#01818741-3c8e-469f-a330-e8c9e173dc40 from rabbitmq://queue/pongs
info: Jasper.Runtime.JasperRuntime[204]
      Sending agent for rabbitmq://queue/pongs has resumed

and of course, since Jasper is a work in progress, there’s a bunch of erroneous or at least misleading messages about a sending agent getting resumed that I need to take care of…

So what’s in place right now that can be inferred from the code samples above? The Jasper + Rabbit MQ integration today provides:

  • A way to subscribe to incoming messages from a Rabbit MQ queue. It’s not shown here, but you have all the options you’d expect to configure Rabbit MQ prefetch counts, message handling parallelism, and opt into persistent, inbox messaging
  • Mechanisms to declare Rabbit MQ queues, exchanges, and bindings as well as the ability to fine tune Rabbit MQ options for these objects
  • The ability to create declared Rabbit MQ objects at system startup time
  • The ability to automatically purge old messages out of Rabbit MQ queues on start up. It’s not shown here, but you can do this on a queue by queue basis as well
  • Publishing rules to direct outgoing messages from a Jasper process to Rabbit MQ exchanges or directly to queues

What’s also available but not shown here is opt in, conventional routing to route outgoing messages to Rabbit MQ exchanges based on the message type names, or to automatically declare and configure subscriptions to Rabbit MQ queues based on the message types that this process handles. This conventional routing is suspiciously similar to (read: copied from) MassTransit’s because:

  1. I like MassTransit’s conventional routing functionality
  2. Bi-directional wire compatibility mostly out of the box with MassTransit is a first class goal for Jasper 2.0

Why Rabbit MQ? What about other things?

Jasper has focused on Rabbit MQ as the main transport option out of the box because that’s what my shop already uses (and I’m most definitely trying to get us to use Jasper at work), it has a great local development option through Docker hosting, and frankly because it’s easy to use. Jasper also supports Pulsar of all things, and will definitely have a Kafka integration before 2.0. Other transports will be added based on user requests, and that’s also a good place for other folks to get involved.

I had someone asking about using Jasper with Amazon SQS, and that’s such a simple model that I might build it out as a reference for other folks.

What’s up next?

Now that I’ve introduced Jasper, its outbox functionality, and its integration with Rabbit MQ, my next post is visiting the utilities baked into Jasper itself for dealing with stateful resources like databases or broker configuration at development and production time. This is going to include a lot of command line functionality baked into your application.

A Vision for Low Ceremony CQRS with Event Sourcing

Let me just put this stake into the ground. The combination of Marten and Jasper will quickly become the most productive and lowest ceremony tooling for event sourcing and CQRS architectures on the .NET stack.

And for any George Strait fans out there, this song may be relevant to the previous paragraph:)

CQRS and Event Sourcing

Just to define some terminology, the Command Query Responsibility Segregation (CQRS) pattern was first described by Greg Young as an approach to software architecture where you very consciously separate “writes” that change the state of the system from “reads” against the system state. The key to CQRS is to use separate models of the system state for writing versus querying. That leads to something I think of as the “scary view of CQRS”:

The “scary” view of CQRS

The “query database” in my diagram is meant to be an optimized storage for the queries needed by the system’s clients. The “domain model storage” is some kind of persisted storage that’s optimized for writes — both capturing new data and presenting exactly the current system state that the command handlers would need in order to process or validate incoming commands.

Many folks have a visceral, negative reaction to this kind of CQRS diagram, as it does appear to be more complexity than the more traditional approach where you have a single database model — almost inevitably in a relational database of some kind — and both reads and writes work off the same set of database tables. Hell, I walked out of an early presentation by Greg Young about what later came to be known as CQRS at QCon 2008 shaking my head, thinking this was all nuts.

Here’s the deal though: when we’re writing systems against the one, true database model, we’re potentially spending a lot of time mapping incoming messages to the stored model, and probably even more time transforming the raw database model into a shape that’s appropriate for our system’s clients in a way that also creates some decoupling between clients and the ugly, raw details of our underlying database. The “scary” CQRS architecture arguably just brings that sometimes hidden work and complexity into the light of day.

Now let’s move on to Event Sourcing. Event sourcing is a style of system persistence where each state change is captured and stored as an explicit event object. There’s all sorts of value in keeping the raw change events around for later, but you do of course need to compile the events into derived, or “projected,” models that represent the current system state. When combining event sourcing with a CQRS architecture, some projected models built from the raw events can serve as the “write model” used while handling incoming commands, while others can be built as the “query model” that clients will use to query our system.
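To make “events plus projected models” a little more concrete, here’s a tiny, hypothetical illustration using the same Apply() folding convention that Marten’s self-aggregates use (you’ll see the real thing on a ProviderShift aggregate further down in these posts):

// Hypothetical events capturing each state change
public record AccountOpened(Guid AccountId, decimal InitialBalance);
public record FundsDeposited(Guid AccountId, decimal Amount);
public record FundsWithdrawn(Guid AccountId, decimal Amount);

// A projected model compiled from the raw events. Depending on how
// it's registered, this could serve as the "write model" for command
// handlers or as a "query model" for clients
public class Account
{
    public Guid Id { get; set; }
    public decimal Balance { get; private set; }

    public void Apply(AccountOpened opened) => Balance = opened.InitialBalance;
    public void Apply(FundsDeposited deposited) => Balance += deposited.Amount;
    public void Apply(FundsWithdrawn withdrawn) => Balance -= withdrawn.Amount;
}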

At the end of this section, I want to be clear that event sourcing and CQRS can be used independently of each other, but to paraphrase Forrest Gump, event sourcing and CQRS go together like peas and carrots.

Now, the event sourcing side of this especially might sound a little scary if you’re not familiar with some of the existing tooling, so let’s move on first to Marten…

Marten has a mature feature set for event sourcing already, and we’ve grown quite a bit in capability with our support for projections (read-only views of the raw events). If you’ll peek back at the “scary” view of CQRS above, Marten’s projection support solves the problem of keeping the raw events synchronized to both any kind of “write model” for incoming commands and the richer “query model” views for system clients within a single database. In my opinion, Marten takes the “scary” out of event sourcing, and we’re on our way to being the best possible “event sourcing in a box” solution for .NET.

As that support has gotten more capable and robust though, our users have frequently been asking about how to subscribe to events being captured in Marten and how to relay those events reliably to some other kind of infrastructure — which could be a completely separate database, outgoing message queues, or some other kind of event streaming scenario.

Oskar Dudycz and I have been discussing how to solve this in Marten, and we’ve come up with these three main mechanisms for supporting subscriptions to event data:

  1. Some or all of the events being captured by Marten should be forwarded automatically at the time of event capture (IDocumentSession.SaveChangesAsync())
  2. When strict ordering is required, we’ll need some kind of integration into Marten’s async daemon to relay events to external infrastructure
  3. For massive amounts of data with ambitious performance targets, use something like Debezium to directly stream Marten events from Postgresql to Kafka/Pulsar/etc.

Especially for the first item in that list above, we need some kind of outbox integration with Marten sessions to reliably relay events from Marten to outgoing transports while keeping the system in a consistent state (i.e., don’t publish the events if the transaction fails).

Fortunately, there’s a functional outbox implementation for Marten in…

To be clear, Jasper as a project was pretty well mothballed by the combination of COVID and the massive slog toward Marten 4.0 (and then a smaller slog to 5.0). However, I’ve been able to get back to Jasper, and yesterday kicked out a new Jasper 2.0.0-alpha-2 release along with the post (Re) Introducing Jasper as a Command Bus.

For this post though, I want to explore the potential of the Jasper + Marten combination for CQRS architectures. To that end, a couple weeks ago I published Marten just got better for CQRS architectures, which showed some new APIs in Marten to simplify repetitive code around using Marten event sourcing within CQRS architectures. Part of the sample code in that post was this MVC Core controller that used some newer Marten functionality to handle an incoming command:

public async Task CompleteCharting(
    [FromBody] CompleteCharting charting, 
    [FromServices] IDocumentSession session)
{
    var stream = await session
        .Events.FetchForExclusiveWriting<ProviderShift>(charting.ShiftId);
 
    // Validation on the ProviderShift aggregate
    if (stream.Aggregate.Status != ProviderStatus.Charting)
    {
        throw new Exception("The shift is not currently charting");
    }
     
    // We "decided" to emit one new event
    stream.AppendOne(new ChartingFinished(stream.Aggregate.AppointmentId.Value, stream.Aggregate.BoardId));
 
    await session.SaveChangesAsync();
}

To review, that controller method:

  1. Takes in a command message of type CompleteCharting
  2. Loads the current state of the aggregate ProviderShift model referred to by the incoming command, and does so in a way that takes care of concurrency for us by waiting to get an exclusive lock on the particular ProviderShift
  3. Assuming that the validation against the ProviderShift succeeds, emits a new ChartingFinished event
  4. Saves the pending work with a database transaction

In that post, I pointed out that there were some potential flaws or missing functionality with this approach:

  1. We probably want some error handling to retry the operation if we hit concurrency exceptions or time out trying to get the exclusive lock. In other words, we have to plan for concurrency exceptions
  2. It’d be good to be able to automatically publish the new ChartingFinished event to a queue to take further action within our system (or an external service if we were using messaging here)
  3. Lastly, I’d argue there’s some repetitive code up there that could be simplified

To address these points, I’m going to introduce Jasper and its integration with Marten (Jasper.Persistence.Marten) to the telehealth portal sample from my previous blog post.

I’m going to move the actual handling of the CompleteCharting to a Jasper handler shown below that is functionally equivalent to the controller method shown earlier (except I switched the concurrency protection to being optimistic):

// This is auto-discovered by Jasper
public class CompleteChartingHandler
{
    [MartenCommandWorkflow] // this opts into some Jasper middleware
    public ChartingFinished Handle(CompleteCharting charting, ProviderShift shift)
    {
        if (shift.Status != ProviderStatus.Charting)
        {
            throw new Exception("The shift is not currently charting");
        }

        return new ChartingFinished(charting.AppointmentId, shift.BoardId);
    }
}

And the controller method gets simplified down to just relaying the command to Jasper:

    public Task CompleteCharting(
        [FromBody] CompleteCharting charting, 
        [FromServices] ICommandBus bus)
    {
        // Just delegating to Jasper here
        return bus.InvokeAsync(charting, HttpContext.RequestAborted);
    }

There’s some opportunity to make the code above a little less repetitive and more efficient, maybe by riding on Minimal APIs. That’s for a later date though:)

By using the new [MartenCommandWorkflow] attribute, we’re directing Jasper to surround the command handler with middleware that handles much of the Marten mechanics by:

  1. Loading the aggregate ProviderShift for the incoming CompleteCharting command (I’m omitting some details here for brevity, but there’s a naming convention that can be explicitly overridden to pluck the aggregate identity off the incoming command)
  2. Passing that ProviderShift aggregate into the Handle() method above
  3. Applying the returned event to the event stream for the ProviderShift
  4. Committing the outstanding changes in the active Marten session

The Handle() code above becomes an example of a Decider function. Better yet, it’s completely decoupled from any kind of infrastructure and fully synchronous. I’m going to argue that this approach will make command handlers much easier to unit test, definitely easier to write, and easier to read later just because you’re only focused on the business logic.
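As a sketch of that testability claim, a unit test for this handler needs nothing but the command and an aggregate built up in memory. I’m assuming here that ChartingStarted is a simple, parameterless event record, that CompleteCharting carries the shift id, appointment id, and expected version (as shown in the related post further down), and I’m using xUnit:

using System;
using Xunit;

public class CompleteChartingHandlerTests
{
    [Fact]
    public void emits_charting_finished_when_the_shift_is_charting()
    {
        // Build the aggregate state purely in memory by applying events
        var shift = new ProviderShift { AppointmentId = Guid.NewGuid() };
        shift.Apply(new ChartingStarted());

        var command = new CompleteCharting(shift.Id, shift.AppointmentId.Value, shift.Version);

        var @event = new CompleteChartingHandler().Handle(command, shift);

        Assert.Equal(command.AppointmentId, @event.AppointmentId);
    }
}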

So that covers the repetitive code problem, but let’s move on to automatically publishing the ChartingFinished event and some error handling. I’m going to add Jasper through the application’s bootstrapping code as shown below:

builder.Host.UseJasper(opts =>
{
    // I'm choosing to process any ChartingFinished event messages
    // in a separate, local queue with persistent messages for the inbox/outbox
    opts.PublishMessage<ChartingFinished>()
        .ToLocalQueue("charting")
        .DurablyPersistedLocally();
    
    // If we encounter a concurrency exception, just try it immediately 
    // up to 3 times total
    opts.Handlers.OnException<ConcurrencyException>().RetryNow(3); 
    
    // It's an imperfect world, and sometimes transient connectivity errors
    // to the database happen
    opts.Handlers.OnException<NpgsqlException>()
        .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds());
});

Jasper comes with some pretty strong exception handling policy capabilities you’ll need to do grown up development. In this case, I’m just setting up some global policies in the application to retry message failures on either Marten concurrency exceptions or the inevitable, transient Postgresql connectivity hiccups. In the case of the concurrency exception, you may just need to start the work over to ensure you’re starting from the most recent aggregate changes. I used globally applied policies here, but Jasper will also allow you to override that on a message type by message type basis.

Lastly, let’s add the Jasper outbox integration for Marten and opt into automatic event publishing with this bit of configuration chained to the standard AddMarten() usage:

builder.Services.AddMarten(opts =>
{
    // Marten configuration...
})
    // I added this to enroll Marten in the Jasper outbox
    .IntegrateWithJasper()
    
    // I also added this to opt into events being forward to
    // the Jasper outbox during SaveChangesAsync()
    .EventForwardingToJasper();

And that’s actually that. The configuration above will add the Jasper outbox tables to the Marten database for us, and let Marten’s database schema management manage those extra database objects.

Back to the command handler (mildly elided):

public class CompleteChartingHandler
{
    [MartenCommandWorkflow] 
    public ChartingFinished Handle(CompleteCharting charting, ProviderShift shift)
    {
        // validation code goes here!

        return new ChartingFinished(charting.AppointmentId, shift.BoardId);
    }
}

By opting into the outbox integration and event forwarding to Jasper from Marten, when this command handler is executed, the ChartingFinished events will be published — in this case just to an in-memory queue, but it could also be to an external transport — with Jasper’s outbox implementation that guarantees that the message will be delivered at least once as long as the database transaction to save the new event succeeds.

Conclusion and What’s Next?

There’s a tremendous amount of work in the rear window to get to the functionality that I demonstrated here, and a substantial amount of ambition in the future to drive this forward. I would love any possible feedback both positive and negative. Marten is a team effort, but Jasper’s been mostly my baby for the past 3-4 years, and I’d be happy for anybody who would want to get involved with that. I’m way behind in documentation for Jasper and somewhat for Marten, but that’s in flight.

My next couple posts to follow up on this are to:

  • Do a deeper dive into Jasper’s outbox and explain why it’s different and arguably more useful than the outbox implementations in other leading .NET tools
  • Introduce the usage of Rabbit MQ with Jasper for external messaging
  • Take a detour into the development and deployment time command line utilities built into Jasper & Marten through Oakton

Marten just got better for CQRS architectures

I’m assuming some prior knowledge of Event Sourcing as an architectural pattern here. I highly recommend Oskar Dudycz’s Introduction to Event Sourcing training kit or this video from Derek Comartin. While both Event Sourcing and the closely associated CQRS architectural style are both useful without the other, I’m still assuming here that you’re interested in using Marten for event sourcing within a larger CQRS architecture.

So you’re adopting an event sourcing style with Marten for your persistence within a larger CQRS architectural style. Crudely speaking, all “writes” to the system state involve sending a command message to your CQRS service with a workflow something like this:

In the course of handling the command message, our command handler (or HTTP endpoint) needs to:

  1. Fetch a “write model” that represents the state for the current workflow. This projected “write model” will be used by the command handler to validate the incoming command and also to…
  2. Decide what subsequent events should be published to update the state of the system based on the existing state and the incoming command
  3. Persist the new events to the ongoing Marten event store
  4. Possibly publish some or all of the new events to an outgoing transport to be acted upon asynchronously
  5. Deal with concurrency concerns, especially if there’s any significant chance that other related commands may be coming in for the same logical workflow at the same time

Do note that as I shift to implementations, I’m going to mostly bypass any discussion of design patterns or what I personally consider to be useless cruft from common CQRS approaches in the .Net or JVM worlds. I.e., no repositories will be used in any of this code.

As an example system, let’s say that we’re building a new, online telehealth system that, among other things, will track how a medical provider spends their time helping patients during a shift. Using Marten’s “self-aggregate” support, a simplified version of the provider shift state is represented by this model:

public class ProviderShift
{
    public Guid Id { get; set; }
    
    // Pay attention to this, this will come into play
    // later
    public int Version { get; set; }
    
    public Guid BoardId { get; private set; }
    public Guid ProviderId { get; init; }
    public ProviderStatus Status { get; private set; }
    public string Name { get; init; }
    
    public Guid? AppointmentId { get; set; }
    
    public static async Task<ProviderShift> Create(ProviderJoined joined, IQuerySession session)
    {
        var provider = await session.LoadAsync<Provider>(joined.ProviderId);
        return new ProviderShift
        {
            Name = $"{provider.FirstName} {provider.LastName}",
            Status = ProviderStatus.Ready,
            ProviderId = joined.ProviderId,
            BoardId = joined.BoardId
        };
    }

    public void Apply(ProviderReady ready)
    {
        AppointmentId = null;
        Status = ProviderStatus.Ready;
    }

    public void Apply(ProviderAssigned assigned)
    {
        Status = ProviderStatus.Assigned;
        AppointmentId = assigned.AppointmentId;
    }
    
    public void Apply(ProviderPaused paused)
    {
        Status = ProviderStatus.Paused;
        AppointmentId = null;
    }

    // This is kind of a catch all for any paperwork the
    // provider has to do after an appointment has ended
    // for the just concluded appointment
    public void Apply(ChartingStarted charting) => Status = ProviderStatus.Charting;
}

Next up, let’s play out the user story for a provider to complete their “charting” activity after a patient appointment concludes. Looking at the sequence diagram and the bullet list of concerns for each command handler, we’ve got a few things to worry about. Never fear though, because Marten has you (mostly) covered today with a couple new features introduced in Marten v5.4 last week.

Starting with this simple command:

public record CompleteCharting(
    Guid ShiftId, 
    Guid AppointmentId, 
    int Version);

We’ll use Marten’s brand new IEventStore.FetchForWriting<T>() API to whip up the basic command handler (just a small ASP.Net Core Controller endpoint):

    public async Task CompleteCharting(
        [FromBody] CompleteCharting charting, 
        [FromServices] IDocumentSession session)
    {
        /* We've got options for concurrency here! */
        var stream = await session
            .Events.FetchForWriting<ProviderShift>(charting.ShiftId);

        // Validation on the ProviderShift aggregate
        if (stream.Aggregate.Status != ProviderStatus.Charting)
        {
            throw new Exception("The shift is not currently charting");
        }
        
        // We "decided" to emit one new event
        stream.AppendOne(new ChartingFinished(stream.Aggregate.AppointmentId.Value, stream.Aggregate.BoardId));

        await session.SaveChangesAsync();
    }

The FetchForWriting() method used above is doing a couple different things:

  1. Finding the current, persisted version of the event stream for the provider shift and loading that into the current document session to help with optimistic concurrency checks
  2. Fetching the current state of the ProviderShift aggregate for the shift id coming in on the command. Note that this API papers over whether the aggregate in question is a “live aggregate” that needs to be calculated on the fly from the raw events, or was previously persisted as just a Marten document by either an inline or asynchronous projection. I would strongly recommend that “write model” aggregates be either inline or live to avoid eventual consistency issues.

Concurrency?!?

Hey, the hard truth is that it’s easy for the command to be accidentally or incidentally dispatched to your service multiple times from messaging infrastructure, multiple users doing the same action in different sessions, or somebody clumsy like me just accidentally clicking a button too many times. One way or another, we may need to harden our command handler against concurrency concerns.

The usage of FetchForWriting<T>() will actually set you up for optimistic concurrency checks. If someone else manages to successfully process a command against the same provider shift between the call to FetchForWriting<T>() and IDocumentSession.SaveChangesAsync(), you’ll get a Marten ConcurrencyException thrown by SaveChangesAsync() that will abort and rollback the transaction.
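In code, guarding against that could look like the sketch below, assuming Marten’s ConcurrencyException from the Marten.Exceptions namespace:

using Marten.Exceptions;

try
{
    var stream = await session
        .Events.FetchForWriting<ProviderShift>(charting.ShiftId);

    // ... validate the aggregate and append new events as shown above ...

    await session.SaveChangesAsync();
}
catch (ConcurrencyException)
{
    // Somebody else won the race between FetchForWriting() and
    // SaveChangesAsync(). Surface an HTTP 409 to the client, or
    // retry the whole command from scratch against the new state
}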

Moving on though, let’s tighten up the optimistic version check by telling Marten up front what version our command thinks the provider shift is at on the server. First though, we need to get the current version back to the client that’s collecting changes to our provider shift. If you scan back to the ProviderShift aggregate above, you’ll see this property:

    public int Version { get; set; }

With another new little feature in Marten v5.4, the Marten projection support will automatically set the value of a Version member to the latest stream version for a single stream aggregate like the ProviderShift. Knowing that, and assuming that ProviderShift is updated inline, we could just deliver the whole ProviderShift to the client with this little web service endpoint (using Marten.AspNetCore extensions):

    [HttpGet("/shift/{shiftId}")]
    public Task GetProviderShift(Guid shiftId, [FromServices] IQuerySession session)
    {
        return session.Json.WriteById<ProviderShift>(shiftId, HttpContext);
    }

The Version property can be a field, scoped as internal, or read-only. Marten is using a dynamically generated Lambda that can happily bypass whatever scoping rules you have to set the version to the latest event for the stream represented by this aggregate. The Version naming convention can also be explicitly ignored, or redirected to a totally differently named member. Lastly, it can even be a .Net Int64 type too — but if you’re doing that, you probably have some severe modeling issues that should be addressed first!

Back to our command handler. If the client has what’s effectively the “expected starting version” of the ProviderShift and sends the CompleteCharting command with that version, we can change the first line of our handler method code to this:

        var stream = await session
                
            // Note: I'm passing in the expected, starting provider shift
            // version from the command
            .Events.FetchForWriting<ProviderShift>(charting.ShiftId, charting.Version);

This new version will throw a ConcurrencyException right off the bat if the expected, starting version is not the same as the last, persisted version in the database. After that, it’s the same optimistic concurrency check at the point of calling SaveChangesAsync() to commit the changes.

Lastly, since Marten is built upon a real database instead of trying to be its own specialized storage engine like many other event sourcing tools, we’ve got one last trick. Instead of putzing around with optimistic concurrency checks let’s go to a pessimistic, exclusive lock on the specific provider shift so that only one session at a time can ever be writing to that provider shift with this variation:

        var stream = await session
                
            // Note: This will try to "wait" to claim an exclusive lock for writing
            // on the provider shift event stream
            .Events.FetchForExclusiveWriting<ProviderShift>(charting.ShiftId);
        

As you can see, Marten has some new functionality to make it even easier to use Marten within CQRS architectures by eliminating some previously repetitive code in both queries on projected state and in command handlers where you need to use Marten’s concurrency control.

Wait, not so fast, you missed some things!

I missed a couple very big things in the sample code above. For one, we’d probably want to broadcast the new events through some kind of service bus to allow other systems or just our own system to asynchronously do other work (like trying to assign our provider to another ready patient appointment). To do that reliably so that the event capture and the outgoing events being published succeed or fail together in one atomic action, I really need an “outbox” of some sort integrated into Marten.

I also left out any kind of potential error handling or message retry capabilities around the concurrency exceptions. And lastly (that I can think of offhand), I completely left out any discussion of the instrumentation you’d want in any kind of grown up system.

Since we’re in the middle of the NBA playoffs, I’m reminded of a Shaquille O’Neal quote from when his backup was Alonzo Mourning, and Mourning had a great game off the bench: “sometimes Superman needs some help from the Incredible Hulk.” In this case, part of the future of Marten is to be combined with another project called Jasper that is going to add external messaging with a robust outbox implementation for Marten to create a full stack for CQRS architectures. Maybe as soon as late next week or at least in June, I’ll write a follow up showing the Marten + Jasper combination that deals with the big missing pieces of this post.

JasperFx OSS Plans for .Net 6 (Marten et al)

I’m going to have to admit that I got caught flat footed by the .Net 6 release a couple weeks ago. I hadn’t really been paying much attention to the forthcoming changes, maybe got cocky by how easy the transition from netcoreapp3.1 to .Net 5 was, and have been unpleasantly surprised by how much work it’s going to take to move some OSS projects up to .Net 6. All at the same time that the advance users of the world are clamoring for all their dependencies to target .Net 6 yesterday.

All that being said, here’s my running list of plans to get the projects in the JasperFx GitHub organization successfully targeting .Net 6. I’ll make edits to this page as things get published to Nuget.

Baseline

Baseline is a grab bag utility library full of extension methods that I’ve relied on for years. Nobody uses it directly per se, but it’s a dependency of just about every other project in the organization, so it went first with the 3.2.2 release adding a .Net 6 target. No code changes were necessary other than adding .Net 6 to the CI testing. Easy money.

Oakton

EDIT: Oakton v4.0 is up on Nuget. WebApplication is supported, but with this model you can’t override configuration in commands like you can with HostBuilder. I’ll do a follow up at some point to fill in this gap.

Oakton is a tool to add extensible command line options to .Net applications based on the HostBuilder model. Oakton is my problem child right now because it’s a dependency in several other projects and its current model does not play nicely with the new WebApplicationBuilder approach for configuring .Net 6 applications. I’d also like to get the Oakton documentation website moved to the VitePress + MarkdownSnippets model we’re using now for Marten and some of the other JasperFx projects. I think I’ll take a shortcut here and publish the Nuget and let the documentation catch up later.
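
For context, the basic Oakton hookup against the existing HostBuilder model looks roughly like this minimal sketch (the forthcoming WebApplicationBuilder support is the part that’s still in flux):

    using System.Threading.Tasks;
    using Microsoft.Extensions.Hosting;
    using Oakton;

    public class Program
    {
        public static Task<int> Main(string[] args)
        {
            return Host.CreateDefaultBuilder(args)
                // Oakton takes over the command line, with "run" as the
                // default command that starts the host as normal
                .RunOaktonCommands(args);
        }
    }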

Alba

Alba is an automated testing helper for ASP.Net Core. Just like Oakton, Alba worked very well with the HostBuilder model, but was thrown for a loop by the new WebApplicationBuilder configuration model that’s the mechanism for using the new Minimal API (*cough* inevitable Sinatra copy *cough*) model. Fortunately though, Hawxy came through with a big pull request to make Alba finally work with the WebApplicationFactory model that can accommodate the new WebApplicationBuilder model, so we’re back in business. Alba 5.1 will be published soon with that work after some documentation updates, and hopefully some testing of the Oakton + WebApplicationBuilder + Alba combination.

EDIT: Alba 7.0 is up with the necessary changes, but the docs will come later this week

Lamar

Lamar is an IoC/DI container and the modern successor to StructureMap. The biggest issue with Lamar on .Net 6 was Nuget dependencies on the IServiceCollection model, plus needing some extra implementation to light up the implied service model of Minimal APIs. All the current unit tests and even integration tests with ASP.Net Core are passing on .Net 6. What’s left to finish up a new Lamar 7.0 release:

  • One .Net 6 related bug in the diagnostics
  • Better Minimal API support
  • Upgrade Oakton & Baseline dependencies in some of the Lamar projects
  • Documentation updates for the new IAsyncDisposable support and usage with WebApplicationBuilder with or without Minimal API usage

EDIT: Lamar 7.0 is up on Nuget with .Net 6 support

Marten/Weasel

We just made the gigantic V4 release a couple months ago knowing that we’d have to follow up quickly with a V5 release with a few breaking changes to accommodate .Net 6 and the latest version of Npgsql. We are having to make a full point release, so that opens the door for other breaking changes that didn’t make it into V4 (don’t worry, I think shifting from V4 to V5 will be easy for most people). The other Marten core team members have been doing most of the work for this so far, but I’m going to jump into the fray later this week to do some last minute changes:

  • Review some internal changes to Npgsql that might have performance impacts on Marten
  • Consider adding an event streaming model within the new V4 async daemon for folks who want to use it to publish events to some kind of transport (Kafka? Some kind of queue?) with strict ordering. This won’t be much yet, but it keeps coming up, so we might as well consider it.
  • Multi-tenancy through multiple databases. It keeps coming up, and potentially causes breaking API changes, so we’re at least going to explore it

I’m trying not to slow down the Marten V5 release with .Net 6 support for too long, so this is all either happening really fast or not at all. I’ll blog more later this week about multi-tenancy & Marten.

Weasel is a spin off library from Marten for database change detection and ADO.Net helpers that are reused in other projects now. It will be published simultaneously with Marten.

Jasper

Oh man, I’d love, love, love to have Jasper 2.0 done by early January so that it’ll be available for usage at my company on some upcoming work. This work is on hold while I deal with the other projects, my actual day job, and family and stuff.

Marten Takes a Giant Leap Forward with the Official V4 Release!

Starting next week I’ll be doing some more deep dives into new Marten V4 improvements and some more involved sample usages.

Today I’m very excited to announce the official release of Marten V4.0! The Nugets just went live, and we’ve published our completely revamped project website at https://martendb.io.

This has been at least a two year journey of significant development effort by the Marten core team and quite a few contributors, preceded by several years of brainstorming within the Marten community about the improvements realized by this release. There’s plenty more to do in the Marten backlog, but I think this V4 release puts Marten on a very solid technical foundation for the long term future.

This was a massive effort, and I’d like to especially thank the other core team members: Oskar Dudycz, for answering so many user questions and being the champion for our event sourcing feature set, and Babu Annamalai, for the newly improved website and all our grown-up DevOps infrastructure. Their contributions over the years and especially on this giant release have been invaluable.

I’d also like to thank:

  • JT for taking on the nullability sweep and many other things
  • Ville Häkli, who might have accidentally become our best tester and helped us discover and deal with several issues along the way
  • Julien Perignon and his team for their patience and help with the Marten V4 shakedown cruise
  • Barry Hagan, who started the ball rolling with Marten’s new, expanded metadata collection
  • Raif Atef for several helpful bug reports and some related fixes
  • Simon Cropp for several pull requests and doing some dirty work
  • Kasper Damgård for a lot of feedback on Linq queries and memory usage
  • Adam Barclay, who helped us improve Marten’s multi-tenancy support and its usability

and many others who raised actionable issues, gave us feedback, and even made code contributions. Keeping in mind that I personally grew up on a farm in the middle of nowhere in the U.S., it’s a little mind-blowing to me to work on a project of this magnitude that at a quick glance included contributors from at least five continents on this release.

One of my former colleagues at Calavista likes to ask prospective candidates for senior architect roles what project they’ve done that they’re the most proud of. I answered “Marten” at the time, but I think I mean that even more now.

What Changed in this Release?

To quote the immortal philosopher Ferris Bueller:

The question isn’t ‘what are we going to do’, the question is ‘what aren’t we going to do?’

We did try to write up a list of breaking changes for V4 in the migration guide, but here are some highlights:

  • We generally made a huge sweep of the Marten code internals looking for every possible opportunity to reduce object allocations and dictionary lookups for low level performance improvements. The new dynamic code generation approach in Marten helped get us to that point.
  • We think Marten is even easier to bootstrap in new projects with improvements to the IServiceCollection.AddMarten() extensions (see the sketch just after this list)
  • Marten supports System.Text.Json — but use that with some caution of course
  • The Linq support took a big step forward with a near rewrite, and filled in some missing support, with better querying through child collections as a big example. The Linq support is now much more modular, and we think that will help us continue to grow that support. It’s a small thing, but the Linq parsing was even optimized a little bit for performance
  • Event Sourcing in Marten got a lot of big improvements that were holding up adoption by some users, especially in regards to the asynchronous projection support. The “async daemon” was completely rewritten and is now much easier to incorporate into .Net systems.
  • As a big user request, Marten supports many more options for tracking flexible metadata like correlation ids and even user-defined headers in both document and event storage
  • Multi-tenancy support was improved
  • Soft delete support got some additional usability features
  • PLv8 adoption has been a stumbling block, so all the features related to PLv8 were removed to a separate add-on library called Marten.PLv8
  • The schema management features in Marten made some significant strides and should be able to handle more scenarios with less manual intervention — we think/hope/let’s just be positive for now
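
As a quick illustration of the bootstrapping improvements called out in the list above, the AddMarten() hookup looks something like this minimal sketch, with a placeholder connection string:

    using Marten;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;

    var host = Host.CreateDefaultBuilder(args)
        .ConfigureServices(services =>
        {
            // Registers IDocumentStore, IQuerySession, and IDocumentSession
            // services with the DI container
            services.AddMarten(opts =>
            {
                // Placeholder connection string
                opts.Connection("Host=localhost;Database=app;Username=app;Password=secret");
            });
        })
        .Build();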

What’s Next for Marten?

Full point OSS releases inevitably bring a barrage of user-reported errors, questions about migrating, possibly confusing wording in new documentation, and lots of queries about some significant planned features we just couldn’t fit into this already giant release. For that matter, we’ll probably have to quickly spin out a V5 release for .Net 6 and Npgsql 6 because there are breaking changes coming due to those dependencies. OSS projects are never finished, only abandoned, and there’ll be a world of things to take care of in the aftermath of 4.0 — but for right now, Don’t Steal My Sunshine!

Efficient Web Services with Marten V4

We’re genuinely close to finally pulling the damn trigger on Marten V4. One of the last things I’m coding for the release is a new recipe for users to write very efficient web services backed by Marten database storage.

A Traditional .Net Approach

Before I get into the exact mechanics of that, let’s set the stage a little bit and draw some contrasts with a more traditional .Net web service stack. Let’s start by saying that in that traditional stack, you’re not using any kind of CQRS and the system state is persisted in the “one true model” approach using Entity Framework Core and an RDBMS like Sql Server. Now, in some web services you need to query data from the database and serve up a subset of that data into the outgoing HTTP response in a web service endpoint. The typical flow — with a focus on what’s happening under the covers — would be to:

  1. ASP.Net Core receives an HTTP GET, finds the proper endpoint, and calls the right handler for that route. Not that it actually matters, but let’s assume the endpoint handler is an MVC Core Controller method
  2. ASP.Net Core invokes a call to your DI container to build up the MVC controller object, and calls the right method for the route
  3. You’ve been to .Net conferences and internalized the idea that an MVC controller shouldn’t get too big or do things besides HTTP mediation, so you’re delegating to a tool like MediatR. MediatR itself is going to go through another DI container service resolution to find the right handler for the input model, then invoke that handler
  4. EF Core issues a query against your Sql Server database. If you’re needing to fetch data on more than just the root aggregate model, the query is going to be an outer join against all the child tables too
  5. EF Core loops around in the database rows and creates objects for your .Net domain model classes based on its ORM mappings
  6. You certainly don’t want to send the raw domain model over the wire because of coupling concerns, or you don’t want every bit of data exposed to the client in a particular web service, so you use some kind of tool like AutoMapper to transform the internal domain model objects built up by EF Core into Data Transfer Objects (DTOs) purposely designed to go over the wire.
  7. Lastly, you return the outgoing DTO model, which is serialized to JSON and sent down the HTTP response by MVC Core

Sound pretty common? That’s also a lot of overhead. A lot of memory allocations, data mappings between structures, JSON serialization, and a lot of dictionary lookups just to get data out of the database and spit it out into the HTTP response. It’s also a non-trivial amount of code, and I’d argue that some of the tools I mentioned are high ceremony.
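
To make that ceremony concrete, here’s a condensed sketch of the traditional flow described above. Every type name here is a hypothetical stand-in:

    // Hypothetical types throughout; this is just the shape of the
    // traditional controller -> MediatR -> EF Core -> AutoMapper flow
    public class IssueController : ControllerBase
    {
        private readonly IMediator _mediator;

        public IssueController(IMediator mediator) => _mediator = mediator;

        [HttpGet("/issue/{issueId}")]
        public Task<IssueDto> Get(Guid issueId)
            => _mediator.Send(new GetIssue(issueId));
    }

    public record GetIssue(Guid IssueId) : IRequest<IssueDto>;

    public class GetIssueHandler : IRequestHandler<GetIssue, IssueDto>
    {
        private readonly IssueDbContext _db;
        private readonly IMapper _mapper;

        public GetIssueHandler(IssueDbContext db, IMapper mapper)
        {
            _db = db;
            _mapper = mapper;
        }

        public async Task<IssueDto> Handle(GetIssue query, CancellationToken token)
        {
            // Outer join against the child Notes table
            var issue = await _db.Issues
                .Include(x => x.Notes)
                .FirstOrDefaultAsync(x => x.Id == query.IssueId, token);

            // One more round of object mapping before JSON serialization
            return _mapper.Map<IssueDto>(issue);
        }
    }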

Now do CQRS!

I initially thought of CQRS as looking like a whole lot more work to code, and that’s not an uncommon impression when folks are first introduced to it. I’ve come to realize over time that it’s not really more work so much as it’s really just doing about the same amount of work in different places and different times in the application’s workflow.

Now let’s at least introduce CQRS into our application architecture. I’m not saying that that automatically implies using event sourcing, but let’s say that you are writing a pre-built “read side” model of the state of your system directly to a database of some sort. Now from that same web service I was describing before, you just need to fetch that persisted “read side” model from the database and spit that state right out to the HTTP response.

Now then, I’ve just yada, yada’d all the complexity of the CQRS architecture that continuously updates the read side view for you, but hey, Marten does that for you too and that can be a shortly forthcoming follow up blog post.

Finally bringing Marten V4 into play, let’s say our read side model for an issue tracking system looks like this:

    public class Note
    {
        public string UserName { get; set; }
        public DateTime Timestamp { get; set; }
        public string Text { get; set; }
    }

    public class Issue
    {
        public Guid Id { get; set; }
        public string Description { get; set; }
        public bool Open { get; set; }

        public IList<Note> Notes { get; set; }
    }

Before anyone gets bent out of shape by this, it’s perfectly possible to tell Marten to serialize the persisted documents to JSON with camel or even snake casing to be more idiomatic JSON or Javascript friendly.
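
For instance, something along these lines in the Marten bootstrapping would do it. This is a sketch; check the Marten documentation for the full set of serializer settings:

    services.AddMarten(opts =>
    {
        opts.Connection(connectionString);

        // Persist documents with camel-cased member names so the raw
        // JSON is already idiomatic for Javascript clients
        opts.UseDefaultSerialization(casing: Casing.CamelCase);
    });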

Now, let’s build out two controller endpoints, one that gives you an Issue payload by searching by its id, and a second endpoint that gives you all the open Issue models in a single JSON array payload. That controller — using some forthcoming functionality in a new Marten.AspNetCore Nuget — looks like this:

    public class GetIssueController: ControllerBase
    {
        private readonly IQuerySession _session;

        public GetIssueController(IQuerySession session)
        {
            _session = session;
        }

        [HttpGet("/issue/{issueId}")]
        public Task Get(Guid issueId)
        {
            // This "streams" the raw JSON to the HttpResponse
            // w/o ever having to read the full JSON string or
            // deserialize/serialize within the HTTP request
            return _session.Json
                .WriteById<Issue>(issueId, HttpContext);
        }

        [HttpGet("/issue/open")]
        public Task OpenIssues()
        {
            // This "streams" the raw JSON to the HttpResponse
            // w/o ever having to read the full JSON string or
            // deserialize/serialize within the HTTP request
            return _session.Query<Issue>()
                .Where(x => x.Open)
                .WriteArray(HttpContext);
        }
    }

In the GET: /issue/{issueId} endpoint, you’ll notice the call to the new IQuerySession.Json.WriteById() extension method, and how I’m passing it the current HttpContext. That method:

  1. Executes a database query against the underlying Postgresql database. In this case, all of the data is stored in a single column of a single row, so there are no JOINs or sparse data sets like there would be with an ORM querying an object that has child collections.
  2. Writes the raw bytes of the persisted JSON data directly to HttpResponse.Body without ever bothering to write the whole thing to a .Net string, and definitely without incurring the overhead of JSON deserialization/serialization. That extension method also sets the HTTP content-length and content-type response headers, as well as setting the HTTP status code to 200 if the document is found or 404 if it is not.

In the second HTTP endpoint for GET: /issue/open, the call to WriteArray(HttpContext) is doing something very similar, but writing the results as a JSON array.

By no means is this technique going to be applicable to every HTTP GET endpoint, but where it is, it’s far, far more efficient and simpler to code than the more traditional approach that involves all the extra pieces and incurs so many more memory allocations and hardware operations just to shoot some JSON down the wire.

For a little more context, here’s a test against the /issue/{issueId} endpoint, with a cameo from Alba to help me test the HTTP behavior:

        [Fact]
        public async Task stream_a_single_document_hit()
        {
            var issue = new Issue {Description = "It's bad", Open = true};

            var store = theHost.Services.GetRequiredService<IDocumentStore>();
            using (var session = store.LightweightSession())
            {
                session.Store(issue);
                await session.SaveChangesAsync();
            }

            var result = await theHost.Scenario(s =>
            {
                s.Get.Url($"/issue/{issue.Id}");
                s.StatusCodeShouldBeOk();
                s.ContentTypeShouldBe("application/json");
            });

            var read = result.ReadAsJson<Issue>();

            read.Description.ShouldBe(issue.Description);
        }

and one more test showing the “miss” behavior:

        [Fact]
        public async Task stream_a_single_document_miss()
        {
            await theHost.Scenario(s =>
            {
                s.Get.Url($"/issue/{Guid.NewGuid()}");
                s.StatusCodeShouldBe(404);
            });
        }

Feedback anyone?

This code isn’t released yet, not even in an RC Nuget, so feel very free to make any kind of suggestion or send us feedback.

Integration Testing: IHost Lifecycle with NUnit

Starting yesterday, all of my content about automated testing is curated under the new Automated Testing page on this site.

I kicked off a new blog series yesterday with Integration Testing: IHost Lifecycle with xUnit.Net. I started by just discussing how to manage the lifecycle of a .Net IHost inside of an xUnit.Net testing library. I used xUnit.Net because I’m much more familiar with that library, but we mostly use NUnit for our testing at MedeAnalytics, so I’m going to see how the IHost lifecycle I discussed and demonstrated last time in xUnit.Net could work in NUnit.

To catch you up from the previous post, I have two projects:

  1. An ASP.Net Core web service creatively named WebApplication. This web service has a small endpoint that allows you to post an array of numbers and get back a response telling you the sum and product of those numbers. The code for that controller action is shown in my previous post, and a rough reconstruction appears just after this list.
  2. A second testing project using NUnit that references WebApplication. The testing project is going to use Alba for integration testing at the HTTP layer.
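
Since the endpoint code itself lives in the previous post, here’s that rough reconstruction. The controller name is a hypothetical stand-in, but the Numbers and Result types and the /math route match the test further down:

    public class Numbers
    {
        public int[] Values { get; set; }
    }

    public class Result
    {
        public int Sum { get; set; }
        public int Product { get; set; }
    }

    // Hypothetical controller name; the route matches the test below
    public class ArithmeticController : ControllerBase
    {
        [HttpPost("/math")]
        public Result Post([FromBody] Numbers input)
        {
            return new Result
            {
                Sum = input.Values.Sum(),
                Product = input.Values.Aggregate(1, (acc, x) => acc * x)
            };
        }
    }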

With NUnit, I chose to use the SetupFixture construct to manage and share the IHost for the test suite like this:

    [SetUpFixture]
    public class Application
    {
        // Make this lazy so you don't build it out
        // when you don't need it.
        private static readonly Lazy<IAlbaHost> _host;

        static Application()
        {
            _host = new Lazy<IAlbaHost>(() => Program
                .CreateHostBuilder(Array.Empty<string>())
                .StartAlba());
        }

        public static IAlbaHost AlbaHost => _host.Value;

        // I want to expose the underlying Lamar container for some later
        // usage
        public static IContainer Container => (IContainer)_host.Value.Services;

        // Make sure that NUnit will shut down the AlbaHost when
        // all the tests are finished
        [OneTimeTearDown]
        public void Teardown()
        {
            if (_host.IsValueCreated)
            {
                _host.Value.Dispose();
            }
        }
    }

With the IHost instance managed by the Application static class above, I can consume the Alba host in an NUnit test like this:

    public class sample_integration_fixture
    {
        [Test]
        public async Task happy_path_arithmetic()
        {
            // Building the input body
            var input = new Numbers
            {
                Values = new[] {2, 3, 4}
            };

            var response = await Application.AlbaHost.Scenario(x =>
            {
                // Alba deals with Json serialization for us
                x.Post.Json(input).ToUrl("/math");
                
                // Enforce that the HTTP Status Code is 200 Ok
                x.StatusCodeShouldBeOk();
            });

            var output = response.ReadAsJson<Result>();
            output.Sum.ShouldBe(9);
            output.Product.ShouldBe(24);
        }
    }

And now a couple notes about what I did in Application:

  1. I think it’s important to create the IHost lazily, so that you don’t incur the cost of spinning up the IHost when you might be running other tests in your suite that don’t need the IHost. Rapid developer feedback is important, and that’s an awfully easy optimization that could pay off.
  2. The static Teardown() method is decorated with the [OneTimeTearDown] attribute to direct NUnit to call that method after all the tests are executed. I cannot stress enough how important it is to clean up resources in your test harness to ensure your ability to quickly iterate through subsequent test runs.
  3. NUnit has a very different model for parallelization than xUnit.Net, and it’s completely “opt in”, so I think there’s less to worry about on that front with NUnit.

At this point I don’t think I have a hard opinion about xUnit.Net vs. NUnit, and I certainly wouldn’t bother switching an existing project from one to the other (even though I’ve certainly done that plenty of times in the past). I haven’t thought this one through enough, but I still think that xUnit.Net is a little bit cleaner for unit testing, but NUnit might be better for integration testing because it gives you finer grained control over fixture lifecycle and has some built in support for test timeouts and retries. At one point I had high hopes for Fixie as another alternative, and that project has become active again, but it would have a long road to challenge either of the two now mainstream tools.

What’s Next?

This series is meant to support my colleagues at MedeAnalytics, so it’s driven by what we just happen to be talking about at any given point. Tomorrow I plan to put out a little post on some Lamar-specific tricks that are helpful in integration testing. Beyond that, I think dealing with database state is the most important thing we’re missing at work, so that needs to be a priority.