My OSS Plans for 2023

Before I start, I am lucky to be part of a great group of OSS collaborators across the board. In particular, thanks to Oskar, Babu, Khalid, Hawxy, and Eric Smith for helping make 2022 a hugely productive and satisfying year in OSS work for me. I’m looking forward to working with y’all more in the times ahead.

In recent years I’ve kicked off my side project work with an overly optimistic and hopelessly unrealistic list of ambitions for my OSS projects. You can find the 2022 and 2021 versions still hanging around, only somewhat fulfilled. I’m going to put down my markers for what I hope to accomplish in 2023 — and because I’m the kind of person who obsesses more over the list of things to do than over past accomplishments, I’ll take some time to review what was done in many of these projects in 2022. Onward.

Marten is going gangbusters, and 2022 was a very encouraging year for the Marten core team and me. The sizable V5.0 release dropped in March with some significant usability improvements, multi-tenancy with database-per-tenant support, and other goodness specifically meant to address apparent flaws in the gigantic V4.0 release from late 2021.

For 2023, the V6 release will come soon, mostly with changes to underlying dependencies.

Beyond that, I think that V7 will be a massively ambitious release in terms of important new features — hopefully in time for Event Sourcing Live 2023. If I had a magic wand that would magically give us all enough bandwidth to pull it off, my big hopes for Marten V7 are:

  • The capability to massively scale the Event Store functionality in Marten to much, much larger systems
  • Improved throughput and capacity with asynchronous projections
  • A formal, in the box subscription model
  • The ability to shard document database entities
  • Dive into the Linq support again, but this time use Postgresql V15 specific functionality to make the generated queries more efficient — especially for any possible query that goes through child collections. I haven’t done the slightest bit of detailed analysis on that one yet though
  • The ability to rebuild projections with zero downtime and/or faster projection rebuilds

Marten will also be impacted by the work being done with…

After a couple years of having almost given up on it, I restarted work pretty heavily on what had been called Jasper. While building a sample application for a conference talk, Oskar & I realized there was some serious opportunity for combining Marten and the then-Jasper for very low ceremony CQRS architectures. Now, what’s the best way to revitalize an OSS project that was otherwise languishing and basically a failure in terms of adoption? You guessed it: rename the project with an obvious theme related to an already successful OSS project and get some new, spiffier graphics and a better website! Plus basically all-new internals, new features, quite a few performance improvements, better instrumentation capabilities, more robust error handling, and a unique runtime model that I very sincerely believe will lead to better developer productivity and better application performance than existing tools in the .NET space.

Hence, Wolverine is the new, improved message bus and local mediator (I like to call that a “command bus” so as to not suffer the obvious comparisons to MediatR which I feel shortchanges Wolverine’s much greater ambitions). Right now I’m very happy with the early feedback from Wolverine’s JetBrains webinar (careful, the API changed a bit since then) and its DotNetRocks episode.
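
To make the “command bus” idea concrete, here’s a minimal sketch in the spirit of the DebitAccount sample used throughout these posts. The Account type and the wiring comment are illustrative assumptions on my part; the point is that Wolverine discovers handlers by naming convention, so the handler itself is just a plain static method with no framework interface:

```csharp
using System;

// Hypothetical command and entity types, loosely based on the
// banking sample that appears later in these posts
public record DebitAccount(Guid AccountId, decimal Amount);

public class Account
{
    public Guid Id { get; set; }
    public decimal Balance { get; set; }
}

public static class DebitAccountHandler
{
    // No interface, no base class -- Wolverine finds "Handle" methods
    // by convention. Persistence and messaging are omitted here.
    public static void Handle(DebitAccount command, Account account)
    {
        account.Balance -= command.Amount;
    }
}

// In application code the command would be executed in-process through
// the bus, e.g.:  await bus.InvokeAsync(new DebitAccount(id, 100m));
```

The handler logic stays a pure function, which is a big part of why the “command bus” model tests so easily.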

Right now the goal is to make it to 1.0 by the end of January — with the proviso that Marten V6 has to go first. The remaining work is mostly to finish the documentation website and a handful of tactical feature items mostly to prove out some of the core abstractions before minting 1.0.

Luckily for me, a small group of us at work have started a proof of concept for rebuilding/converting/migrating a very large system currently using NHibernate, Sql Server, and NServiceBus to Wolverine + Marten. That’s going to be an absolutely invaluable learning experience that will undoubtedly shape the short term work in both tools.

Beyond 1.0, I’m hoping to effectively use Wolverine to level up on a lot of technologies by adding:

  • Some other transport options (Kafka? Kinesis? EventBridge?)
  • Additional persistence options with Cosmos Db and Dynamo Db being the likely candidates so far
  • A SignalR transport
  • First class serverless support using Wolverine’s runtime model, with some way of optimizing the cold start
  • An option to use Wolverine’s runtime model for ASP.Net Core API endpoints. I think there’s some opportunity to allow for a low ceremony, high performance alternative for HTTP API creation while still being completely within the ASP.Net Core ecosystem

I hope that Wolverine is successful by itself, but the real goal of Wolverine is to allow folks to combine it with Marten to form the….

“Critter Stack”

The hope with Marten + Wolverine is to create a very effective platform for server-side .NET development in general. More specifically, the goal of the “critter stack” combination is to become the acknowledged industry leader for building systems with a CQRS plus Event Sourcing architectural model. And I mean across all development platforms and programming languages.

Pride goeth before destruction, and an haughty spirit before a fall.

Proverbs 16:18 KJV

And let me just more humbly say that there’s a ways to go to get there, but I’m feeling optimistic right now and want to set our sights pretty high. I especially feel good about having unintentionally made a huge career bet on Postgresql.

Lamar recently got its 10.0 release to add first class .NET 7.0 support (while also dropping anything < .NET 6) along with a couple performance improvements and bug fixes. There hasn’t been any new functionality added in the last year except for finally getting first class support for IAsyncDisposable. It’s unlikely that there will be much development in the new year for Lamar, but we use it at work, I still think it has advantages over the built-in DI container from .NET, and it’s vital for Wolverine. Lamar is here to stay.

Alba

Alba 7.0 (and a couple minor releases afterward) added first class .NET 7 support, much better support for testing Minimal API routes that accept and/or return JSON, and other tactical fixes (mostly by Hawxy).

See Alba for Effective ASP.Net Core Integration Testing for more information on how Alba improved this year.

I don’t have any specific plans for Alba this year, but I use Alba to test pieces of Marten and Wolverine and we use it at work. If I manage to get my way, we’ll be converting as many slow, unreliable Selenium based tests to fast running Alba tests against HTTP endpoints in 2023 at work. Alba is here to stay.


Oakton had a significant new feature set around the idea of “stateful resources” added in 2022, specifically meant for supporting both Marten and Wolverine. We also cleaned up the documentation website. The latest version 6.0 brought Oakton up to .NET 7 while also using shared dependencies with the greater JasperFx family (Marten, Wolverine, Lamar, etc.). I don’t exactly remember when, but it also got better “help” presentation by leveraging Spectre.Console more.

I don’t have any specific plans for Oakton, but it’s the primary command line parser and command line utility library for Marten, Wolverine, and Lamar, so it’s going to be actively maintained.

And finally, I’ve registered my own company called “Jasper Fx Software.” It’s going much slower than I’d hoped, but at some point early in 2023 I’ll have my shingle out to provide support contracts, consulting, and custom development with the tools above. It’s just a side hustle for now, but we’ll see if that can become something viable over time.

To be clear about this, the Marten core team & I are very serious about building a paid, add-on model to Marten + Wolverine and some of the new features I described up above are likely to fall under that umbrella. I’m sneaking that in at the end of this, but that’s probably the main ambition for me personally in the new year.

What about…?

If it’s not addressed in this post, it’s either dead (StructureMap) or something I consider to be just a supporting player (Weasel). Storyteller, alas, is likely not coming back, unless it returns as something renamed “Bobcat”: a tool specifically designed to help automate tests for Marten or Wolverine where xUnit.Net by itself doesn’t do so hot. And if Bobcat does end up existing, it’ll leverage existing tools as much as possible.

Wolverine and “Clone n’ Go!” Development

I’ve been able to talk and write a bit about Wolverine in the last couple weeks. This post builds on the previous blog posts in that series.

When I start with a brand new codebase, I want to be able to be up and going mere minutes after doing an initial clone of the Git repository. And by “going,” I mean being able to run all the tests and run any application in the codebase.

In most cases an application codebase I work with these days is going to have infrastructure dependencies. Usually a database, possibly some messaging infrastructure as well. Not to worry, because Wolverine has you covered with a lot of functionality out of the box to get your infrastructural dependencies configured in the shape you need to start running your application.

Before I get into Wolverine specifics, I’m assuming that the basic developer box has some baseline infrastructure installed:

  • The latest .NET SDK
  • Docker Desktop
  • Git itself
  • Node.js — not used by this post at all, but it’s almost impossible to not need Node.js at some point these days

Yet again, I want to go back to the simple banking application from previous posts that was using both Marten and Rabbit MQ for external messaging. Here’s the application bootstrapping:

using AppWithMiddleware;
using IntegrationTests;
using JasperFx.Core;
using Marten;
using Oakton;
using Wolverine;
using Wolverine.FluentValidation;
using Wolverine.Marten;
using Wolverine.RabbitMQ;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
{
    // This would be from your configuration file in typical usage
    opts.Connection(Servers.PostgresConnectionString);
    opts.DatabaseSchemaName = "wolverine_middleware";
})
    // This is the wolverine integration for the outbox/inbox,
    // transactional middleware, saga persistence we don't care about
    // yet
    .IntegrateWithWolverine()
    
    // Just letting Marten build out known database schema elements upfront
    // Helps with Wolverine integration in development
    .ApplyAllDatabaseChangesOnStartup();

builder.Host.UseWolverine(opts =>
{
    // Middleware introduced in previous posts
    opts.Handlers.AddMiddlewareByMessageType(typeof(AccountLookupMiddleware));
    opts.UseFluentValidation();

    // Explicit routing for the AccountUpdated
    // message handling. This has precedence over conventional routing
    opts.PublishMessage<AccountUpdated>()
        .ToLocalQueue("signalr")

        // Throw the message away if it's not successfully
        // delivered within 10 seconds
        .DeliverWithin(10.Seconds())
        
        // Not durable
        .BufferedInMemory();
    
    var rabbitUri = builder.Configuration.GetValue<Uri>("rabbitmq-broker-uri");
    opts.UseRabbitMq(rabbitUri)
        // Just do the routing off of conventions, more or less
        // queue and/or exchange based on the Wolverine message type name
        .UseConventionalRouting()
        
        // This tells Wolverine to set up any missing Rabbit MQ queues, exchanges,
        // or bindings needed by the application if they are missing
        .AutoProvision() 
        .ConfigureSenders(x => x.UseDurableOutbox());
});

var app = builder.Build();

// One Minimal API that just delegates directly to Wolverine
app.MapPost("/accounts/debit", (DebitAccount command, IMessageBus bus) => bus.InvokeAsync(command));

// This is important, I'm opting into Oakton to be my
// command line executor for extended options
return await app.RunOaktonCommands(args);

After cloning this codebase, I should be able to quickly run a docker compose up -d command from the root of the codebase to set up dependencies like this:

version: '3'
services:
  postgresql:
    image: "clkao/postgres-plv8:latest"
    ports:
     - "5433:5432"
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  
  rabbitmq:
    image: "rabbitmq:3-management"
    ports:
     - "5672:5672"
     - "15672:15672"

As it is, the Wolverine setup I showed above would allow you to immediately be up and running because:

  • In its default setting Marten is able to detect and build out missing database schema objects in the underlying application database at runtime
  • The Postgresql database schema objects necessary for Wolverine’s transactional outbox are created by Marten at bootstrapping time if they’re missing, thanks to the combination of the IntegrateWithWolverine() call and the ApplyAllDatabaseChangesOnStartup() declaration.
  • Any missing Rabbit MQ queues or exchanges are created at runtime due to the AutoProvision() declaration we made in the Rabbit MQ integration with Wolverine

Cool, right?

But there’s more! Wolverine heavily uses the related Oakton library for expanded command line utilities that can be helpful for diagnosing configuration issues, checking up on infrastructure, or applying infrastructure set up at deployment time instead of depending on doing things at runtime.

If I go to the root of the main project and type dotnet run -- help, I’ll get a list of the available command line options like this:

The available commands are:

  Alias           Description
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  check-env       Execute all environment checks against the application
  codegen         Utilities for working with JasperFx.CodeGeneration and JasperFx.RuntimeCompiler
  db-apply        Applies all outstanding changes to the database(s) based on the current configuration
  db-assert       Assert that the existing database(s) matches the current configuration
  db-dump         Dumps the entire DDL for the configured Marten database
  db-patch        Evaluates the current configuration against the database and writes a patch and drop file if there are any differences
  describe        Writes out a description of your running application to either the console or a file
  help            List all the available commands
  marten-apply    Applies all outstanding changes to the database based on the current configuration
  marten-assert   Assert that the existing database matches the current Marten configuration
  marten-dump     Dumps the entire DDL for the configured Marten database
  marten-patch    Evaluates the current configuration against the database and writes a patch and drop file if there are any differences
  projections     Marten's asynchronous projection and projection rebuilds
  resources       Check, setup, or teardown stateful resources of this system
  run             Start and run this .Net application
  storage         Administer the envelope storage


Use dotnet run -- ? [command name] or dotnet run -- help [command name] to see usage help about a specific command

Let me call out just a few highlights:

  • `dotnet run -- resources setup` would do any necessary set up of both the Marten and Rabbit MQ items. Likewise, if we were using Sql Server as the backing storage and integrating that with Wolverine as the outbox storage, this command would set up the necessary Sql Server tables and functions if they were missing. This applies equally to Wolverine’s Azure Service Bus and Amazon SQS integrations
  • `dotnet run -- check-env` would run a set of environment checks to verify that the application can connect to the configured Rabbit MQ broker, the Postgresql database, and any other checks you may have. This is a great way to make deployments “fail fast”
  • `dotnet run -- storage clear` would delete any persisted messages in the Wolverine inbox/outbox to remove old messages that might interfere with successful testing

Questions, comments, feedback? Hopefully this shows that Wolverine is absolutely intended for “grown up development” in real life.

Ephemeral Messages with Wolverine

I’ve been able to talk and write a bit about Wolverine in the last couple weeks. This post builds on the previous blog posts in that series.

This post is a little bonus content that I accidentally cut from the previous post.

Last time I talked about Wolverine’s support for the transactional outbox pattern for messages that just absolutely have to be delivered. About the same day that I was writing that post, I was also talking with a colleague through a very different messaging scenario, where a stream of status updates was being pushed to WebSocket-connected clients. In this case, the individual messages being broadcast had only temporary validity and were quickly obsolete. There’s absolutely no need for message persistence or guaranteed delivery. There’s also no good reason to even attempt to deliver a message that’s more than a few seconds old.

To that end, let’s go back yet again to the command handler for the DebitAccount command, but in this version I’m going to cascade an AccountUpdated message that would ostensibly be broadcast through WebSockets to any connected client:

    [Transactional] 
    public static IEnumerable<object> Handle(
        DebitAccount command, 
        Account account, 
        IDocumentSession session)
    {
        account.Balance -= command.Amount;
     
        // This just marks the account as changed, but
        // doesn't actually commit changes to the database
        // yet. That actually matters as I hopefully explain
        session.Store(account);
 
        // Conditionally trigger other, cascading messages
        if (account.Balance > 0 && account.Balance < account.MinimumThreshold)
        {
            yield return new LowBalanceDetected(account.Id);
        }
        else if (account.Balance < 0)
        {
            yield return new AccountOverdrawn(account.Id);
        }

        // Send out a status update message that is maybe being 
        // broadcast to websocket-connected clients
        yield return new AccountUpdated(account.Id, account.Balance);
    }

Now I need to switch to the Wolverine bootstrapping and configure some explicit routing of the AccountUpdated message. In this case, I’m going to let the WebSocket messaging of the AccountUpdated messages happen from a non-durable, local queue:

builder.Host.UseWolverine(opts =>
{
    // Middleware introduced in previous posts
    opts.Handlers.AddMiddlewareByMessageType(typeof(AccountLookupMiddleware));
    opts.UseFluentValidation();

    // Explicit routing for the AccountUpdated
    // message handling. This has precedence over conventional routing
    opts.PublishMessage<AccountUpdated>()
        .ToLocalQueue("signalr")

        // Throw the message away if it's not successfully
        // delivered within 10 seconds
        
        // THIS CONFIGURATION ITEM WAS ADDED IN v0.9.6
        .DeliverWithin(10.Seconds())
        
        // Not durable
        .BufferedInMemory();
    
    var rabbitUri = builder.Configuration.GetValue<Uri>("rabbitmq-broker-uri");
    opts.UseRabbitMq(rabbitUri)
        // Just do the routing off of conventions, more or less
        // queue and/or exchange based on the Wolverine message type name
        .UseConventionalRouting()
        .ConfigureSenders(x => x.UseDurableOutbox());

});

The call to DeliverWithin(10.Seconds()) puts a rule on the local “signalr” queue that all messages published to that queue have an effective expiration date of 10 seconds from the point at which the message was published. If the WebSocket publishing is backed up, or a couple of failure/retry cycles delay the message, Wolverine will discard the message before it’s processed.
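
Conceptually, the rule just stamps each outgoing envelope with a “deliver by” timestamp and discards the envelope once that deadline has passed. Here’s a simplified sketch of that idea (the Envelope type below is my own illustration of the concept, not Wolverine’s actual internal type):

```csharp
using System;

// Simplified model of a message envelope carrying a "deliver within" rule.
// Purely illustrative; not Wolverine's real Envelope implementation.
public class Envelope
{
    public object Message { get; }
    public DateTimeOffset? DeliverBy { get; }

    public Envelope(object message, TimeSpan? deliverWithin = null)
    {
        Message = message;

        // The expiration clock starts when the message is published
        DeliverBy = deliverWithin.HasValue
            ? DateTimeOffset.UtcNow.Add(deliverWithin.Value)
            : null;
    }

    // A queue would check this before handing the message to a handler,
    // silently dropping anything that's past its deadline
    public bool IsExpired(DateTimeOffset now)
        => DeliverBy.HasValue && now > DeliverBy.Value;
}
```

The important detail is that the deadline travels with the message itself, so any retry cycles or queue backups automatically respect it.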

This option is perfect for transient status messages that have short shelf lives. Wolverine also lets you happily mix and match durable messaging and transient messages in the same message batch, as I hope is evident in the sample handler method in the first code sample.

Above, I used a fluent interface to apply the “deliver within” rule at the local queue level. That rule can also be applied at the message type level with an attribute, like this alternative usage:

// The attribute directs Wolverine to send this message with 
// a "deliver within 5 seconds, or discard" directive
[DeliverWithin(5)]
public record AccountUpdated(Guid AccountId, decimal Balance);

Finally, I can set the “deliver within” rule on a message-by-message basis at the time of sending, like so:

        // "messaging" is a Wolverine IMessageContext or IMessageBus service 
        // Do the deliver within rule on individual messages
        await messaging.SendAsync(new AccountUpdated(account.Id, account.Balance),
            new DeliveryOptions { DeliverWithin = 5.Seconds() });

I’ll try to sneak in one more post before mostly shutting down for Christmas and New Year’s. Next time up I’d like to talk about Wolverine’s support for grown up “clone n’ go” development through its facilities for configuring infrastructure like Postgresql or Rabbit MQ for you based on your application configuration.

Transactional Outbox/Inbox with Wolverine and why you care

I’ve been able to talk and write a bit about Wolverine in the last couple weeks. This post builds on my last two Wolverine blog posts.

Alright, back to the sample message handler from my previous two blog posts. Here’s the shorthand version:

    [Transactional] 
    public static async Task Handle(
        DebitAccount command, 
        Account account, 
        IDocumentSession session, 
        IMessageContext messaging)
    {
        account.Balance -= command.Amount;
     
        // This just marks the account as changed, but
        // doesn't actually commit changes to the database
        // yet. That actually matters as I hopefully explain
        session.Store(account);
 
        // Conditionally trigger other, cascading messages
        if (account.Balance > 0 && account.Balance < account.MinimumThreshold)
        {
            await messaging.SendAsync(new LowBalanceDetected(account.Id));
        }
        else if (account.Balance < 0)
        {
            await messaging.SendAsync(new AccountOverdrawn(account.Id));
         
            // Give the customer 10 days to deal with the overdrawn account
            await messaging.ScheduleAsync(new EnforceAccountOverdrawnDeadline(account.Id), 10.Days());
        }
    }


To review just a little bit, that Wolverine style message handler at runtime is committing changes to an Account in the underlying database and potentially sending out additional messages based on the state of the Account. If you’re experienced with asynchronous messaging systems and hear me say that Wolverine does not support any kind of two-phase commit between the database and message brokers, you’re probably already concerned about some potential problems in the code above:

  • Maybe the database changes fail, but there are “ghost” messages already queued that pertain to data changes that never actually happened
  • Maybe the messages actually manage to get through to their downstream handlers and are applied erroneously because the related database changes have not yet been applied. That’s a race condition that absolutely happens if you’re not careful (ask me how I know 😦 )
  • Maybe the database changes succeed, but the messages fail to be sent because of a network hiccup or who knows what problem happens with the message broker

Needless to say, there are genuinely a lot of potential problems lurking in that handful of lines of code above. Some of you have probably already said to yourselves that this calls for some sort of transactional outbox — and Wolverine thinks so too!

The general idea of an “outbox” is to work around the lack of true two-phase commits by ensuring that outgoing messages are held until the database transaction succeeds, then guaranteeing that the messages are sent out afterward. In the case of Wolverine and its integration with Marten, the order of operations in the message handler shown above is to:

  1. Tell Marten that the Account document needs to be persisted. Nothing happens at this point other than marking the document as changed
  2. The handler creates messages that are registered with the current IMessageContext. Again, the messages do not actually go out here, instead they are routed by Wolverine to know exactly how and where they should be sent later
  3. The Wolverine + Marten [Transactional] middleware is calling the Marten IDocumentSession.SaveChangesAsync() method that makes the changes to the Account document and also creates new database records to persist any outgoing messages in the underlying Postgresql application database in one single, native database transaction. Even better, with the Marten integration, all the database operations are even happening in one single batched database call for maximum efficiency.
  4. When Marten successfully commits the database transaction, it tells Wolverine to “flush” the outgoing messages to the sending agents in Wolverine (depending on configuration and exact transport type, the messages might be sent “inline” or batched up with other messages to go out later).
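
The ordering above can be boiled down to a small sketch. This is purely illustrative: Wolverine’s real implementation persists the pending messages alongside the entity changes in the same database transaction, while this toy version only models the “hold, commit, then flush” sequencing:

```csharp
using System;
using System.Collections.Generic;

// Toy outbox illustrating the ordering described above -- not
// Wolverine's actual implementation
public class Outbox
{
    private readonly List<object> _pending = new();
    private readonly Action<object> _send;

    public Outbox(Action<object> send) => _send = send;

    // Step 2: outgoing messages are routed and held, never sent directly
    public void Enqueue(object message) => _pending.Add(message);

    // Steps 3 & 4: commit the database work first, then flush messages
    public void CommitAndFlush(Action commitTransaction)
    {
        try
        {
            // In the real integration, entity changes and pending messages
            // are persisted together in one native database transaction
            commitTransaction();
        }
        catch
        {
            // A failed transaction discards the "ghost" messages too
            _pending.Clear();
            throw;
        }

        // Only after a successful commit do messages reach the sending agents
        foreach (var message in _pending) _send(message);
        _pending.Clear();
    }
}
```

Note how a failed transaction means nothing is ever handed to the sender, which is exactly the “ghost message” problem from the bullet list above being designed away.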

To be clear, Wolverine also supports a transactional outbox with EF Core against either Sql Server or Postgresql. I’ll blog and/or document that soon.

The integration with Marten that’s in the WolverineFx.Marten Nuget isn’t that bad (I hope). First off, in my application bootstrapping I chain the IntegrateWithWolverine() call to the standard Marten bootstrapping like this:

using Wolverine.Marten;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
{
    // This would be from your configuration file in typical usage
    opts.Connection(Servers.PostgresConnectionString);
    opts.DatabaseSchemaName = "wolverine_middleware";
})
    // This is the wolverine integration for the outbox/inbox,
    // transactional middleware, saga persistence we don't care about
    // yet
    .IntegrateWithWolverine()
    
    // Just letting Marten build out known database schema elements upfront
    // Helps with Wolverine integration in development
    .ApplyAllDatabaseChangesOnStartup();

For the moment, I’m going to say that all the “cascading messages” from the DebitAccount message handler are being handled by local, in memory queues. At this point — and I’d love to have feedback on the applicability or usability of this approach — each endpoint has to be explicitly enrolled into the durable outbox or inbox (for incoming, listening endpoints) mechanics. Knowing both of those things, I’m going to add a little bit of configuration to make every local queue durable:

builder.Host.UseWolverine(opts =>
{
    // Middleware introduced in previous posts
    opts.Handlers.AddMiddlewareByMessageType(typeof(AccountLookupMiddleware));
    opts.UseFluentValidation();
    
    // The nomenclature might be inconsistent here, but the key
    // point is to make the local queues durable
    opts.Policies
        .AllLocalQueues(x => x.UseDurableInbox());
});

If instead I chose to publish some of the outgoing messages with Rabbit MQ to other processes (or just want the messages queued), I can add the WolverineFx.RabbitMQ Nuget and change the bootstrapping to this:

builder.Host.UseWolverine(opts =>
{
    // Middleware introduced in previous posts
    opts.Handlers.AddMiddlewareByMessageType(typeof(AccountLookupMiddleware));
    opts.UseFluentValidation();

    var rabbitUri = builder.Configuration.GetValue<Uri>("rabbitmq-broker-uri");
    opts.UseRabbitMq(rabbitUri)
        // Just do the routing off of conventions, more or less
        // queue and/or exchange based on the Wolverine message type name
        .UseConventionalRouting()
        .ConfigureSenders(x => x.UseDurableOutbox());
});

I just threw a bunch of details at you all, so let me try to anticipate a couple questions you might have and also try to answer them:

  • Do the messages get delivered before the transaction completes? No, they’re held in memory until the transaction completes, then get sent
  • What happens if the message delivery fails? The Wolverine sending agents run in a hosted service within your application. When message delivery fails, the sending agent will try again, up to a configurable number of times (100 is the default). Read the next question though before the “100” number bugs you:
  • What happens if the whole message broker is down? Wolverine’s sending agents have a crude circuit breaker and will stop trying to send message batches if there are too many failures in a period of time, then resume sending after a periodic “ping” message gets through. Long story short, Wolverine will buffer outgoing messages in the application database until it is able to reach the message broker again.
  • What happens if the application process fails between the transaction succeeding and the message getting to the broker? The message will be recovered and sent by either another active node of the application if running in a cluster, or by restarting the single application process.
  • So you can do this in a cluster without sending the message multiple times? Yep.
  • What if you have zillions of stored messages and you restart the application, will it overwhelm the process and cause harm? It’s paged, distributes a bit between nodes, and there’s some back pressure to keep it from having too many outgoing messages in memory.
  • Can I use Sql Server instead? Yes. But for the moment, it’s like the scene in Blues Brothers when Elwood asks what kinds of music they have and the waitress replies “we have both kinds, Country and Western.”
  • Can I tell Wolverine to throw away a message that’s old and maybe out of date if it still hasn’t been processed? Yes, and I’ll show a bit of that in the next post.
  • What about messages that are routed to a non-durable endpoint as part of an outbox’d transaction? Good question! Wolverine still holds those messages in memory until the message being processed successfully finishes, then kicks them out to the in-memory sending agents. Those sending agents have their own internal queues and retry loops for maximum resiliency. For that matter, Wolverine has a built-in, in-memory outbox to at least handle the ordering between processing a message and actually sending its outgoing messages.

Next Time

WordPress just cut off the last section, so I’ll write a short follow up on mixing in non-durable message queues with message expirations. Next week I’ll continue with this sample application by discussing how Wolverine & its friends try really hard for a “clone n’go” developer workflow where you can be up and running in mere minutes, with all the database & message broker infrastructure going, after a fresh clone of the codebase.

How Wolverine allows for easier testing

Yesterday I blogged about the new Wolverine alpha release with a sample that hopefully showed off how Wolverine’s different approach can lead to better developer productivity and higher performance than similar tools. Today I want to follow up on that by extending the code sample with other functionality, but then diving into how Wolverine (hopefully) makes automated unit or integration testing easier than what you may be used to.

From yesterday’s sample, I showed this small message handler for applying a debit to a bank account from an incoming message:

public static class DebitAccountHandler
{
    [Transactional] 
    public static void Handle(DebitAccount command, Account account, IDocumentSession session)
    {
        account.Balance -= command.Amount;
        session.Store(account);
    }
}

Today let’s extend this to:

  1. Raise an event if the balance gets below a specified threshold for the account
  2. Or raise a different event if the balance goes negative, but also…
  3. Send a second “timeout” message to carry out some kind of enforcement action against the account if it is still negative by then

Here’s the new event and command messages:

public record LowBalanceDetected(Guid AccountId) : IAccountCommand;
public record AccountOverdrawn(Guid AccountId) : IAccountCommand;

// We'll change this in a little bit
public class EnforceAccountOverdrawnDeadline : IAccountCommand
{
    public Guid AccountId { get; }

    public EnforceAccountOverdrawnDeadline(Guid accountId)
    {
        AccountId = accountId;
    }
}

Now, we could extend the message handler to raise the necessary events and the overdrawn enforcement command message like so:

    [Transactional] 
    public static async Task Handle(
        DebitAccount command, 
        Account account, 
        IDocumentSession session, 
        IMessageContext messaging)
    {
        account.Balance -= command.Amount;
        
        // This just marks the account as changed, but
        // doesn't actually commit changes to the database
        // yet. That actually matters as I hopefully explain
        session.Store(account);

        if (account.Balance > 0 && account.Balance < account.MinimumThreshold)
        {
            await messaging.SendAsync(new LowBalanceDetected(account.Id));
        }
        else if (account.Balance < 0)
        {
            await messaging.SendAsync(new AccountOverdrawn(account.Id));
            
            // Give the customer 10 days to deal with the overdrawn account
            await messaging.ScheduleAsync(new EnforceAccountOverdrawnDeadline(account.Id), 10.Days());
        }
    }

And just to add a little more context, here’s part of what the message handler for the EnforceAccountOverdrawnDeadline could look like:

    public static void Handle(EnforceAccountOverdrawnDeadline command, Account account)
    {
        // Don't do anything if the balance has been corrected
        if (account.Balance >= 0) return;
        
        // Dunno, send in the goons? Report them to a credit agency? Guessing
        // nothing pleasant happens here
    }

Alrighty then, back to the new version of the message handler that raises extra event messages depending on the state of the account. You’ll notice that I used method injection to pass in the Wolverine IMessageContext for the current message being handled. That gives me the ability to spawn additional messages and even schedule the execution of a command for a later time. Notice too that I had to make the handler method asynchronous since the various SendAsync() calls return ValueTask, so it’s a little uglier now. Don’t worry, we’re going to come back to that, so don’t settle for this version quite yet.

I’m going to leave this for the next post, but if you’re experienced with asynchronous messaging you’re screaming that there’s a potential race condition or risk of phantom data or messages between the extra messages going out and the Account being committed. Tomorrow I’ll discuss how Wolverine’s transactional outbox support removes those very real, very common problems in asynchronous message processing.

So let’s jump into what a unit test could look like for the message handler for the DebitAccount method. To start with, I’ll use Wolverine’s built in TestMessageContext to act as a “spy” on the method. A couple tests might look like this using my typical testing stack of xUnit.Net, Shouldly, and NSubstitute:

public class when_the_account_is_overdrawn : IAsyncLifetime
{
    private readonly Account theAccount = new Account
    {
        Balance = 1000,
        MinimumThreshold = 100,
        Id = Guid.NewGuid()
    };

    private readonly TestMessageContext theContext = new TestMessageContext();
    
    // I happen to like NSubstitute for mocking or dynamic stubs
    private readonly IDocumentSession theDocumentSession = Substitute.For<IDocumentSession>();
    


    public async Task InitializeAsync()
    {
        var command = new DebitAccount(theAccount.Id, 1200);
        await DebitAccountHandler.Handle(command, theAccount, theDocumentSession, theContext);
    }

    [Fact]
    public void the_account_balance_should_be_negative()
    {
        theAccount.Balance.ShouldBe(-200);
    }

    [Fact]
    public void raises_an_account_overdrawn_message()
    {
        // ShouldHaveMessageOfType() is an extension method in 
        // Wolverine itself to facilitate unit testing assertions like this
        theContext.Sent.ShouldHaveMessageOfType<AccountOverdrawn>()
            .AccountId.ShouldBe(theAccount.Id);
    }

    [Fact]
    public void raises_an_overdrawn_deadline_message_in_10_days()
    {
        var scheduledTime  = theContext.ScheduledMessages()
            // Also an extension method in Wolverine for testing
            .ShouldHaveEnvelopeForMessageType<EnforceAccountOverdrawnDeadline>()
            .ScheduledTime;
        
        // Um, do something to verify that the scheduled time is 10 days from this moment
        // and also:
        //  https://github.com/JasperFx/wolverine/issues/110
    }

    public Task DisposeAsync()
    {
        return Task.CompletedTask;
    }
}
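For the scheduled-time assertion left open above, one possible approach is to compare against “now” with a tolerance, assuming Shouldly’s ShouldBe(expected, tolerance) overload for dates (treat this as a sketch, not gospel):

```
    [Fact]
    public void raises_an_overdrawn_deadline_message_in_10_days()
    {
        var scheduledTime = theContext.ScheduledMessages()
            .ShouldHaveEnvelopeForMessageType<EnforceAccountOverdrawnDeadline>()
            .ScheduledTime;

        // Comparing to "now" with a tolerance sidesteps the clock
        // problem well enough for a unit test
        scheduledTime.HasValue.ShouldBeTrue();
        scheduledTime!.Value.ShouldBe(
            DateTimeOffset.UtcNow.AddDays(10),
            tolerance: TimeSpan.FromMinutes(1));
    }
```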

It’s not horrendous, and I’ve seen much, much worse in real life code. All the same though, let’s aim for code that’s easier to test by removing more infrastructure concerns and getting to purely synchronous code. To get there, I’m first going to start with the EnforceAccountOverdrawnDeadline message type and change it slightly to this:

// I'm hard coding the delay time for execution, just
// go with that for now please:)
public record EnforceAccountOverdrawnDeadline(Guid AccountId) : TimeoutMessage(10.Days()), IAccountCommand;

And now back to the Handle(DebitAccount) handler, where we’ll use Wolverine’s concept of cascading messages to simplify the handler and make it completely synchronous:

    [Transactional] 
    public static IEnumerable<object> Handle(
        DebitAccount command, 
        Account account, 
        IDocumentSession session)
    {
        account.Balance -= command.Amount;
        
        // This just marks the account as changed, but
        // doesn't actually commit changes to the database
        // yet. That actually matters as I hopefully explain
        session.Store(account);

        if (account.Balance > 0 && account.Balance < account.MinimumThreshold)
        {
            yield return new LowBalanceDetected(account.Id);
        }
        else if (account.Balance < 0)
        {
            yield return new AccountOverdrawn(account.Id);
            
            // Give the customer 10 days to deal with the overdrawn account
            yield return new EnforceAccountOverdrawnDeadline(account.Id);
        }
    }

Now, we’re able to mostly use state-based testing, eliminate the fake IMessageContext, and work with strictly synchronous code. Here’s the new version of the test class from before:

public class when_the_account_is_overdrawn 
{
    private readonly Account theAccount = new Account
    {
        Balance = 1000,
        MinimumThreshold = 100,
        Id = Guid.NewGuid()
    };

    
    // I happen to like NSubstitute for mocking or dynamic stubs
    private readonly IDocumentSession theDocumentSession = Substitute.For<IDocumentSession>();
    private readonly object[] theOutboundMessages;

    public when_the_account_is_overdrawn()
    {
        var command = new DebitAccount(theAccount.Id, 1200);
        theOutboundMessages = DebitAccountHandler.Handle(command, theAccount, theDocumentSession)
            .ToArray();
    }

    [Fact]
    public void the_account_balance_should_be_negative()
    {
        theAccount.Balance.ShouldBe(-200);
    }

    [Fact]
    public void raises_an_account_overdrawn_message()
    {
        // ShouldHaveMessageOfType() is an extension method in 
        // Wolverine itself to facilitate unit testing assertions like this
        theOutboundMessages.ShouldHaveMessageOfType<AccountOverdrawn>()
            .AccountId.ShouldBe(theAccount.Id);
    }

    [Fact]
    public void raises_an_overdrawn_deadline_message_in_10_days()
    {
        // Also an extension method in Wolverine for testing.
        // With TimeoutMessage baking in the 10 day delay, all this
        // test needs to verify is that the message was raised at all
        theOutboundMessages
            .ShouldHaveEnvelopeForMessageType<EnforceAccountOverdrawnDeadline>();
    }

    [Fact]
    public void should_not_raise_account_balance_low_event()
    {
        theOutboundMessages.ShouldHaveNoMessageOfType<LowBalanceDetected>();
    }
}

The second version of both the handler method and the accompanying unit test is arguably simpler because:

  1. We were able to make the handler method synchronous, which removes some boilerplate code and is especially helpful with xUnit.Net because it lets us eschew the IAsyncLifetime mechanics.
  2. Except for verifying that the account data was stored, all of the unit test code now uses state-based testing, which is generally easier to understand and write than interaction-based tests that necessarily depend on mock objects.

Wolverine in general also made the handler method easier to test through the middleware I introduced in my previous post that “pushes” in the Account data to the handler method instead of making you jump through data access code and potential mock/stub object setup to inject the data inputs.

At the end of the day, I think that Wolverine not only does quite a bit to simplify your actual application code by isolating business functionality away from infrastructure, it also leads to more easily testable code for effective Test Driven Development.

But what about……….?

I meant to also show Wolverine’s built in integration testing support, but to be honest, I’m about to meet a friend for lunch and I’ve gotta wrap this up in the next 10 minutes. In subsequent posts I’m going to stick with this example and extend that into integration testing across the original message and into the cascading messages. I’ll also get into the very important details about Wolverine’s transactional outbox support.

Introducing Wolverine for Effective Server Side .NET Development

TL;DR — Wolverine’s runtime model is significantly different than other tools with similar functionality in the .NET world in a way that leads to simpler application code and more efficient runtime execution.

I was able to push a new version of Wolverine today based on the newly streamlined API worked out in this GitHub issue. Big thanks to Oskar, Eric, and Blake for their help in coming to what I feel turned out to be a great improvement in usability — even though I took some convincing to get there. Also some huge thanks to Babu for the website scaffolding and publishing, and to Khalid for all his graphics help and general encouragement.

The Wolverine docs — such as they are — are up on the Wolverine website.

In a nutshell, Wolverine is a mediator and message bus tool. There’s plenty of those tools already in the .NET space, so let me drop right into how Wolverine’s execution pipeline is genuinely unique and potentially does much more than older tools to improve developer productivity.

I’m going to build a very crude banking service that includes a message endpoint that will need to:

  1. Accept a message to apply a debit to a given account
  2. Verify that the debit amount is non-zero before you do anything else
  3. Load the information for the designated account from the database
  4. Apply the debit to the current balance of the account
  5. Persist the changes in balance back to the database

I’ll introduce “cascaded” messages tomorrow for business rules like an account reaching a low balance or being overdrawn, but I’m ignoring that today to keep this a smaller post.

While Wolverine supports EF Core and SQL Server as well, I unsurprisingly want to use Marten as a lower ceremony approach to application persistence in this particular case.

Before I even try to write the message handler, let me skip a couple design steps and say that I’m going to utilize three different sets of middleware to deal with cross cutting concerns:

  1. I’m going to use Wolverine’s built in Fluent Validation middleware to apply any known validation rules for the incoming messages. I’m honestly not sure I’d use this in real life, but this was built out and demo’d today as a way to demonstrate what’s “special” about Wolverine’s runtime architecture.
  2. Wolverine’s transactional & outbox middleware (duh)
  3. Custom middleware to load and push account data related to the incoming message to the message handler, or log and abort the message processing when the account data referenced in the message does not exist — and this more than anything is where the example code will show off Wolverine’s different approach. This example came directly from a common use case in a huge system at my work that uses NServiceBus.

Keeping in mind that we’ll be using some Wolverine middleware in a second, here’s the simple message handler to implement exactly the numbered list above:

public static class DebitAccountHandler
{
    // This explicitly adds the transactional middleware
    // The Fluent Validation middleware is applied because there's a validator
    // The Account argument is passed in by the AccountLookupMiddleware middleware
    [Transactional] // This could be done w/ a policy, but I'm opting to do this explicitly here
    public static void Handle(
        // The actual command message
        DebitAccount command, 
        
        // The current data for the account stored in the database
        // This will be "pushed" in by middleware
        Account account, 
        
        // The Marten document session service scoped to the 
        // current message being handled. 
        // Wolverine supports method injection similar to ASP.NET minimal api
        IDocumentSession session)
    {
        // decrement the balance
        account.Balance -= command.Amount;
        
        // Just telling Marten that this account document changed
        // so that it can be persisted by the middleware
        session.Store(account);
    }
}

I would argue that that handler method is very easy to understand. By pulling so many infrastructure concerns out of the handler, a developer is able to mostly focus on business logic in isolation, even without having to introduce all the baggage of some sort of hexagonal architecture style. Moreover, using Wolverine’s middleware allowed me to write purely synchronous code, which also reduces the code noise. Finally, by being able to “push” the business entity state into the method, I’m much better able to quickly write unit tests for the code and do TDD as I work.

Every message processing tool has middleware strategies for validation or transaction handling, but let’s take the example of the account data instead. When reviewing a very large system at work that uses NServiceBus, I noticed a common pattern of needing to load an entity from the database related to the incoming message and aborting the message handling if the entity does not exist. It’s an obvious opportunity for using middleware to eliminate the duplicated code.

First off, we need some way to “know” what the account id is for the incoming message. In this case I chose to use a marker interface just because that’s easy:

public interface IAccountCommand
{
    Guid AccountId { get; }
}

And the DebitAccount command becomes:

public record DebitAccount(Guid AccountId, decimal Amount) : IAccountCommand;

The actual middleware implementation is this:

// This is *a* way to build middleware in Wolverine by basically just
// writing functions/methods. There's a naming convention that
// looks for Before/BeforeAsync or After/AfterAsync
public static class AccountLookupMiddleware
{
    // The message *has* to be first in the parameter list
    // Before or BeforeAsync tells Wolverine this method should be called before the actual action
    public static async Task<(HandlerContinuation, Account?)> BeforeAsync(
        IAccountCommand command, 
        ILogger logger, 
        IDocumentSession session, 
        CancellationToken cancellation)
    {
        var account = await session.LoadAsync<Account>(command.AccountId, cancellation);
        if (account == null)
        {
            logger.LogInformation("Unable to find an account for {AccountId}, aborting the requested operation", command.AccountId);
        }
        
        return (account == null ? HandlerContinuation.Stop : HandlerContinuation.Continue, account);
    }
}

There’s also a Fluent Validation validator for the command as well (again, not sure I’d actually do it this way myself, but it shows off Wolverine’s middleware capabilities):

public class DebitAccountValidator : AbstractValidator<DebitAccount>
{
    public DebitAccountValidator()
    {
        RuleFor(x => x.Amount).GreaterThan(0);
    }
}

Stepping back to the actual application, I first added the WolverineFx.Marten NuGet reference to a brand new ASP.Net Core web api application, and made the following Program file to bootstrap the application:

using AppWithMiddleware;
using IntegrationTests;
using Marten;
using Oakton;
using Wolverine;
using Wolverine.FluentValidation;
using Wolverine.Marten;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
{
    // This would be from your configuration file in typical usage
    opts.Connection(Servers.PostgresConnectionString);
    opts.DatabaseSchemaName = "wolverine_middleware";
}).IntegrateWithWolverine()
    // Just letting Marten build out known database schema elements upfront
    .ApplyAllDatabaseChangesOnStartup();

builder.Host.UseWolverine(opts =>
{
    // Custom middleware to load and pass account data into message
    // handlers
    opts.Handlers.AddMiddlewareByMessageType(typeof(AccountLookupMiddleware));

    // This will register all the Fluent Validation validators, and
    // apply validation middleware where the command type has
    // a validator
    opts.UseFluentValidation();
});

var app = builder.Build();

// One Minimal API that just delegates directly to Wolverine
app.MapPost("/accounts/debit", (DebitAccount command, IMessageBus bus) => bus.InvokeAsync(command));

return await app.RunOaktonCommands(args);

After all of those pieces are put together, let’s finally talk about how Wolverine’s runtime execution is really different. Wolverine’s “special sauce” is that instead of forcing you to write your code wrapped around the framework, Wolverine conforms to your application code by generating code at runtime (don’t worry, it can be done ahead of time as well to minimize cold start time).

For example, here’s the runtime code that’s generated for the DebitAccountHandler.Handle() method:

// <auto-generated/>
#pragma warning disable
using FluentValidation;
using Microsoft.Extensions.Logging;
using Wolverine.FluentValidation;
using Wolverine.Marten.Publishing;

namespace Internal.Generated.WolverineHandlers
{
    // START: DebitAccountHandler1928499868
    public class DebitAccountHandler1928499868 : Wolverine.Runtime.Handlers.MessageHandler
    {
        private readonly FluentValidation.IValidator<AppWithMiddleware.DebitAccount> _validator;
        private readonly Wolverine.Marten.Publishing.OutboxedSessionFactory _outboxedSessionFactory;
        private readonly Wolverine.FluentValidation.IFailureAction<AppWithMiddleware.DebitAccount> _failureAction;
        private readonly Microsoft.Extensions.Logging.ILogger<AppWithMiddleware.DebitAccount> _logger;

        public DebitAccountHandler1928499868(FluentValidation.IValidator<AppWithMiddleware.DebitAccount> validator, Wolverine.Marten.Publishing.OutboxedSessionFactory outboxedSessionFactory, Wolverine.FluentValidation.IFailureAction<AppWithMiddleware.DebitAccount> failureAction, Microsoft.Extensions.Logging.ILogger<AppWithMiddleware.DebitAccount> logger)
        {
            _validator = validator;
            _outboxedSessionFactory = outboxedSessionFactory;
            _failureAction = failureAction;
            _logger = logger;
        }



        public override async System.Threading.Tasks.Task HandleAsync(Wolverine.Runtime.MessageContext context, System.Threading.CancellationToken cancellation)
        {
            await using var documentSession = _outboxedSessionFactory.OpenSession(context);
            var debitAccount = (AppWithMiddleware.DebitAccount)context.Envelope.Message;
            (var handlerContinuation, var account) = await AppWithMiddleware.AccountLookupMiddleware.BeforeAsync((AppWithMiddleware.IAccountCommand)context.Envelope.Message, ((Microsoft.Extensions.Logging.ILogger)_logger), documentSession, cancellation).ConfigureAwait(false);
            if (handlerContinuation == Wolverine.HandlerContinuation.Stop) return;
            Wolverine.FluentValidation.Internals.FluentValidationExecutor.ExecuteOne<AppWithMiddleware.DebitAccount>(_validator, _failureAction, debitAccount);
            AppWithMiddleware.DebitAccountHandler.Handle(debitAccount, account, documentSession);

            // Commit the unit of work
            await documentSession.SaveChangesAsync(cancellation).ConfigureAwait(false);
        }

    }

    // END: DebitAccountHandler1928499868


}

It’s auto-generated code, so it’s admittedly ugly as sin, but there are some things I’d like you to notice:

  • This class is only instantiated one single time in the application and held in memory for the rest of the application instance’s lifecycle
  • That handler code is managing service lifecycle and service disposal, yet there’s no IoC tool anywhere in sight
  • The code only has to create and resolve the Fluent Validation validator one single time, and access to it is inlined the rest of the way
  • The code up above minimizes the number of objects allocated per message being handled compared to other tools that utilize some kind of Russian Doll middleware model by dodging the need for framework adapter objects that are created and destroyed for each message execution. Less garbage collector thrashing from fewer object allocations means better performance and scalability
  • The Fluent Validation middleware isn’t even applied to message types that don’t have any known validators, so Wolverine can optimize itself. Contrast that with Fluent Validation middleware strategies in something like MediatR, where a decorator object is created for each request and an empty enumerable is looped through even when there are no validators for a given message type. That’s not a lot of overhead per se, but it adds up when there’s a lot of framework cruft.
  • You might have to take my word for it, but having built other frameworks and having spent a long time poring over the internals of other similar frameworks, Wolverine is going to do a lot fewer object allocations, indirections, and dictionary lookups at runtime than other tools with similar capabilities

I’m following this up immediately tomorrow by adding some “cascading” messages and diving into the built in testing support within Wolverine.

Wolverine on DotNetRocks

The fine folks at DotNetRocks graciously allowed me to come on and talk about Wolverine and its combination with Marten into the new “Critter Stack” for highly productive server side development in .NET.

Wolverine is a new framework (but based on previous tools dating back over a decade) for server side .NET development that acts as both an in-process mediator tool and message bus for asynchronous messaging between processes. In an admittedly crowded field, Wolverine stands apart from older tools in a couple important ways:

  • While Wolverine will happily take care of infrastructure concerns like error handling, logging, distributed tracing with OpenTelemetry, serialization, performance metrics, and interacting directly with message brokers, Wolverine does a much better job of keeping out of your application code
  • Wolverine’s runtime model — including its robust middleware strategy — completely bypasses the performance problems the older tools incur
  • Application code testability is a first class goal with Wolverine, and it shows. Unit testing is certainly easier with Wolverine keeping more infrastructure concerns out of your code, while also adding some unique test automation support for integration testing
  • Developer productivity is enhanced by baking infrastructure setup and configuration directly into Wolverine. And because some of Wolverine’s productivity boost admittedly comes from coding convention magic, Wolverine can tell you exactly what it’s going to do at runtime through its built in diagnostics.

The usage is already going to change next week based on a lot of early user feedback, but you can easily get the gist of what Wolverine is like in usage through the JetBrains webinar on Wolverine a couple weeks back:

Alba for Effective ASP.Net Core Integration Testing

Alba is a small library that enables easy integration testing of ASP.Net Core routes completely in process within an NUnit/xUnit.Net/MSTest project. Alba 7.1 just dropped today with .NET 7 support, improved JSON handling for Minimal API endpoints, and multipart form support.

Quickstart with Minimal API

Keeping things almost absurdly simple, let’s say that you have a Minimal API route (taken from the Alba tests) like so:

app.MapPost("/go", (PostedMessage input) => new OutputMessage(input.Id));

Now, over in your testing project, you could write a crude test for the route above like so:

    [Fact]
    public async Task sample_test()
    {
        // This line only matters if you use Oakton for the command line
        // processing
        OaktonEnvironment.AutoStartHost = true;
        
        // I'm doing this inline to make the sample easier to understand,
        // but you'd want to share the AlbaHost between tests because
        // this is expensive
        await using var host = await AlbaHost.For<MinimalApiWithOakton.Program>();
        
        var guid = Guid.NewGuid();
        
        var result = await host.PostJson(new PostedMessage(guid), "/go")
            .Receive<OutputMessage>();

        result.Id.ShouldBe(guid);
    }

A couple notes about the code above:

  • The test is bootstrapping your actual application using its configuration, but using the TestServer in place of Kestrel as the web server.
  • The call to PostJson() is using the application’s JSON serialization configuration, just in case you’ve customized the JSON serialization. Likewise, the call to Receive<T>() is also using the application’s JSON serialization mechanism to be consistent. This functionality was improved in Alba 7 to “know” whether to use MVC Core or Minimal API style JSON serialization (but you can explicitly override that in mixed applications on a case by case basis)
  • When the test executes, it’s running through your entire application’s ASP.Net Core pipeline including any and all registered middleware

If you choose to use Alba with >= .NET 6 style application bootstrapping inside of an inferred Program.Main() method, be aware that you will need to grant your test project visibility to the internals of your main project with something like this:

  <ItemGroup>
    <InternalsVisibleTo Include="ProjectName.Tests" />
  </ItemGroup>
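Since bootstrapping the AlbaHost is expensive, here’s a sketch of sharing one host across a test class with an xUnit class fixture. The fixture type and its names here are mine, not part of Alba:

```
// A shared fixture so the application is only bootstrapped once
// for the whole test class
public class WebAppFixture : IAsyncLifetime
{
    public IAlbaHost Host { get; private set; } = null!;

    public async Task InitializeAsync()
    {
        Host = await AlbaHost.For<MinimalApiWithOakton.Program>();
    }

    public async Task DisposeAsync()
    {
        await Host.DisposeAsync();
    }
}

public class EndToEndTests : IClassFixture<WebAppFixture>
{
    private readonly IAlbaHost _host;

    public EndToEndTests(WebAppFixture fixture)
    {
        _host = fixture.Host;
    }

    [Fact]
    public async Task post_and_receive_json()
    {
        var guid = Guid.NewGuid();

        // Same round trip as the quickstart, but reusing the shared host
        var result = await _host.PostJson(new PostedMessage(guid), "/go")
            .Receive<OutputMessage>();

        result.Id.ShouldBe(guid);
    }
}
```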

How does Alba fit into projects?

I think most people by now are somewhat familiar with the testing pyramid idea (or testing trophy or any other number of shapes). Just to review, it’s the idea that a software system is best served by being backed by a mix of automated tests between solitary unit tests, intermediate integration tests, and some number of end to end, black box tests.

We can debate what the exact composition of your test pyramid should be on a particular project until the cows come home. For my part, I want more fast running, easier to write tests and fewer potentially nasty Selenium/Playwright/Cypress.io tests that tend toward being slow and brittle. I like Alba in particular because it allows our teams at work to test at the HTTP web service layer through to the database completely in process — meaning the tests can be executed on demand without any kind of deployment. In short, Alba sits in the middle of that pyramid and makes those very valuable kinds of tests easier to write, execute, and debug for the developers working on the system.

Using Context/Specification to better express complicated tests

I’m trying to help one of our teams at work that constantly modifies a very large, very complex, 12-15 year old managed workflow system. Like many shops, we’re working to improve our testing practices, and our developers are pretty diligent about adding tests for new code.

Great, but the next step in my opinion is to adopt some different approaches for structuring the code to make unit testing easier and lead toward smaller, more focused unit tests when possible (see my post on a Real Life TDD Example for some of my thinking on that).

All that being said, it’s a very complicated system with data elements out the wazoo that coordinates work across a bevy of internal and external services. Sometimes there’s a single operation that necessarily does a lot of things in one unit of work (almost inevitably an NServiceBus message handler in this case) like:

  • Changing state in business entities based on incoming commands — and the system will frequently change more than one entity at a time
  • Sending out additional command or event messages based on the inputs and existing state of the system

To deal with the complexity of testing these kinds of message handlers, I’m suggesting that we dust off the old BDD-ish “Context/Specification” style of tests. If you think of automated tests generally following some sort of arrange/act/assertion structure, the Context/Specification style in an OO language is going to follow this structure:

  1. A class with a name that describes the scenario being tested
  2. A single scenario set up that performs both the “arrange” and “act” parts of the logical test group
  3. Multiple, granular tests with descriptive names that make a single, logical assertion against the expectations of the desired behavior

Jumping into a simple example, here’s a test class from the built in Open Telemetry instrumentation in Wolverine:

public class when_creating_an_execution_activity
{
    private readonly Activity theActivity;
    private readonly Envelope theEnvelope;

    public when_creating_an_execution_activity()
    {
        // In BDD terms....
        // Given a message envelope
        // When creating a new Otel activity for processing a message
        // Then the activity uses the envelope conversation id as the otel messaging conversation id
        // And [a bunch of other things]
        theEnvelope = ObjectMother.Envelope();
        theEnvelope.ConversationId = Guid.NewGuid();

        theEnvelope.MessageType = "FooMessage";
        theEnvelope.CorrelationId = Guid.NewGuid().ToString();
        theEnvelope.Destination = new Uri("tcp://localhost:6666");

        theActivity = new Activity("process");
        theEnvelope.WriteTags(theActivity);
    }

    [Fact]
    public void tags_the_message_id()
    {
        theActivity.GetTagItem(WolverineTracing.MessagingMessageId)
            .ShouldBe(theEnvelope.Id);
    }

    [Fact]
    public void sets_the_message_system_to_destination_uri_scheme()
    {
        theActivity.GetTagItem(WolverineTracing.MessagingSystem)
            .ShouldBe("tcp");
    }

    [Fact]
    public void sets_the_message_type_name()
    {
        theActivity.GetTagItem(WolverineTracing.MessageType)
            .ShouldBe(theEnvelope.MessageType);
    }

    [Fact]
    public void the_destination_should_be_the_envelope_destination()
    {
        theActivity.GetTagItem(WolverineTracing.MessagingDestination)
            .ShouldBe(theEnvelope.Destination);
    }

    [Fact]
    public void should_set_the_payload_size_bytes_when_it_exists()
    {
        theActivity.GetTagItem(WolverineTracing.PayloadSizeBytes)
            .ShouldBe(theEnvelope.Data!.Length);
    }

    [Fact]
    public void trace_the_conversation_id()
    {
        theActivity.GetTagItem(WolverineTracing.MessagingConversationId)
            .ShouldBe(theEnvelope.ConversationId);
    }
}

In the case above, the constructor does the “arrange” and “act” parts for the group of tests, while each individual [Fact] method makes a single logical assertion about the expected outcomes.

Here are some takeaways from this style and when and where it might be useful:

  • It’s long been a truism that unit tests should make a single logical assertion. That’s just a rule of thumb, but I still find it useful for keeping tests readable and “digestible”
  • With this style, I find it easier to work on one assertion at a time in a red/green/refactor cycle than to specify all the related assertions in one bigger test
  • Arguably, that style can at least sometimes do a much better job of making the tests act as useful documentation about how the system should behave than more monolithic tests
  • This style doesn’t require specialized Gherkin-style tools, but at some point, when you’re dealing with data-intensive tests, a Gherkin-based tool becomes much more attractive
  • This style is verbose, and it’s not my default test structure for everything by any means

For grouping, you might structure these tests like this:

// Some people like to use an outer class to group the tests
// in IDE test runners. It's not necessary, but it can be
// advantageous
public class SomeHandlerSpecs
{
    // A single scenario
    public class when_some_description_of_the_specific_scenario1
    {
        public when_some_description_of_the_specific_scenario1()
        {
            // shared context setup
            // the logical "arrange" and "act"
        }

        [Fact]
        public void then_some_kind_of_descriptive_name_for_a_single_logical_assertion()
        {
            // do an assertion
        }
        
        [Fact]
        public void then_some_kind_of_descriptive_name_for_a_single_logical_assertion_2()
        {
            // do an assertion
        }
    }
    
    // A second scenario
    public class when_some_description_of_the_second_scenario1
    {
        public when_some_description_of_the_second_scenario1()
        {
            // shared context setup
            // the logical "arrange" and "act"
        }

        [Fact]
        public void then_some_kind_of_descriptive_name_for_a_single_logical_assertion()
        {
            // do an assertion
        }
        
        [Fact]
        public void then_some_kind_of_descriptive_name_for_a_single_logical_assertion_2()
        {
            // do an assertion
        }
    }
}

Admittedly, I frequently end up doing quite a bit of copy/paste between different scenarios when I use this style. I’m going to say that’s mostly okay because test code should be optimized for readability rather than for eliminating duplication as we would in production code (see the discussion about DAMP vs DRY in this post for more context).

To be honest, I couldn’t remember what this style of test was even called until I spent some time googling for better examples today. I remember it being a major topic of discussion in the late 00’s, but not really since. I think it’s a shame that Behavior Driven Development (BDD) became so synonymous with Cucumber tooling, because there was definitely some very useful thinking going on with BDD approaches. Of course, there were also way too many “how many angels can dance on the head of a pin” arguments.

Here’s an old talk from Philip Japikse that’s the best resource I could find this morning on this idea.

Marten and Friends’ (Hopefully) Big Future!

Marten was conceived and launched way back in 2016 as an attempt to quickly improve the performance and stability of a mission critical web application by utilizing Postgresql and its new JSON capabilities as a replacement for a 3rd party document database – and do that in a hurry before the next busy season. My former colleagues and I did succeed in that endeavor, but more importantly for the longer run, Marten was also launched as an open source project on GitHub and quickly attracted attention from other developers. The addition of an originally small feature set for event sourcing dramatically increased interest and participation in Marten. 

Fast forward to today, and we have a vibrant community of engaged users and a core team of contributors that are constantly improving the tool and discussing ideas about how to make it even better. The giant V4 release last year brought an overhaul of almost all the library internals and plenty of new capabilities. V5 followed early in 2022 with more multi-tenancy options and better tooling for development lifecycles and database management based on early issues with V4. 

At this point, I’d list the strong points of Marten that we’ve already achieved as:

  • A very useful document database option that provides the powerful developer productivity you expect from NoSQL solutions while also supporting a strong consistency model that’s usually missing from NoSQL databases. 
  • A wide range of viable hosting options by virtue of being on top of Postgresql. No cloud vendor lock-in with Marten!
  • Quite possibly the easiest way to build an application using Event Sourcing in .NET with both event storage and user defined view projections in the box
  • A great local development story through the simple ability to run Postgresql in a Docker container and Marten’s focus on an “it just works” style database schema management subsystem
  • The aforementioned core team and active user base make Marten a viable choice for teams wanting some reassurance that the tool is going to be well supported in the future

Great! But now it’s time to talk about the next steps we’re planning to take Marten to even greater heights, starting with the forthcoming Marten V6 that’s being planned now. The overarching theme is to remove the most common objections to choosing Marten. By and large, I think the biggest themes for Marten are:

  1. Scalability, so Marten can be used for much larger data sets. From user feedback, Marten is able to handle data sets of 10 million events today, but there are opportunities to go far, far larger than that.
  2. Improvements to operational support: database migrations when documents change, rebuilding projections without downtime, usage metrics, and better support for using multiple databases for multi-tenancy
  3. Marten is in good shape as a pure storage option for Event Sourcing, but users very often ask for an array of subscription options to propagate events captured by Marten
  4. More powerful options for aggregating event data into more complex projected views
  5. Improving the Linq and other querying support is a seemingly never-ending battle
  6. Addressing the lack of professional support for Marten. Obviously a lot of shops and teams are perfectly comfortable using FOSS tools knowing that they may have to roll up their sleeves and pitch in with support, but other shops are not comfortable with that at all and will not allow FOSS usage for critical functions. More on this later.

First though, Marten is getting a new “critter” friend in the larger JasperFx project family:

Wolverine is a new/old OSS command bus and messaging tool for .NET. It’s what was formerly being developed as Jasper, but the Marten team decided to rebrand the tool as a natural partner with Marten (both animals, plus Weasel, are members of the Mustelidae family). While both Marten and Wolverine are happily usable without each other, we think that integrating these tools gives us the opportunity to build a full-fledged platform for building applications in .NET using a CQRS architecture with Event Sourcing. Moreover, we think there’s a significant gap in .NET for this kind of tooling, and we hope to fill it.

So, onto future plans…

There are a couple of immediate ways we’re planning to improve the scalability of Marten in V6. The first idea is to utilize Postgresql table sharding in a couple of different ways.

First, we can enable sharding on document tables based on user-defined criteria through Marten configuration. The big challenge there is providing a good migration strategy, as the change requires at least a three-step process of copying the existing table data off to the side, creating the new sharded tables, and then copying the data back in.
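To make the shape of that migration concrete, here’s a rough sketch using Postgresql’s declarative partitioning. The table name, column set, and hash-on-tenant criterion are all hypothetical; Marten’s actual document tables and the user-defined sharding criteria will differ:

```sql
-- 1. Move the existing document table aside
ALTER TABLE mt_doc_order RENAME TO mt_doc_order_old;

-- 2. Recreate it as a partitioned table
--    (hashing on tenant_id is just an example criterion;
--    note the partition key must be part of the primary key)
CREATE TABLE mt_doc_order (
    id        uuid    NOT NULL,
    data      jsonb   NOT NULL,
    tenant_id varchar NOT NULL,
    PRIMARY KEY (tenant_id, id)
) PARTITION BY HASH (tenant_id);

CREATE TABLE mt_doc_order_0 PARTITION OF mt_doc_order
    FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE mt_doc_order_1 PARTITION OF mt_doc_order
    FOR VALUES WITH (MODULUS 2, REMAINDER 1);

-- 3. Copy the data back and clean up
INSERT INTO mt_doc_order SELECT id, data, tenant_id FROM mt_doc_order_old;
DROP TABLE mt_doc_order_old;
```

The copy-aside step is what makes this migration awkward to automate safely, since it has to happen while no writers are touching the table.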

The next idea is to shard the event storage tables as well, with the immediate plan being to shard on archived status to effectively create a “hot” storage of recent events and a “cold” storage of older events that are much less frequently accessed. This would allow Marten users to keep the active “hot” event storage much smaller, and therefore greatly improve performance even as the database continues to grow.
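As a sketch of how that hot/cold split could fall out of Postgresql’s native list partitioning — the column names here are illustrative rather than Marten’s actual event schema:

```sql
-- Illustrative event table partitioned on archived status
CREATE TABLE mt_events (
    seq_id      bigint  NOT NULL,
    stream_id   uuid    NOT NULL,
    data        jsonb   NOT NULL,
    is_archived boolean NOT NULL DEFAULT FALSE
) PARTITION BY LIST (is_archived);

-- "Hot" partition holds the active, recent events
CREATE TABLE mt_events_active PARTITION OF mt_events
    FOR VALUES IN (FALSE);

-- "Cold" partition holds archived events that are rarely queried
CREATE TABLE mt_events_archived PARTITION OF mt_events
    FOR VALUES IN (TRUE);

-- Since Postgresql 11, flipping the flag moves the row from the
-- hot partition to the cold one automatically
UPDATE mt_events SET is_archived = TRUE WHERE stream_id = $1;
```

Queries that filter on `is_archived = FALSE` would then only ever scan the small hot partition, no matter how large the archive grows.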

We’re not done with “sharding” yet, but this time we need to shift to the asynchronous projection support in Marten. The core team has some ideas to improve the throughput of the asynchronous projection code as it stands, but today it’s limited to running on a single application node with “hot/cold” rollover support. With some help from Wolverine, we’re hoping to build a “sharded” asynchronous projection model that can split the processing of a single projection and distribute the work across potentially many nodes, as shown in the following diagram:

The asynchronous projection sharding is going to be a big deal for Marten all by itself, but there are some other potentially big wins for Marten V6 with better tooling for projection rebuilds and asynchronous projections in general:

  1. Some kind of user interface to monitor and manage the asynchronous projections
  2. Faster projection rebuilds
  3. Zero downtime projection rebuilds

Marten + Wolverine == “Critter Stack” 

Again, both Marten and Wolverine will be completely usable independently, but we think there’s some potential synergy in the combination. One of the potential advantages of combining the tools is using Wolverine’s messaging to give Marten a full-fledged subscription model for Marten events. All told, we’re planning three different mechanisms for propagating Marten events to the rest of your system:

  1. Through Wolverine’s transactional outbox right at the point of event capture when you care more about immediate delivery than strict ordering (this is already working)
  2. Through Marten’s asynchronous daemon when you do need strict ordering
  3. If this works out, through CDC (change data capture) event streaming straight from the database to Kafka/Pulsar/Kinesis

That brings me to the last topic I wanted to talk about in this post. Marten and Wolverine in their current form will remain FOSS under the MIT license, but it’s past time to make a real business out of these tools.

I don’t know exactly how this is going to work out yet, but the core Marten team is actively planning on building a business around Marten and now Wolverine. I’m not sure if this will be the front company, but I’ve personally formed a new company named “Jasper Fx Software” for my own activity, though that’s going to be limited to just being side work for at least a while.

The general idea – so far – is to offer:

  • Support contracts for Marten 
  • Consulting services, especially for help modeling and maximizing the usage of the event sourcing support
  • Training workshops
  • Add-on products that provide the advanced features I described earlier in this post

Maybe success leads us to offering a SaaS model for Marten, but I see that as a long way down the road.

What think you, gentle reader? Does any of this sound attractive? Should we be focusing on something else altogether?