Wolverine meets EF Core and Sql Server

Heads up, you will need at least Wolverine 0.9.7 for these samples!

I’ve mostly been writing about Wolverine samples that involve its “critter stack” compatriot Marten as the persistence tooling. I’m obviously deeply invested in making that “critter stack” the highest productivity combination for server side development basically anywhere.

Today though, let’s go meet potential Wolverine users where they actually live and finally talk about how to integrate Entity Framework Core (EF Core) and SQL Server into Wolverine applications.

All of the samples in this post are from the EFCoreSample project in the Wolverine codebase. There’s also some newly published documentation about integrating EF Core with Wolverine now too.

Alright, let’s say that we’re building a simplistic web service to capture information about Item entities (so original) and we’ve decided to use SQL Server as the backing database and use EF Core as our ORM for persistence — and also use Wolverine as an in memory mediator because why not?

I’m going to start by creating a brand new project with the dotnet new webapi template. Next I’m going to add some Nuget references for:

  1. Microsoft.EntityFrameworkCore.SqlServer
  2. WolverineFx.SqlServer
  3. WolverineFx.EntityFrameworkCore
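
From the root of the project, those can be added with the dotnet CLI like so:

dotnet add package Microsoft.EntityFrameworkCore.SqlServer
dotnet add package WolverineFx.SqlServer
dotnet add package WolverineFx.EntityFrameworkCore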

Now, let’s say that I have a simplistic DbContext class to define my EF Core mappings like so:

public class ItemsDbContext : DbContext
{
    public ItemsDbContext(DbContextOptions<ItemsDbContext> options) : base(options)
    {
    }

    public DbSet<Item> Items { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Your normal EF Core mapping
        modelBuilder.Entity<Item>(map =>
        {
            map.ToTable("items");
            map.HasKey(x => x.Id);
            map.Property(x => x.Name);
        });
    }
}
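
For reference, the Item entity itself is trivial. Here’s a minimal sketch that’s consistent with the mapping above (the actual sample’s class may differ slightly):

public class Item
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}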

Now let’s switch to the Program file that holds all our application bootstrapping and configuration:

using ItemService;
using Microsoft.EntityFrameworkCore;
using Oakton;
using Oakton.Resources;
using Wolverine;
using Wolverine.EntityFrameworkCore;
using Wolverine.SqlServer;

var builder = WebApplication.CreateBuilder(args);

// Just the normal work to get the connection string out of
// application configuration
var connectionString = builder.Configuration.GetConnectionString("sqlserver");

// If you're okay with this, this will register the DbContext as normally,
// but make some Wolverine specific optimizations at the same time
builder.Services.AddDbContextWithWolverineIntegration<ItemsDbContext>(
    x => x.UseSqlServer(connectionString));

builder.Host.UseWolverine(opts =>
{
    // Setting up Sql Server-backed message storage
    // This requires a reference to Wolverine.SqlServer
    opts.PersistMessagesWithSqlServer(connectionString);

    // Enrolling all local queues into the
    // durable inbox/outbox processing
    opts.Policies.UseDurableLocalQueues();
});

// This is rebuilding the persistent storage database schema on startup
builder.Host.UseResourceSetupOnStartup();

builder.Services.AddControllers();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

// Make sure the EF Core db is set up.
// ItemsDbContext is registered as a scoped service, so resolve
// it from a scope rather than straight off the root container
using (var scope = app.Services.CreateScope())
{
    await scope.ServiceProvider.GetRequiredService<ItemsDbContext>().Database.EnsureCreatedAsync();
}

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.MapControllers();

app.MapPost("/items/create", (CreateItemCommand command, IMessageBus bus) => bus.InvokeAsync(command));

// Opt into using Oakton for command parsing
await app.RunOaktonCommands(args);

In the code above, I’ve:

  1. Added a service registration for the new ItemsDbContext EF Core class, but with a special Wolverine wrapper that adds some optimizations for us, quietly adds some mapping to the ItemsDbContext at runtime for the Wolverine message storage, and also enables Wolverine’s transactional middleware and stateful saga support for EF Core.
  2. Added Wolverine to the application, and used the PersistMessagesWithSqlServer() extension method to tell Wolverine to add message storage for SQL Server in the default dbo schema (that can be overridden; see the sketch after this list). This also adds Wolverine’s durable agent for its transactional outbox and inbox, running as a background service in an IHostedService.
  3. Directed the application to build out any missing database schema objects on application startup through the call to builder.Host.UseResourceSetupOnStartup(). If you’re curious, this is using Oakton’s stateful resource model.
  4. Had the application build the implied database schema from the ItemsDbContext as well, just for the sake of testing this little bugger.
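
As an aside on that second point, overriding the message storage schema should just be an extra argument to the same extension method. Here’s a minimal sketch, assuming that PersistMessagesWithSqlServer() accepts an optional schema name (double check the exact signature against the Wolverine documentation):

builder.Host.UseWolverine(opts =>
{
    // "wolverine" is an arbitrary schema name of my own choosing
    opts.PersistMessagesWithSqlServer(connectionString, "wolverine");
});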

Moving on, let’s build a simple message handler that creates a new Item, persists that with EF Core, and raises a new ItemCreated event message:

    public static ItemCreated Handle(
        // This would be the message
        CreateItemCommand command,

        // Any other arguments are assumed
        // to be service dependencies
        ItemsDbContext db)
    {
        // Create a new Item entity
        var item = new Item
        {
            Name = command.Name
        };

        // Add the item to the current
        // DbContext unit of work
        db.Items.Add(item);

        // This event being returned
        // by the handler will be automatically sent
        // out as a "cascading" message
        return new ItemCreated
        {
            Id = item.Id
        };
    }

Simple enough, but a couple notes about that code:

  • I didn’t explicitly call the SaveChangesAsync() method on our ItemsDbContext to commit the changes. That’s because Wolverine sees that the handler has a dependency on an EF Core DbContext type, so it automatically wraps its EF Core transactional middleware around the handler.
  • The ItemCreated object returned from the message handler is a Wolverine cascaded message, and will be sent out upon successful completion of the original CreateItemCommand message — including the transactional middleware that wraps the handler.
  • And oh, by the way, we want the ItemCreated message to be persisted in the underlying Sql Server database as part of the transaction being committed, so that Wolverine’s transactional outbox makes sure that message gets processed (eventually) even if the process fails somewhere between publishing the new message and that message being successfully processed.
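
By the way, the CreateItemCommand and ItemCreated message types aren’t shown in this post. A minimal sketch that’s consistent with how the handler uses them (the real sample’s definitions may differ) would be:

public class CreateItemCommand
{
    public string Name { get; set; }
}

public class ItemCreated
{
    public Guid Id { get; set; }
}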

I should also note a potentially significant performance optimization: because Wolverine persists the ItemCreated message as part of the same ItemsDbContext.SaveChangesAsync() call, it gets to piggyback on EF Core’s ability to batch changes to the database rather than incur the cost of extra network hops, as we would have with raw SQL.

Hopefully that’s all pretty easy to follow, even though there’s some “magic” there. If you’re curious, here’s the actual code that Wolverine is generating to handle the CreateItemCommand message (just remember that auto-generated code tends to be ugly as sin):

// <auto-generated/>
#pragma warning disable
using Microsoft.EntityFrameworkCore;

namespace Internal.Generated.WolverineHandlers
{
    // START: CreateItemCommandHandler1452615242
    public class CreateItemCommandHandler1452615242 : Wolverine.Runtime.Handlers.MessageHandler
    {
        private readonly Microsoft.EntityFrameworkCore.DbContextOptions<ItemService.ItemsDbContext> _dbContextOptions;

        public CreateItemCommandHandler1452615242(Microsoft.EntityFrameworkCore.DbContextOptions<ItemService.ItemsDbContext> dbContextOptions)
        {
            _dbContextOptions = dbContextOptions;
        }



        public override async System.Threading.Tasks.Task HandleAsync(Wolverine.Runtime.MessageContext context, System.Threading.CancellationToken cancellation)
        {
            await using var itemsDbContext = new ItemService.ItemsDbContext(_dbContextOptions);
            var createItemCommand = (ItemService.CreateItemCommand)context.Envelope.Message;
            var outgoing1 = ItemService.CreateItemCommandHandler.Handle(createItemCommand, itemsDbContext);
            // Outgoing, cascaded message
            await context.EnqueueCascadingAsync(outgoing1).ConfigureAwait(false);

        }

    }

    // END: CreateItemCommandHandler1452615242
}

So that’s EF Core within a Wolverine handler, and using SQL Server as the backing message store. One of the weaknesses of some of the older messaging tools in .NET is that they’ve long lacked a usable outbox feature outside of the context of their message handlers (both NServiceBus and MassTransit have just barely released “real” outbox features), but that’s a frequent need in the applications at my own shop and we’ve had to work around these limitations. Fortunately though, Wolverine’s outbox functionality is usable outside of message handlers.

As an example, let’s implement basically the same functionality we did in the message handler, but this time in an ASP.Net Core Controller method:

    [HttpPost("/items/create2")]
    public async Task Post(
        [FromBody] CreateItemCommand command,
        [FromServices] IDbContextOutbox<ItemsDbContext> outbox)
    {
        // Create a new Item entity
        var item = new Item
        {
            Name = command.Name
        };

        // Add the item to the current
        // DbContext unit of work
        outbox.DbContext.Items.Add(item);

        // Publish a message to take action on the new item
        // in a background thread
        await outbox.PublishAsync(new ItemCreated
        {
            Id = item.Id
        });

        // Commit all changes and flush persisted messages
        // to the persistent outbox
        // in the correct order
        await outbox.SaveChangesAndFlushMessagesAsync();
    }  

In the sample above I’m using the Wolverine IDbContextOutbox<T> service to wrap the ItemsDbContext and automatically enroll the EF Core service in Wolverine’s outbox. This service exposes all the possible ways to publish messages through Wolverine’s normal IMessageBus entrypoint.

Here’s a slightly different possible usage where I directly inject ItemsDbContext, but also a Wolverine IDbContextOutbox service:

    [HttpPost("/items/create3")]
    public async Task Post3(
        [FromBody] CreateItemCommand command,
        [FromServices] ItemsDbContext dbContext,
        [FromServices] IDbContextOutbox outbox)
    {
        // Create a new Item entity
        var item = new Item
        {
            Name = command.Name
        };

        // Add the item to the current
        // DbContext unit of work
        dbContext.Items.Add(item);

        // Gotta attach the DbContext to the outbox
        // BEFORE sending any messages
        outbox.Enroll(dbContext);
        
        // Publish a message to take action on the new item
        // in a background thread
        await outbox.PublishAsync(new ItemCreated
        {
            Id = item.Id
        });

        // Commit all changes and flush persisted messages
        // to the persistent outbox
        // in the correct order
        await outbox.SaveChangesAndFlushMessagesAsync();
    }   

That’s about all there is, but to sum it up:

  • Wolverine is able to use SQL Server as its persistent message store for durable messaging
  • There’s a ton of functionality around managing the database schema for you so you can focus on just getting stuff done
  • Wolverine has transactional middleware that can be applied automatically around your handlers as a way to simplify your message handlers while also getting the durable outbox messaging
  • EF Core is absolutely something that’s supported by Wolverine

Wolverine delivers the instrumentation you want and need

I’ve been able to talk and write a bit about Wolverine in the last couple weeks. This builds on my previous Wolverine blog posts.

Wolverine is absolutely meant for “grown up development”, which pretty well makes strong support for instrumentation a necessity both at development time and in production environments. To that end, here are a few things I believe about instrumentation that hopefully explain Wolverine’s approach:

  1. It’s effectively impossible in many cases to comprehensively performance or load test your applications in absolutely accurate ways ahead of actual customer usage. Maybe more accurately, we’re going to be completely blindsided by exactly how the customers of our system use it or what the client datasets are going to be like in reality. (By no means am I trying to imply that you shouldn’t attempt useful performance or load testing prior to deployment; it’s just that actual users and clients are unpredictable, and it’s better to be prepared to respond to unexpected performance issues.) Rather than gnash our teeth in futility over our lack of omniscience, we should strive to have very solid performance metric collection in our system that can spot potential performance issues and even describe why or how the performance problems exist.
  2. Logging code can easily overwhelm the “real” application code with repetitive noise and make the code as a whole harder to read and understand. This is especially true in code where developers rely on copious debugging statements a bit in lieu of strong testing. Personally, I want all the necessary application logging without having to obfuscate the application code with copious explicit logging statements. I’ll occasionally bump into folks who have a strong preference to eschew any kind of application framework and write the most explicit code possible. Put me in the opposite camp: I want my application code as clean as possible, and to delegate as much tedious overhead functionality as possible to an application framework.
  3. And finally, Open Telemetry might as well be a de facto requirement in all enterprise-y applications at this point, especially for distributed applications — which is exactly what Wolverine was originally designed to do!

Alright, on to what Wolverine already does out of the box.

Logging Integration

Wolverine does all of its logging through the standard .NET ILogger abstractions, and that integration happens automatically out of the box with any standard Wolverine setup using the UseWolverine() extension method like so:

builder.Host.UseWolverine(opts =>
{
    // Whatever Wolverine configuration your system needs...
});

So what’s logged? Out of the box:

  • Every message that’s sent, received, or processed successfully
  • Any kind of message processing failure
  • Any kind of error handling continuation like retries, “requeues,” or moving a message to the dead letter queue
  • All transport events like circuits closing or reopening
  • Background processing events in the durable inbox/outbox processing
  • Basically everything meaningful

When I look at my own shop’s legacy systems that heavily leverage NServiceBus for message handling, I see a lot of explicit logging code that I think would be absolutely superfluous when we move that code to Wolverine instead. Also, it’s de rigueur for newer .NET frameworks to come out of the box with ILogger integration, but that’s still something that’s frequently an explicit step in older .NET frameworks like many of Wolverine’s older competitors.
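
Because everything flows through ILogger, you can also tune Wolverine’s logging verbosity with the standard .NET logging configuration rather than anything Wolverine specific. Here’s a minimal sketch, with the assumption that “Wolverine” is the relevant logger category prefix (check the category names in your own application’s log output):

// In the Program file, quiet down Wolverine's logging to warnings and up
builder.Logging.AddFilter("Wolverine", LogLevel.Warning);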

Open Telemetry

Full support for Open Telemetry tracing including messages received, sent, and processed is completely out of the box in Wolverine through the System.Diagnostics.DiagnosticSource library. You do have to write just a tiny bit of explicit code to export any collected telemetry data. Here’s some sample code from a .NET application’s Program file to do exactly that, with a Jaeger exporter as well just for fun:

// builder.Services is an IServiceCollection object
builder.Services.AddOpenTelemetryTracing(x =>
{
    x.SetResourceBuilder(ResourceBuilder
            .CreateDefault()
            .AddService("OtelWebApi")) // <-- sets service name
        .AddJaegerExporter()
        .AddAspNetCoreInstrumentation()

        // This is absolutely necessary to collect the Wolverine
        // open telemetry tracing information in your application
        .AddSource("Wolverine");
});

I should also say, though, that the above code required a handful of additional dependencies just for all the Open Telemetry this or that:

    <ItemGroup>
        <PackageReference Include="OpenTelemetry" Version="1.3.0"/>
        <PackageReference Include="OpenTelemetry.Api" Version="1.3.0"/>
        <PackageReference Include="OpenTelemetry.Exporter.Jaeger" Version="1.3.0"/>
        <PackageReference Include="OpenTelemetry.Extensions.Hosting" Version="1.0.0-rc8"/>
        <PackageReference Include="OpenTelemetry.Instrumentation.AspNetCore" Version="1.0.0-rc8"/>
    </ItemGroup>

Wolverine itself does not have any direct dependency on an OpenTelemetry Nuget, in no small part because that stuff all seems a bit unstable right now :(.

Metrics

Wolverine also has quite a few out of the box metrics that are directly exposed through System.Diagnostics.Metrics.Meter, but alas, I’m out of time for tonight and that’s worthy of its own post all by itself. Next time out!
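
In the meantime, if you just want to peek at those measurements without wiring up a full metrics exporter, the BCL’s MeterListener will do. Here’s a minimal sketch; filtering on a meter name that contains “Wolverine” is an assumption on my part, so check the actual meter names first:

using System.Diagnostics.Metrics;

var listener = new MeterListener();
listener.InstrumentPublished = (instrument, l) =>
{
    // Assumption: Wolverine's meter name contains "Wolverine"
    if (instrument.Meter.Name.Contains("Wolverine"))
    {
        l.EnableMeasurementEvents(instrument);
    }
};

// Write any long-valued measurements (e.g. counters) to the console
listener.SetMeasurementEventCallback<long>((instrument, measurement, tags, state) =>
{
    Console.WriteLine($"{instrument.Name}: {measurement}");
});

listener.Start();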

My OSS Plans for 2023

Before I start, I am lucky to be part of a great group of OSS collaborators across the board. In particular, thanks to Oskar, Babu, Khalid, Hawxy, and Eric Smith for helping make 2022 a hugely productive and satisfying year in OSS work for me. I’m looking forward to working with y’all more in the times ahead.

In recent years I’ve kicked off my side project work with an overly optimistic and hopelessly unrealistic list of ambitions for my OSS projects. You can find the 2022 and 2021 versions still hanging around, only somewhat fulfilled. I’m going to put down my markers for what I hope to accomplish in 2023 — and because I’m the kind of person who obsesses more about the list of things to do rather than looking back at accomplishments, I’ll take some time to review what was done in many of these projects in 2022. Onward.

Marten is going gang busters, and 2022 was a very encouraging year for the Marten core team & me. The sizable V5.0 release dropped in March with some significant usability improvements, multi-tenancy with database per tenant(s) support, and other goodness specifically to deal with apparent flaws in the gigantic V4.0 release from late 2021.

For 2023, the V6 release will come soon, mostly with changes to underlying dependencies.

Beyond that, I think that V7 will be a massively ambitious release in terms of important new features — hopefully in time for Event Sourcing Live 2023. If I had a magic wand that would magically give us all enough bandwidth to pull it off, my big hopes for Marten V7 are:

  • The capability to massively scale the Event Store functionality in Marten to much, much larger systems
  • Improved throughput and capacity with asynchronous projections
  • A formal, in the box subscription model
  • The ability to shard document database entities
  • Dive into the Linq support again, but this time use Postgresql V15 specific functionality to make the generated queries more efficient — especially for any possible query that goes through child collections. I haven’t done the slightest bit of detailed analysis on that one yet though
  • The ability to rebuild projections with zero downtime and/or faster projection rebuilds

Marten will also be impacted by the work being done with…

After a couple years of having almost given up on it, I restarted work pretty heavily on what had been called Jasper. While building a sample application for a conference talk, Oskar & I realized there was some serious opportunity for combining Marten and the then-Jasper for very low ceremony CQRS architectures. Now, what’s the best way to revitalize an OSS project that was otherwise languishing and basically a failure in terms of adoption? You guessed it, rename the project with an obvious theme related to an already successful OSS project and get some new, spiffier graphics and better website! And basically all new internals, new features, quite a few performance improvements, better instrumentation capabilities, more robust error handling, and a unique runtime model that I very sincerely believe will lead to better developer productivity and better application performance than existing tools in the .NET space.

Hence, Wolverine is the new, improved message bus and local mediator (I like to call that a “command bus” so as to not suffer the obvious comparisons to MediatR which I feel shortchanges Wolverine’s much greater ambitions). Right now I’m very happy with the early feedback from Wolverine’s JetBrains webinar (careful, the API changed a bit since then) and its DotNetRocks episode.

Right now the goal is to make it to 1.0 by the end of January — with the proviso that Marten V6 has to go first. The remaining work is mostly to finish the documentation website and a handful of tactical feature items mostly to prove out some of the core abstractions before minting 1.0.

Luckily for me, a small group of us at work have started a proof of concept for rebuilding/converting/migrating a very large system currently using NHibernate, Sql Server, and NServiceBus to Wolverine + Marten. That’s going to be an absolutely invaluable learning experience that will undoubtedly shape the short term work in both tools.

Beyond 1.0, I’m hoping to effectively use Wolverine to level up on a lot of technologies by adding:

  • Some other transport options (Kafka? Kinesis? EventBridge?)
  • Additional persistence options with Cosmos Db and Dynamo Db being the likely candidates so far
  • A SignalR transport
  • First class serverless support using Wolverine’s runtime model, with some way of optimizing the cold start
  • An option to use Wolverine’s runtime model for ASP.Net Core API endpoints. I think there’s some opportunity to allow for a low ceremony, high performance alternative for HTTP API creation while still being completely within the ASP.Net Core ecosystem

I hope that Wolverine is successful by itself, but the real goal of Wolverine is to allow folks to combine it with Marten to form the….

“Critter Stack”

The hope with Marten + Wolverine is to create a very effective platform for server-side .NET development in general. More specifically, the goal of the “critter stack” combination is to become the acknowledged industry leader for building systems with a CQRS plus Event Sourcing architectural model. And I mean across all development platforms and programming languages.

Pride goeth before destruction, and an haughty spirit before a fall.

Proverbs 16:18 KJV

And let me just more humbly say that there’s a ways to go to get there, but I’m feeling optimistic right now and want to set our sights pretty high. I especially feel good about having unintentionally made a huge career bet on Postgresql.

Lamar recently got its 10.0 release to add first class .NET 7.0 support (while also dropping anything < .NET 6) and a couple performance improvements and bug fixes. There hasn’t been any new functionality added in the last year except for finally getting first class support for IAsyncDisposable. It’s unlikely that there will be much development in the new year for Lamar, but we use it at work, I still think it has advantages over the built in DI container from .NET, and it’s vital for Wolverine. Lamar is here to stay.

Alba

Alba 7.0 (and a couple minor releases afterward) added first class .NET 7 support, much better support for testing Minimal API routes that accept and/or return JSON, and other tactical fixes (mostly by Hawxy).

See Alba for Effective ASP.Net Core Integration Testing for more information on how Alba improved this year.

I don’t have any specific plans for Alba this year, but I use Alba to test pieces of Marten and Wolverine and we use it at work. If I manage to get my way, we’ll be converting as many slow, unreliable Selenium based tests to fast running Alba tests against HTTP endpoints in 2023 at work. Alba is here to stay.

Oakton had a significant new feature set around the idea of “stateful resources” added in 2022, specifically meant for supporting both Marten and Wolverine. We also cleaned up the documentation website. The latest version 6.0 brought Oakton up to .NET 7 while also using shared dependencies with the greater JasperFx family (Marten, Wolverine, Lamar, etc.). I don’t exactly remember when, but it also got better “help” presentation by leveraging Spectre.Console more.

I don’t have any specific plans for Oakton, but it’s the primary command line parser and command line utility library for Marten, Wolverine, and Lamar, so it’s going to be actively maintained.

And finally, I’ve registered my own company called “Jasper Fx Software.” It’s going much slower than I’d hoped, but at some point early in 2023 I’ll have my shingle out to provide support contracts, consulting, and custom development with the tools above. It’s just a side hustle for now, but we’ll see if that can become something viable over time.

To be clear about this, the Marten core team & I are very serious about building a paid, add-on model to Marten + Wolverine and some of the new features I described up above are likely to fall under that umbrella. I’m sneaking that in at the end of this, but that’s probably the main ambition for me personally in the new year.

What about?…

If it’s not addressed in this post, it’s either dead (StructureMap) or something I consider just to be a supporting player (Weasel). Storyteller, alas, is likely not coming back. Unless, that is, it comes back renamed as “Bobcat,” a tool specifically designed to help automate tests for Marten or Wolverine where xUnit.Net by itself doesn’t do so hot. And if Bobcat does end up existing, it’ll leverage existing tools as much as possible.

Wolverine and “Clone n’ Go!” Development

I’ve been able to talk and write a bit about Wolverine in the last couple weeks. This builds on my previous Wolverine blog posts.

When I start with a brand new codebase, I want to be able to be up and going mere minutes after doing an initial clone of the Git repository. And by “going,” I mean being able to run all the tests and run any kind of application in the codebase.

In most cases an application codebase I work with these days is going to have infrastructure dependencies. Usually a database, possibly some messaging infrastructure as well. Not to worry, because Wolverine has you covered with a lot of functionality out of the box to get your infrastructural dependencies configured in the shape you need to start running your application.

Before I get into Wolverine specifics, I’m assuming that the basic developer box has some baseline infrastructure installed:

  • The latest .NET SDK
  • Docker Desktop
  • Git itself
  • Node.js — not used by this post at all, but it’s almost impossible to not need Node.js at some point these days
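
As a quick sanity check on a fresh development box, it never hurts to verify the versions of those baseline tools from the command line:

dotnet --version
docker --version
git --version
node --version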

Yet again, I want to go back to the simple banking application from previous posts that was using Marten for persistence and Rabbit MQ for external messaging. Here’s the application bootstrapping:

using AppWithMiddleware;
using IntegrationTests;
using JasperFx.Core;
using Marten;
using Oakton;
using Wolverine;
using Wolverine.FluentValidation;
using Wolverine.Marten;
using Wolverine.RabbitMQ;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
{
    // This would be from your configuration file in typical usage
    opts.Connection(Servers.PostgresConnectionString);
    opts.DatabaseSchemaName = "wolverine_middleware";
})
    // This is the wolverine integration for the outbox/inbox,
    // transactional middleware, saga persistence we don't care about
    // yet
    .IntegrateWithWolverine()
    
    // Just letting Marten build out known database schema elements upfront
    // Helps with Wolverine integration in development
    .ApplyAllDatabaseChangesOnStartup();

builder.Host.UseWolverine(opts =>
{
    // Middleware introduced in previous posts
    opts.Handlers.AddMiddlewareByMessageType(typeof(AccountLookupMiddleware));
    opts.UseFluentValidation();

    // Explicit routing for the AccountUpdated
    // message handling. This has precedence over conventional routing
    opts.PublishMessage<AccountUpdated>()
        .ToLocalQueue("signalr")

        // Throw the message away if it's not successfully
        // delivered within 10 seconds
        .DeliverWithin(10.Seconds())
        
        // Not durable
        .BufferedInMemory();
    
    var rabbitUri = builder.Configuration.GetValue<Uri>("rabbitmq-broker-uri");
    opts.UseRabbitMq(rabbitUri)
        // Just do the routing off of conventions, more or less
        // queue and/or exchange based on the Wolverine message type name
        .UseConventionalRouting()
        
        // This tells Wolverine to set up any missing Rabbit MQ queues, exchanges,
        // or bindings needed by the application if they are missing
        .AutoProvision() 
        .ConfigureSenders(x => x.UseDurableOutbox());
});

var app = builder.Build();

// One Minimal API that just delegates directly to Wolverine
app.MapPost("/accounts/debit", (DebitAccount command, IMessageBus bus) => bus.InvokeAsync(command));

// This is important, I'm opting into Oakton to be my
// command line executor for extended options
return await app.RunOaktonCommands(args);

After cloning this codebase, I should be able to quickly run a docker compose up -d command from the root of the codebase to set up dependencies like this:

version: '3'
services:
  postgresql:
    image: "clkao/postgres-plv8:latest"
    ports:
     - "5433:5432"
    environment:
      - POSTGRES_DATABASE=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  
  rabbitmq:
    image: "rabbitmq:3-management"
    ports:
     - "5672:5672"
     - "15672:15672"

As it is, the Wolverine setup I showed above would allow you to immediately be up and running because:

  • In its default setting Marten is able to detect and build out missing database schema objects in the underlying application database at runtime
  • The Postgresql database schema objects necessary for Wolverine’s transactional outbox are built out by Marten at bootstrapping time if they’re missing, through the combination of the IntegrateWithWolverine() call and the ApplyAllDatabaseChangesOnStartup() declaration.
  • Any missing Rabbit MQ queues or exchanges are created at runtime due to the AutoProvision() declaration we made in the Rabbit MQ integration with Wolverine

Cool, right?

But there’s more! Wolverine heavily uses the related Oakton library for expanded command line utilities that can be helpful for diagnosing configuration issues, checking up on infrastructure, or applying infrastructure setup at deployment time instead of depending on doing things at runtime.

If I go to the root of the main project and type dotnet run -- help, I’ll get a list of the available command line options like this:

The available commands are:

  Alias           Description
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  check-env       Execute all environment checks against the application
  codegen         Utilities for working with JasperFx.CodeGeneration and JasperFx.RuntimeCompiler
  db-apply        Applies all outstanding changes to the database(s) based on the current configuration
  db-assert       Assert that the existing database(s) matches the current configuration
  db-dump         Dumps the entire DDL for the configured Marten database
  db-patch        Evaluates the current configuration against the database and writes a patch and drop file if there are any differences
  describe        Writes out a description of your running application to either the console or a file
  help            List all the available commands
  marten-apply    Applies all outstanding changes to the database based on the current configuration
  marten-assert   Assert that the existing database matches the current Marten configuration
  marten-dump     Dumps the entire DDL for the configured Marten database
  marten-patch    Evaluates the current configuration against the database and writes a patch and drop file if there are any differences
  projections     Marten's asynchronous projection and projection rebuilds
  resources       Check, setup, or teardown stateful resources of this system
  run             Start and run this .Net application
  storage         Administer the envelope storage


Use dotnet run -- ? [command name] or dotnet run -- help [command name] to see usage help about a specific command

Let me call out just a few highlights:

  • `dotnet run -- resources setup` would do any necessary setup of both the Marten and Rabbit MQ items. Likewise, if we were using Sql Server as the backing storage and integrating that with Wolverine as the outbox storage, this command would set up the necessary Sql Server tables and functions if they were missing. This generically applies as well to Wolverine’s Azure Service Bus or Amazon SQS integrations
  • `dotnet run -- check-env` would run a set of environment checks to verify that the application can connect to the configured Rabbit MQ broker, the Postgresql database, and any other checks you may have. This is a great way to make deployments “fail fast”
  • `dotnet run -- storage clear` would delete any persisted messages in the Wolverine inbox/outbox to remove old messages that might interfere with successful testing

Questions, comments, feedback? Hopefully this shows that Wolverine is absolutely intended for “grown up development” in real life.

Ephemeral Messages with Wolverine

I’ve been able to talk and write a bit about Wolverine in the last couple weeks. This builds on my previous Wolverine blog posts.

This post is a little bonus content that I accidentally cut from the previous post.

Last time I talked about Wolverine’s support for the transactional outbox pattern for messages that just absolutely have to be delivered. About the same day that I was writing that post, I was also talking with a colleague through a very different messaging scenario, where status updates were being streamed to WebSocket-connected clients. In this case, the individual messages being broadcast only had temporary validity and were quickly obsolete. There’s absolutely no need for message persistence or guaranteed delivery. There’s also no good reason to even attempt to deliver a message in this case that’s more than a few seconds old.

To that end, let’s go back yet again to the command handler for the DebitAccount command, but in this version I’m going to cascade an AccountUpdated message that would ostensibly be broadcast through WebSockets to any connected client:

    [Transactional] 
    public static IEnumerable<object> Handle(
        DebitAccount command, 
        Account account, 
        IDocumentSession session)
    {
        account.Balance -= command.Amount;
     
        // This just marks the account as changed, but
        // doesn't actually commit changes to the database
        // yet. That actually matters as I hopefully explain
        session.Store(account);
 
        // Conditionally trigger other, cascading messages
        if (account.Balance > 0 && account.Balance < account.MinimumThreshold)
        {
            yield return new LowBalanceDetected(account.Id);
        }
        else if (account.Balance < 0)
        {
            yield return new AccountOverdrawn(account.Id);
        }

        // Send out a status update message that is maybe being 
        // broadcast to websocket-connected clients
        yield return new AccountUpdated(account.Id, account.Balance);
    }

Now I need to switch to the Wolverine bootstrapping and configure some explicit routing of the AccountUpdated message. In this case, I’m going to let the WebSocket messaging of the AccountUpdated messages happen from a non-durable, local queue:

builder.Host.UseWolverine(opts =>
{
    // Middleware introduced in previous posts
    opts.Handlers.AddMiddlewareByMessageType(typeof(AccountLookupMiddleware));
    opts.UseFluentValidation();

    // Explicit routing for the AccountUpdated
    // message handling. This has precedence over conventional routing
    opts.PublishMessage<AccountUpdated>()
        .ToLocalQueue("signalr")

        // Throw the message away if it's not successfully
        // delivered within 10 seconds
        
        // THIS CONFIGURATION ITEM WAS ADDED IN v0.9.6
        .DeliverWithin(10.Seconds())
        
        // Not durable
        .BufferedInMemory();
    
    var rabbitUri = builder.Configuration.GetValue<Uri>("rabbitmq-broker-uri");
    opts.UseRabbitMq(rabbitUri)
        // Just do the routing off of conventions, more or less
        // queue and/or exchange based on the Wolverine message type name
        .UseConventionalRouting()
        .ConfigureSenders(x => x.UseDurableOutbox());

});

The call to DeliverWithin(10.Seconds()) puts a rule on the local “signalr” queue that all messages published to that queue will have an effective expiration date of 10 seconds from the point at which the message was published. If the web socket publishing is backed up, or there’s a couple failure/retry cycles that delays the message, Wolverine will discard the message before it’s processed.

This option is perfect for transient status messages that have short shelf lives. Wolverine also lets you happily mix and match durable messaging and transient messages in the same message batch, as I hope is evident in the sample handler method in the first code sample.

Lastly, I used a fluent interface to apply the “deliver within” rule at the local queue level. That can also be applied at the message type level with an attribute like this alternative usage:

// The attribute directs Wolverine to send this message with 
// a "deliver within 5 seconds, or discard" directive
[DeliverWithin(5)]
public record AccountUpdated(Guid AccountId, decimal Balance);

Or, finally, I can set the “deliver within” rule on a message by message basis at the time of sending the message, like so:

        // "messaging" is a Wolverine IMessageContext or IMessageBus service 
        // Do the deliver within rule on individual messages
        await messaging.SendAsync(new AccountUpdated(account.Id, account.Balance),
            new DeliveryOptions { DeliverWithin = 5.Seconds() });

I’ll try to sneak in one more post before mostly shutting down for Christmas and New Year’s. Next time up I’d like to talk about Wolverine’s support for grown up “clone n’ go” development through its facilities for configuring infrastructure like Postgresql or Rabbit MQ for you based on your application configuration.

Transactional Outbox/Inbox with Wolverine and why you care

I’ve been able to talk and write a bit about Wolverine in the last couple weeks. This builds on my last two blog posts.

Alright, back to the sample message handler from my previous two blog posts. Here’s the shorthand version:

    [Transactional] 
    public static IEnumerable<object> Handle(
        DebitAccount command, 
        Account account, 
        IDocumentSession session)
    {
        account.Balance -= command.Amount;
     
        // This just marks the account as changed, but
        // doesn't actually commit changes to the database
        // yet. That actually matters as I hopefully explain
        session.Store(account);
 
        // Conditionally trigger other, cascading messages
        if (account.Balance > 0 && account.Balance < account.MinimumThreshold)
        {
            yield return new LowBalanceDetected(account.Id);
        }
        else if (account.Balance < 0)
        {
            yield return new AccountOverdrawn(account.Id);
         
            // Give the customer 10 days to deal with the overdrawn account
            yield return new EnforceAccountOverdrawnDeadline(account.Id);
        }
    }

And just for the sake of completeness, here is a longhand, completely equivalent version of the same handler:

[Transactional] 
public static async Task Handle(
    DebitAccount command, 
    Account account, 
    IDocumentSession session, 
    IMessageContext messaging)
{
    account.Balance -= command.Amount;
     
    // This just marks the account as changed, but
    // doesn't actually commit changes to the database
    // yet. That actually matters as I hopefully explain
    session.Store(account);
 
    if (account.Balance > 0 && account.Balance < account.MinimumThreshold)
    {
        await messaging.SendAsync(new LowBalanceDetected(account.Id));
    }
    else if (account.Balance < 0)
    {
        await messaging.SendAsync(new AccountOverdrawn(account.Id));
         
        // Give the customer 10 days to deal with the overdrawn account
        await messaging.ScheduleAsync(new EnforceAccountOverdrawnDeadline(account.Id), 10.Days());
    }
}

To review just a little bit, that Wolverine style message handler at runtime is committing changes to an Account in the underlying database and potentially sending out additional messages based on the state of the Account. If you’re experienced with asynchronous messaging systems and hear me say that Wolverine does not support any kind of 2 phase commit between the database and message brokers, you’re probably already concerned about some potential problems in that code above:

  • Maybe the database changes fail, but there are “ghost” messages already queued that pertain to data changes that never actually happened
  • Maybe the messages actually manage to get through to their downstream handlers and are applied erroneously because the related database changes have not yet been applied. That’s a race condition that absolutely happens if you’re not careful (ask me how I know 😦 )
  • Maybe the database changes succeed, but the messages fail to be sent because of a network hiccup or who knows what problem happens with the message broker

Needless to say, there’s genuinely a lot of potential problems from that handful of lines of code up above. Some of you reading this have probably already said to yourself that this calls for using some sort of transactional outbox — and Wolverine thinks so too!

The general idea of an “outbox” is to obviate the lack of true 2 phase commits by ensuring that outgoing messages are held until the database transaction is successful, then somehow guaranteeing that the messages will be sent out afterward. In the case of Wolverine and its integration with Marten, the order of operations in the message handler (in either version) shown above is to:

  1. Tell Marten that the Account document needs to be persisted. Nothing happens at this point other than marking the document as changed
  2. The handler creates messages that are registered with the current IMessageContext. Again, the messages do not actually go out here; instead, Wolverine routes them upfront so that it knows exactly how and where they should be sent later
  3. The Wolverine + Marten [Transactional] middleware is calling the Marten IDocumentSession.SaveChangesAsync() method that makes the changes to the Account document and also creates new database records to persist any outgoing messages in the underlying Postgresql application database in one single, native database transaction. Even better, with the Marten integration, all the database operations are even happening in one single batched database call for maximum efficiency.
  4. When Marten successfully commits the database transaction, it tells Wolverine to “flush” the outgoing messages to the sending agents in Wolverine (depending on configuration and exact transport type, the messages might be sent “inline” or batched up with other messages to go out later).

To be clear, Wolverine also supports a transactional outbox with EF Core against either Sql Server or Postgresql. I’ll blog and/or document that soon.

The integration with Marten that’s in the WolverineFx.Marten Nuget isn’t that bad (I hope). First off, in my application bootstrapping I chain the IntegrateWithWolverine() call to the standard Marten bootstrapping like this:

using Wolverine.Marten;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
{
    // This would be from your configuration file in typical usage
    opts.Connection(Servers.PostgresConnectionString);
    opts.DatabaseSchemaName = "wolverine_middleware";
})
    // This is the wolverine integration for the outbox/inbox,
    // transactional middleware, saga persistence we don't care about
    // yet
    .IntegrateWithWolverine()
    
    // Just letting Marten build out known database schema elements upfront
    // Helps with Wolverine integration in development
    .ApplyAllDatabaseChangesOnStartup();

For the moment, I’m going to say that all the “cascading messages” from the DebitAccount message handler are being handled by local, in memory queues. At this point — and I’d love to have feedback on the applicability or usability of this approach — each endpoint has to be explicitly enrolled into the durable outbox or inbox (for incoming, listening endpoints) mechanics. Knowing both of those things, I’m going to add a little bit of configuration to make every local queue durable:

builder.Host.UseWolverine(opts =>
{
    // Middleware introduced in previous posts
    opts.Handlers.AddMiddlewareByMessageType(typeof(AccountLookupMiddleware));
    opts.UseFluentValidation();
    
    // The nomenclature might be inconsistent here, but the key
    // point is to make the local queues durable
    opts.Policies
        .AllLocalQueues(x => x.UseDurableInbox());
});

If instead I chose to publish some of the outgoing messages with Rabbit MQ to other processes (or just want the messages queued), I can add the WolverineFx.RabbitMQ Nuget and change the bootstrapping to this:

builder.Host.UseWolverine(opts =>
{
    // Middleware introduced in previous posts
    opts.Handlers.AddMiddlewareByMessageType(typeof(AccountLookupMiddleware));
    opts.UseFluentValidation();

    var rabbitUri = builder.Configuration.GetValue<Uri>("rabbitmq-broker-uri");
    opts.UseRabbitMq(rabbitUri)
        // Just do the routing off of conventions, more or less
        // queue and/or exchange based on the Wolverine message type name
        .UseConventionalRouting()
        .ConfigureSenders(x => x.UseDurableOutbox());
});

I just threw a bunch of details at you all, so let me try to anticipate a couple questions you might have and also try to answer them:

  • Do the messages get delivered before the transaction completes? No, they’re held in memory until the transaction completes, then get sent
  • What happens if the message delivery fails? The Wolverine sending agents run in a hosted service within your application. When message delivery fails, the sending agent will try it again up to a configurable number of times (100 is the default). Read the next question though before the “100” number bugs you:
  • What happens if the whole message broker is down? Wolverine’s sending agents have a crude circuit breaker and will stop trying to send message batches if there are too many failures in a period of time, then resume sending after a periodic “ping” message gets through. Long story short, Wolverine will buffer outgoing messages in the application database until Wolverine is able to reach the message broker.
  • What happens if the application process fails between the transaction succeeding and the message getting to the broker? The message will be recovered and sent by either another active node of the application if running in a cluster, or by restarting the single application process.
  • So you can do this in a cluster without sending the message multiple times? Yep.
  • What if you have zillions of stored messages and you restart the application, will it overwhelm the process and cause harm? It’s paged, distributes a bit between nodes, and there’s some back pressure to keep it from having too many outgoing messages in memory.
  • Can I use Sql Server instead? Yes. But for the moment, it’s like the scene in Blues Brothers when Elwood asks what kinds of music they have and the waitress replies “we have both kinds, Country and Western.”
  • Can I tell Wolverine to throw away a message that’s old and maybe out of date if it still hasn’t been processed? Yes, and I’ll show a bit of that in the next post.
  • What about messages that are routed to a non-durable endpoint as part of an outbox’d transaction? Good question! Wolverine is still holding those messages in memory until the message being processed successfully finishes, then kicks them out to in memory sending agents. Those sending agents have their own internal queues and retry loops for maximum resiliency. And actually for that matter, Wolverine has a built in in memory outbox to at least deal with ordering between the message processing and actually sending outgoing messages.

Next Time

WordPress just cut off the last section, so I’ll write a short follow up on mixing in non-durable message queues with message expirations. Next week I’ll keep going with this sample application by discussing how Wolverine & its friends try really hard for a “clone n’ go” developer workflow, where you can be up and running mere minutes after a fresh clone of the codebase with all the database & message broker infrastructure up and going.

How Wolverine allows for easier testing

Yesterday I blogged about the new Wolverine alpha release with a sample that hopefully showed off how Wolverine’s different approach can lead to better developer productivity and higher performance than similar tools. Today I want to follow up on that by extending the code sample with other functionality, but then diving into how Wolverine (hopefully) makes automated unit or integration testing easier than what you may be used to.

From yesterday’s sample, I showed this small message handler for applying a debit to a bank account from an incoming message:

public static class DebitAccountHandler
{
    [Transactional] 
    public static void Handle(DebitAccount command, Account account, IDocumentSession session)
    {
        account.Balance -= command.Amount;
        session.Store(account);
    }
}

Today let’s extend this to:

  1. Raise an event if the balance gets below a specified threshold for the account
  2. Raise a different event if the balance goes negative, but also…
  3. Send a second “timeout” message to carry out some kind of enforcement action against the account if it is still negative by then

Here’s the new event and command messages:

public record LowBalanceDetected(Guid AccountId) : IAccountCommand;
public record AccountOverdrawn(Guid AccountId) : IAccountCommand;

// We'll change this in a little bit
public class EnforceAccountOverdrawnDeadline : IAccountCommand
{
    public Guid AccountId { get; }

    public EnforceAccountOverdrawnDeadline(Guid accountId)
    {
        AccountId = accountId;
    }
}

Now, we could extend the message handler to raise the necessary events and the overdrawn enforcement command message like so:

    [Transactional] 
    public static async Task Handle(
        DebitAccount command, 
        Account account, 
        IDocumentSession session, 
        IMessageContext messaging)
    {
        account.Balance -= command.Amount;
        
        // This just marks the account as changed, but
        // doesn't actually commit changes to the database
        // yet. That actually matters as I hopefully explain
        session.Store(account);

        if (account.Balance > 0 && account.Balance < account.MinimumThreshold)
        {
            await messaging.SendAsync(new LowBalanceDetected(account.Id));
        }
        else if (account.Balance < 0)
        {
            await messaging.SendAsync(new AccountOverdrawn(account.Id));
            
            // Give the customer 10 days to deal with the overdrawn account
            await messaging.ScheduleAsync(new EnforceAccountOverdrawnDeadline(account.Id), 10.Days());
        }
    }

And just to add a little more context, here’s part of what the message handler for the EnforceAccountOverdrawnDeadline could look like:

    public static void Handle(EnforceAccountOverdrawnDeadline command, Account account)
    {
        // Don't do anything if the balance has been corrected
        if (account.Balance >= 0) return;
        
        // Dunno, send in the goons? Report them to a credit agency? Guessing
        // nothing pleasant happens here
    }

Alrighty then, back to the new version of the message handler that raises extra event messages depending on the state of the account. You’ll notice that I used method injection to pass in the Wolverine IMessageContext for the current message being handled. That gives me the ability to spawn additional messages and even schedule the execution of a command for a later time. You should also notice that I had to make the handler method asynchronous because the various SendAsync() calls return ValueTask, so it’s a little uglier now. Don’t worry, we’re going to come back to that, so don’t settle for this quite yet.

I’m going to leave this for the next post, but if you’re experienced with asynchronous messaging you’re screaming that there’s a potential race condition or risk of phantom data or messages between the extra messages going out and the Account being committed. Tomorrow I’ll discuss how Wolverine’s transactional outbox support removes those very real, very common problems in asynchronous message processing.

So let’s jump into what a unit test could look like for the message handler for the DebitAccount method. To start with, I’ll use Wolverine’s built in TestMessageContext to act as a “spy” on the method. A couple tests might look like this using my typical testing stack of xUnit.Net, Shouldly, and NSubstitute:

public class when_the_account_is_overdrawn : IAsyncLifetime
{
    private readonly Account theAccount = new Account
    {
        Balance = 1000,
        MinimumThreshold = 100,
        Id = Guid.NewGuid()
    };

    private readonly TestMessageContext theContext = new TestMessageContext();
    
    // I happen to like NSubstitute for mocking or dynamic stubs
    private readonly IDocumentSession theDocumentSession = Substitute.For<IDocumentSession>();
    


    public async Task InitializeAsync()
    {
        var command = new DebitAccount(theAccount.Id, 1200);
        await DebitAccountHandler.Handle(command, theAccount, theDocumentSession, theContext);
    }

    [Fact]
    public void the_account_balance_should_be_negative()
    {
        theAccount.Balance.ShouldBe(-200);
    }

    [Fact]
    public void raises_an_account_overdrawn_message()
    {
        // ShouldHaveMessageOfType() is an extension method in 
        // Wolverine itself to facilitate unit testing assertions like this
        theContext.Sent.ShouldHaveMessageOfType<AccountOverdrawn>()
            .AccountId.ShouldBe(theAccount.Id);
    }

    [Fact]
    public void raises_an_overdrawn_deadline_message_in_10_days()
    {
        var scheduledTime  = theContext.ScheduledMessages()
            // Also an extension method in Wolverine for testing
            .ShouldHaveEnvelopeForMessageType<EnforceAccountOverdrawnDeadline>()
            .ScheduledTime;
        
        // Um, do something to verify that the scheduled time is 10 days from this moment
        // and also:
        //  https://github.com/JasperFx/wolverine/issues/110
    }

    public Task DisposeAsync()
    {
        return Task.CompletedTask;
    }
}

It’s not horrendous, and I’ve seen much, much worse in real life code. All the same though, let’s aim for easier code to test by removing more infrastructure code and trying to get to purely synchronous code. To get there, I’m first going to start with the EnforceAccountOverdrawnDeadline message type and change it slightly to this:

// I'm hard coding the delay time for execution, just
// go with that for now please:)
public record EnforceAccountOverdrawnDeadline(Guid AccountId) : TimeoutMessage(10.Days()), IAccountCommand;

And now back to the Handle(DebitAccount) handler, where we’ll use Wolverine’s concept of cascading messages to simplify the handler and make it completely synchronous:

    [Transactional] 
    public static IEnumerable<object> Handle(
        DebitAccount command, 
        Account account, 
        IDocumentSession session)
    {
        account.Balance -= command.Amount;
        
        // This just marks the account as changed, but
        // doesn't actually commit changes to the database
        // yet. That actually matters, as I'll explain later.
        session.Store(account);

        if (account.Balance > 0 && account.Balance < account.MinimumThreshold)
        {
            yield return new LowBalanceDetected(account.Id);
        }
        else if (account.Balance < 0)
        {
            yield return new AccountOverdrawn(account.Id);
            
            // Give the customer 10 days to deal with the overdrawn account
            yield return new EnforceAccountOverdrawnDeadline(account.Id);
        }
    }

Now, we’re able to mostly use state-based testing, eliminate the fake IMessageContext, and work with strictly synchronous code. Here’s the new version of the test class from before:

public class when_the_account_is_overdrawn 
{
    private readonly Account theAccount = new Account
    {
        Balance = 1000,
        MinimumThreshold = 100,
        Id = Guid.NewGuid()
    };

    // I happen to like NSubstitute for mocking or dynamic stubs
    private readonly IDocumentSession theDocumentSession = Substitute.For<IDocumentSession>();
    private readonly object[] theOutboundMessages;

    public when_the_account_is_overdrawn()
    {
        var command = new DebitAccount(theAccount.Id, 1200);
        theOutboundMessages = DebitAccountHandler.Handle(command, theAccount, theDocumentSession)
            .ToArray();
    }

    [Fact]
    public void the_account_balance_should_be_negative()
    {
        theAccount.Balance.ShouldBe(-200);
    }

    [Fact]
    public void raises_an_account_overdrawn_message()
    {
        // ShouldHaveMessageOfType() is an extension method in 
        // Wolverine itself to facilitate unit testing assertions like this
        theOutboundMessages.ShouldHaveMessageOfType<AccountOverdrawn>()
            .AccountId.ShouldBe(theAccount.Id);
    }

    [Fact]
    public void raises_an_overdrawn_deadline_message_in_10_days()
    {
        theOutboundMessages
            // Also an extension method in Wolverine for testing
            .ShouldHaveEnvelopeForMessageType<EnforceAccountOverdrawnDeadline>();
    }

    [Fact]
    public void should_not_raise_account_balance_low_event()
    {
        theOutboundMessages.ShouldHaveNoMessageOfType<LowBalanceDetected>();
    }
}
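
If you do want to verify that the account data was stored, that’s the one remaining interaction-based assertion, and it’s a one liner with NSubstitute. A hypothetical extra test for the class above might be:

    [Fact]
    public void stores_the_account()
    {
        // Verify the handler marked the account document as changed
        theDocumentSession.Received().Store(theAccount);
    }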

The second version of both the handler method and the accompanying unit test is arguably simpler because:

  1. We were able to make the handler method synchronous, which removes some boilerplate code. That’s especially helpful if you use xUnit.Net, because it allows us to eschew the IAsyncLifetime mechanics.
  2. Except for verifying that the account data was stored, all of the unit test code is now using state-based testing, which is generally easier to understand and write than interaction-based tests that necessarily depend on mock objects.

Wolverine in general also made the handler method easier to test through the middleware I introduced in my previous post, which “pushes” the Account data into the handler method instead of making you jump through data access code and potential mock/stub object setup to inject the data inputs.

At the end of the day, I think that Wolverine not only does quite a bit to simplify your actual application code by isolating business functionality away from infrastructure, but it also leads to more easily testable code for effective Test Driven Development.

But what about……….?

I meant to also show Wolverine’s built in integration testing support, but to be honest, I’m about to meet a friend for lunch and I’ve gotta wrap this up in the next 10 minutes. In subsequent posts I’m going to stick with this example and extend that into integration testing across the original message and into the cascading messages. I’ll also get into the very important details about Wolverine’s transactional outbox support.

Introducing Wolverine for Effective Server Side .NET Development

TL;DR — Wolverine’s runtime model is significantly different than other tools with similar functionality in the .NET world in a way that leads to simpler application code and more efficient runtime execution.

I was able to push a new version of Wolverine today based on the newly streamlined API worked out in this GitHub issue. Big thanks to Oskar, Eric, and Blake for their help in coming to what I feel turned out be a great improvement in usability — even though I took some convincing to get there. Also some huge thanks to Babu for the website scaffolding and publishing, and Khalid for all his graphics help and general encouragement.

The Wolverine docs — such as they are — are up on the Wolverine website.

In a nutshell, Wolverine is a mediator and message bus tool. There are plenty of those tools already in the .NET space, so let me drop right into how Wolverine’s execution pipeline is genuinely unique and potentially does much more than older tools to improve developer productivity.

I’m going to build a very crude banking service that includes a message endpoint that will need to:

  1. Accept a message to apply a debit to a given account
  2. Verify that the debit amount is non-zero before you do anything else
  3. Load the information for the designated account from the database
  4. Apply the debit to the current balance of the account
  5. Persist the changes in balance back to the database

I’ll introduce “cascaded” messages tomorrow for business rules like an account reaching a low balance or being overdrawn, but I’m ignoring that today to keep this a smaller post.

While Wolverine supports EF Core and SQL Server as well, I unsurprisingly want to use Marten as a lower ceremony approach to application persistence in this particular case.

Before I even try to write the message handler, let me skip a couple design steps and say that I’m going to utilize three different sets of middleware to deal with cross cutting concerns:

  1. I’m going to use Wolverine’s built in Fluent Validation middleware to apply any known validation rules for the incoming messages. I’m honestly not sure I’d use this in real life, but this was built out and demo’d today as a way to demonstrate what’s “special” about Wolverine’s runtime architecture.
  2. Wolverine’s transactional & outbox middleware (duh)
  3. Custom middleware to load and push account data related to the incoming message to the message handler, or log and abort the message processing when the account data referenced in the message does not exist — and this more than anything is where the example code will show off Wolverine’s different approach. This example came directly from a common use case in a huge system at my work that uses NServiceBus.

Keeping in mind that we’ll be using some Wolverine middleware in a second, here’s the simple message handler to implement exactly the numbered list above:

public static class DebitAccountHandler
{
    // This explicitly adds the transactional middleware
    // The Fluent Validation middleware is applied because there's a validator
    // The Account argument is passed in by the AccountLookupMiddleware middleware
    [Transactional] // This could be done w/ a policy, but I'm opting to do this explicitly here
    public static void Handle(
        // The actual command message
        DebitAccount command, 
        
        // The current data for the account stored in the database
        // This will be "pushed" in by middleware
        Account account, 
        
        // The Marten document session service scoped to the 
        // current message being handled. 
        // Wolverine supports method injection similar to ASP.NET minimal api
        IDocumentSession session)
    {
        // decrement the balance
        account.Balance -= command.Amount;
        
        // Just telling Marten that this account document changed
        // so that it can be persisted by the middleware
        session.Store(account);
    }
}

I would argue that that handler method is very easy to understand. By stripping away so many infrastructure concerns, a developer is able to mostly focus on business logic in isolation, even without having to introduce all the baggage of some sort of hexagonal architecture style. Moreover, using Wolverine’s middleware allowed me to write purely synchronous code, which also reduces the code noise. Finally, by being able to “push” the business entity state into the method, I’m much more able to quickly write unit tests for the code and do TDD as I work.

Every message processing tool has middleware strategies for validation or transaction handling, but let’s take the example of the account data instead. When reviewing a very large system at work that uses NServiceBus, I noticed a common pattern of needing to load an entity from the database related to the incoming message and aborting the message handling if the entity does not exist. It’s an obvious opportunity for using middleware to eliminate the duplicated code.

First off, we need some way to “know” what the account id is for the incoming message. In this case I chose to use a marker interface just because that’s easy:

public interface IAccountCommand
{
    Guid AccountId { get; }
}

And the DebitAccount command becomes:

public record DebitAccount(Guid AccountId, decimal Amount) : IAccountCommand;

The actual middleware implementation is this:

// This is *a* way to build middleware in Wolverine by basically just
// writing functions/methods. There's a naming convention that
// looks for Before/BeforeAsync or After/AfterAsync
public static class AccountLookupMiddleware
{
    // The message *has* to be first in the parameter list
    // Before or BeforeAsync tells Wolverine this method should be called before the actual action
    public static async Task<(HandlerContinuation, Account?)> BeforeAsync(
        IAccountCommand command, 
        ILogger logger, 
        IDocumentSession session, 
        CancellationToken cancellation)
    {
        var account = await session.LoadAsync<Account>(command.AccountId, cancellation);
        if (account == null)
        {
            logger.LogInformation("Unable to find an account for {AccountId}, aborting the requested operation", command.AccountId);
        }
        
        return (account == null ? HandlerContinuation.Stop : HandlerContinuation.Continue, account);
    }
}

There’s also a Fluent Validation validator for the command (again, not sure I’d actually do it this way myself, but it shows off Wolverine’s middleware capabilities):

public class DebitAccountValidator : AbstractValidator<DebitAccount>
{
    public DebitAccountValidator()
    {
        RuleFor(x => x.Amount).GreaterThan(0);
    }
}

Stepping back to the actual application, I first added the WolverineFx.Marten Nuget reference to a brand new ASP.Net Core web api application, and made the following Program file to bootstrap the application:

using AppWithMiddleware;
using IntegrationTests;
using Marten;
using Oakton;
using Wolverine;
using Wolverine.FluentValidation;
using Wolverine.Marten;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
{
    // This would be from your configuration file in typical usage
    opts.Connection(Servers.PostgresConnectionString);
    opts.DatabaseSchemaName = "wolverine_middleware";
}).IntegrateWithWolverine()
    // Just letting Marten build out known database schema elements upfront
    .ApplyAllDatabaseChangesOnStartup();

builder.Host.UseWolverine(opts =>
{
    // Custom middleware to load and pass account data into message
    // handlers
    opts.Handlers.AddMiddlewareByMessageType(typeof(AccountLookupMiddleware));

    // This will register all the Fluent Validation validators, and
    // apply validation middleware where the command type has
    // a validator
    opts.UseFluentValidation();
});

var app = builder.Build();

// One Minimal API that just delegates directly to Wolverine
app.MapPost("/accounts/debit", (DebitAccount command, IMessageBus bus) => bus.InvokeAsync(command));

return await app.RunOaktonCommands(args);

After all of those pieces are put together, let’s finally talk about how Wolverine’s runtime execution is really different. Wolverine’s “special sauce” is that instead of forcing you to wrap your code around the framework, Wolverine conforms to your application code by generating code at runtime (don’t worry, that can also be done ahead of time to minimize cold start time).

For example, here’s the runtime code that’s generated for the DebitAccountHandler.Handle() method:

// <auto-generated/>
#pragma warning disable
using FluentValidation;
using Microsoft.Extensions.Logging;
using Wolverine.FluentValidation;
using Wolverine.Marten.Publishing;

namespace Internal.Generated.WolverineHandlers
{
    // START: DebitAccountHandler1928499868
    public class DebitAccountHandler1928499868 : Wolverine.Runtime.Handlers.MessageHandler
    {
        private readonly FluentValidation.IValidator<AppWithMiddleware.DebitAccount> _validator;
        private readonly Wolverine.Marten.Publishing.OutboxedSessionFactory _outboxedSessionFactory;
        private readonly Wolverine.FluentValidation.IFailureAction<AppWithMiddleware.DebitAccount> _failureAction;
        private readonly Microsoft.Extensions.Logging.ILogger<AppWithMiddleware.DebitAccount> _logger;

        public DebitAccountHandler1928499868(FluentValidation.IValidator<AppWithMiddleware.DebitAccount> validator, Wolverine.Marten.Publishing.OutboxedSessionFactory outboxedSessionFactory, Wolverine.FluentValidation.IFailureAction<AppWithMiddleware.DebitAccount> failureAction, Microsoft.Extensions.Logging.ILogger<AppWithMiddleware.DebitAccount> logger)
        {
            _validator = validator;
            _outboxedSessionFactory = outboxedSessionFactory;
            _failureAction = failureAction;
            _logger = logger;
        }



        public override async System.Threading.Tasks.Task HandleAsync(Wolverine.Runtime.MessageContext context, System.Threading.CancellationToken cancellation)
        {
            await using var documentSession = _outboxedSessionFactory.OpenSession(context);
            var debitAccount = (AppWithMiddleware.DebitAccount)context.Envelope.Message;
            (var handlerContinuation, var account) = await AppWithMiddleware.AccountLookupMiddleware.BeforeAsync((AppWithMiddleware.IAccountCommand)context.Envelope.Message, ((Microsoft.Extensions.Logging.ILogger)_logger), documentSession, cancellation).ConfigureAwait(false);
            if (handlerContinuation == Wolverine.HandlerContinuation.Stop) return;
            Wolverine.FluentValidation.Internals.FluentValidationExecutor.ExecuteOne<AppWithMiddleware.DebitAccount>(_validator, _failureAction, debitAccount);
            AppWithMiddleware.DebitAccountHandler.Handle(debitAccount, account, documentSession);

            // Commit the unit of work
            await documentSession.SaveChangesAsync(cancellation).ConfigureAwait(false);
        }

    }

    // END: DebitAccountHandler1928499868


}

It’s auto-generated code, so it’s admittedly ugly as sin, but there are some things I’d like you to notice or just know:

  • This class is only instantiated one single time in the application and held in memory for the rest of the application instance’s lifecycle
  • That handler code is managing service lifecycle and service disposal, yet there’s no IoC tool anywhere in sight
  • The code only has to resolve the Fluent Validation validator one single time, and access to it is inlined the complete rest of the way
  • The code up above minimizes the number of objects allocated per message being handled compared to other tools that utilize some kind of Russian Doll middleware model by dodging the need for framework adapter objects that are created and destroyed for each message execution. Less garbage collector thrashing from fewer object allocations means better performance and scalability
  • The Fluent Validation middleware doesn’t even apply to message types that don’t have any known validators, so it can optimize itself away. Contrast that with Fluent Validation middleware strategies in something like MediatR, where a decorator object would be created for each request and an empty enumerable looped through even when there are no validators for a given message type. That’s not a lot of overhead per se, but it adds up when there’s a lot of framework cruft.
  • You might have to take my word for it, but having built other frameworks and having spent a long time poring over the internals of other similar frameworks, Wolverine is going to do a lot fewer object allocations, indirections, and dictionary lookups at runtime than other tools with similar capabilities
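
As an aside, because the application above uses Oakton for command line parsing, you can inspect this generated code yourself. The exact command names can shift between versions, but from the project directory it’s roughly the following, where preview dumps the generated source to the console and write saves it to disk for ahead of time compilation:

dotnet run -- codegen preview
dotnet run -- codegen write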

I’m following this up immediately tomorrow by adding some “cascading” messages and diving into the built in testing support within Wolverine.

Wolverine on DotNetRocks

The fine folks at DotNetRocks graciously allowed me to come on and talk about Wolverine and its combination with Marten into the new “Critter Stack” for highly productive server side development in .NET.

Wolverine is a new framework (but based on previous tools dating back over a decade) for server side .NET development that acts as both an in-process mediator tool and a message bus for asynchronous messaging between processes. In an admittedly crowded field, Wolverine stands apart from older tools in several important ways:

  • While Wolverine will happily take care of infrastructure concerns like error handling, logging, distributed tracing with OpenTelemetry, serialization, performance metrics, and interacting directly with message brokers, Wolverine does a much better job of keeping out of your application code
  • Wolverine’s runtime model — including its robust middleware strategy — completely bypasses the performance problems the older tools incur
  • Application code testability is a first class goal with Wolverine, and it shows. Unit testing is certainly easier with Wolverine keeping more infrastructure concerns out of your code, while also adding some unique test automation support for integration testing
  • Developer productivity is enhanced by baking infrastructure setup and configuration directly into Wolverine. And because some of Wolverine’s productivity boost admittedly comes from coding convention magic, Wolverine can tell you exactly what it’s going to do at runtime through its built in diagnostics.

The usage is already going to change next week based on a lot of early user feedback, but you can easily get the gist of what Wolverine is like in usage through the JetBrains webinar on Wolverine from a couple of weeks back.

Alba for Effective ASP.Net Core Integration Testing

Alba is a small library that enables easy integration testing of ASP.Net Core routes completely in process within an NUnit/xUnit.Net/MSTest project. Alba 7.1 just dropped today with .NET 7 support, improved JSON handling for Minimal API endpoints, and multipart form support.

Quickstart with Minimal API

Keeping things almost absurdly simple, let’s say that you have a Minimal API route (taken from the Alba tests) like so:

app.MapPost("/go", (PostedMessage input) => new OutputMessage(input.Id));

Now, over in your testing project, you could write a crude test for the route above like so:

    [Fact]
    public async Task sample_test()
    {
        // This line only matters if you use Oakton for the command line
        // processing
        OaktonEnvironment.AutoStartHost = true;
        
        // I'm doing this inline to make the sample easier to understand,
        // but you'd want to share the AlbaHost between tests because
        // this is expensive
        await using var host = await AlbaHost.For<MinimalApiWithOakton.Program>();
        
        var guid = Guid.NewGuid();
        
        var result = await host.PostJson(new PostedMessage(guid), "/go")
            .Receive<OutputMessage>();

        result.Id.ShouldBe(guid);
    }

A couple notes about the code above:

  • The test is bootstrapping your actual application using its configuration, but using the TestServer in place of Kestrel as the web server.
  • The call to PostJson() is using the application’s JSON serialization configuration, just in case you’ve customized the JSON serialization. Likewise, the call to Receive<T>() is also using the application’s JSON serialization mechanism to be consistent. This functionality was improved in Alba 7 to “know” whether to use MVC Core or Minimal API style JSON serialization (but you can explicitly override that in mixed applications on a case by case basis)
  • When the test executes, it’s running through your entire application’s ASP.Net Core pipeline including any and all registered middleware
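
About that comment above on sharing the AlbaHost between tests: with xUnit.Net the usual pattern is a shared class fixture. The WebAppFixture name below is my own, and this is just a sketch of that standard pattern rather than anything Alba-specific:

public class WebAppFixture : IAsyncLifetime
{
    public IAlbaHost Host { get; private set; }

    public async Task InitializeAsync()
    {
        OaktonEnvironment.AutoStartHost = true;

        // Bootstrap the application one single time for every
        // test class that uses this fixture
        Host = await AlbaHost.For<MinimalApiWithOakton.Program>();
    }

    public async Task DisposeAsync()
    {
        await Host.DisposeAsync();
    }
}

public class sharing_the_alba_host : IClassFixture<WebAppFixture>
{
    private readonly IAlbaHost _host;

    public sharing_the_alba_host(WebAppFixture fixture)
    {
        // Reuse the expensive AlbaHost instead of rebuilding it per test
        _host = fixture.Host;
    }
}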

If you choose to use Alba with >= .NET 6 style application bootstrapping inside of an inferred Program.Main() method, be aware that you will need to grant your test project visibility to the internals of your main project with something like this:

  <ItemGroup>
    <InternalsVisibleTo Include="ProjectName.Tests" />
  </ItemGroup>

How does Alba fit into projects?

I think most people by now are somewhat familiar with the testing pyramid idea (or the testing trophy, or any number of other shapes). Just to review, it’s the idea that a software system is best served by a mix of automated tests: solitary unit tests, intermediate integration tests, and some number of end to end, black box tests.

We can debate what the exact composition of your test pyramid should be on a particular project until the cows come home. For my part, I want more fast running, easier to write tests and fewer potentially nasty Selenium/Playwright/Cypress.io tests that tend toward being slow and brittle. I like Alba in particular because it allows our teams at work to test at the HTTP web service layer through to the database completely in process, meaning the tests can be executed on demand without any kind of deployment. In short, Alba sits in the middle of that pyramid and makes those very valuable kinds of tests easier to write, execute, and debug for the developers working on the system.