Building a Critter Stack Application: The “Stateful Resource” Model

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools viable long-term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to helping you succeed with your software development projects on a consulting basis, whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model (this post)
  19. Resiliency

I’ve personally spent quite a bit of time helping teams and organizations deal with older, legacy codebases where it might easily take a couple days of working painstakingly through the instructions in a large Wiki page of some sort in order to make their codebase work on a local development environment. That’s indicative of a high friction environment, and definitely not what we’d ideally like to have for our own teams.

Thinking about the external dependencies of our incident tracking help desk API, we’ve utilized:

  1. Marten for persistence, which requires its own PostgreSQL database schema objects
  2. Wolverine’s PostgreSQL-backed transactional outbox support, which also requires its own set of PostgreSQL database schema objects
  3. Rabbit MQ for asynchronous messaging, which requires queues, exchanges, and bindings to be set up in our message broker for the application to work

That’s a bit of stuff that needs to be configured within the Rabbit MQ or PostgreSQL infrastructure around our service in order to run our integration tests or the application itself for local testing.

Instead of the error prone, painstaking manual setup laboriously laid out in a Wiki page somewhere that you can never quite find, let’s leverage the Critter Stack’s “Stateful Resource” model to quickly get our system ready to run in development.

Building on our existing application configuration, I’m going to add a couple more lines of code to our system’s Program file:

// Depending on your DevOps setup and policies,
// you may or may not actually want this enabled
// in production installations, but some folks do
if (builder.Environment.IsDevelopment())
{
    // This will direct our application to set up
    // all known "stateful resources" at application bootstrapping
    // time
    builder.Services.AddResourceSetupOnStartup();
}

And that’s that. If you’re using the integration test harness like we did in an earlier post, or just starting up the application normally, the application will check for the existence of the following resources and try to build out anything that’s missing:

  • The known Marten document tables and all the database objects to support Marten’s event sourcing
  • The necessary tables and functions for Wolverine’s transactional inbox, outbox, and scheduled message support (I’ll add a post later on those)
  • The known Rabbit MQ exchanges, queues, and bindings

Your application will have to have administrative privileges over all the resources for any of this to work of course, but you would have that at development time at least.
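For context on where those Rabbit MQ objects come from: Wolverine only knows about the exchanges, queues, and bindings that are declared in its own configuration. Here’s a minimal sketch of that kind of opt-in (the host name, exchange name, and publishing rule below are placeholders for illustration, not the help desk application’s actual setup):

builder.Host.UseWolverine(opts =>
{
    // Connect to the local Rabbit MQ broker, and let Wolverine
    // declare any missing exchanges, queues, and bindings itself
    opts.UseRabbitMq(rabbit => rabbit.HostName = "localhost")
        .AutoProvision();

    // A publishing rule like this is what tells Wolverine that an
    // exchange named "incidents" needs to exist in the broker
    opts.PublishMessage<IncidentLogged>().ToRabbitExchange("incidents");
});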

With this capability in place, the procedure for a new developer getting started with our codebase is to:

  1. Do a clean git clone of our codebase onto their local box
  2. Run docker compose up to start up all the necessary infrastructure they need to run the system or the system’s integration tests locally
  3. Run the integration tests or start the system and go!

Easy-peasy.

But wait, there’s more! Assuming you have Oakton set up as your command line runner like we did in an earlier post, you’ve got some command line tooling that can help as well.

If you omit the call to builder.Services.AddResourceSetupOnStartup();, you could still go to the command line and use this command just once to set everything up:

dotnet run -- resources setup

To check on the status of any or all of the resources, you can use:

dotnet run -- resources check

which, for the HelpDesk.Api, gives you this:

If you want to tear down all the existing data — and at least attempt to purge any Rabbit MQ queues of all messages — you can use:

dotnet run -- resources clear

There are a few other options you can read about in the Oakton documentation for the Stateful Resource model, but for right now, type dotnet run -- help resources and you can see Oakton’s built in help for the resources command that runs down the supported usage:

Summary and What’s Next

The Critter Stack is trying really hard to create a productive, low friction development ecosystem for your projects. One of the ways it tries to make that happen is by being able to set up infrastructural dependencies automatically at runtime so a developer can just “clone n’ go” without the excruciating pain of the multi-page Wiki getting started instructions that are so common in legacy codebases.

This stateful resource model is also supported for the Kafka transport (which is also local development friendly) and the cloud native Azure Service Bus and AWS SQS transports (Wolverine + AWS SQS does work with LocalStack just fine). In the cloud native cases, the credentials used by the Wolverine application will have to have the necessary rights to create queues, topics, and subscriptions. There is also an option to prefix all the names of the queues, topics, and subscriptions so you can still create an isolated environment per developer for a better local development story even when relying on cloud native technologies.

I think I’ll add another post to this series where I switch the messaging to one of the cloud native approaches.

As for what’s next in this increasingly long series, I think we still have logging, open telemetry and metrics, resiliency, and maybe a post on Wolverine’s middleware support. That list is somewhat driven by recency bias around questions I’ve been asked here or there about Wolverine.

Building a Critter Stack Application: Integration Testing Harness

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools viable long-term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change.

The older parts of the JasperFx / Critter Stack projects are named after itty bitty small towns in SW Missouri, including Alba.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness (this post)
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

Before I go on with anything else in this series, I think we should establish some automated testing infrastructure for our incident tracking help desk service. While we’re absolutely going to talk about how to structure code with Wolverine to make isolated unit testing as easy as possible for our domain logic, there are some elements of your system’s behavior that are best tested with automated integration tests that use the system’s infrastructure.

In this post I’m going to show you how I like to set up an integration testing harness for a “Critter Stack” service. I’m going to use xUnit.Net in this post, and while the mechanics would be a little different, I think the basic concepts should be easily transferable to other testing libraries like NUnit or MSTest. I’m also going to bring in the Alba library that we’ll use for testing HTTP calls through our system in memory, but in this first step, all you need to understand is that Alba is helping to set up the system under test in our testing harness.

Heads up a little bit, I’m skipping to the “finished” state of the help desk API code in this post, so there’s some Marten and Wolverine concepts sneaking in that haven’t been introduced until now.

First, let’s start our new testing project with:

dotnet new xunit

Then add some additional Nuget references:

dotnet add package Shouldly
dotnet add package Alba

That gives us a skeleton of the testing project. Before going on, we need to add a project reference from our new testing project to the entry point project of our help desk API. As we are worried about integration testing right now, we want the testing project to be able to start up the system under test project by calling the normal Program.Main() entry point so that we’re running the application the way the system is normally configured — give or take a few overrides.
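If you’re adding that project reference from the command line, it’s the usual dotnet CLI call (the relative path below is an assumption about the repository layout):

dotnet add reference ../Helpdesk.Api/Helpdesk.Api.csproj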

Let’s stop and talk about this a little bit because I think this is an important point. I think integration tests are more “valid” (i.e. less prone to false positives or false negatives) as they more closely reflect the actual system. I don’t want completely separate bootstrapping for the test harness that may or may not reflect the application’s production bootstrapping (don’t blow that point off, I’ve seen countless teams do partial IoC configuration for testing that can vary quite a bit from the application’s configuration).

So if you’ll accept my argument that we should be bootstrapping the system under test with its own Program.Main() entry point, our next step is to add this code to the main service to enable the test project to access that entry point:

using System.Runtime.CompilerServices;

// You have to do this in order to reference the Program
// entry point in the test harness
[assembly:InternalsVisibleTo("Helpdesk.Api.Tests")]
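As an aside, if you’d rather avoid InternalsVisibleTo, the other common ASP.NET Core approach is to add a public partial class declaration for the implicit Program type at the bottom of the Program file. Just a sketch of that alternative, not necessarily what the sample repository does:

// Makes the top-level-statements Program class visible to the test project
public partial class Program { }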

Switching finally to our testing project, I like to create a class I usually call AppFixture that manages the lifetime of the system under test running in our test project like so:

public class AppFixture : IAsyncLifetime
{
    public IAlbaHost Host { get; private set; }

    // This is a one time initialization of the
    // system under test before the first usage
    public async Task InitializeAsync()
    {
        // Sorry folks, but this is absolutely necessary if you 
        // use Oakton for command line processing and want to 
        // use WebApplicationFactory and/or Alba for integration testing
        OaktonEnvironment.AutoStartHost = true;

        // This is bootstrapping the actual application using
        // its implied Program.Main() set up
        // This is using a library named "Alba". See https://jasperfx.github.io/alba for more information
        Host = await AlbaHost.For<Program>(x =>
        {
            x.ConfigureServices(services =>
            {
                // We'll be using Rabbit MQ messaging later...
                services.DisableAllExternalWolverineTransports();
                
                // We're going to establish some baseline data
                // for testing
                services.InitializeMartenWith<BaselineData>();
            });
        }, new AuthenticationStub());
    }

    public Task DisposeAsync()
    {
        if (Host != null)
        {
            return Host.DisposeAsync().AsTask();
        }

        return Task.CompletedTask;
    }
}

A few notes about the code above:

  • Alba is using the WebApplicationFactory under the covers to bootstrap our help desk API service using the in memory TestServer in place of Kestrel. WebApplicationFactory does allow us to modify the IoC service registrations for our system and override parts of the system’s normal configuration
  • In this case, I’m telling Wolverine to effectively stub out all external transports. In later posts we’ll use Rabbit MQ, for example, to publish messages to an external process, but in this test harness we’re going to turn that off and simply have Wolverine “catch” the outgoing messages in our tests. See Wolverine’s test automation support documentation for more information about this.
  • More on this later, but Marten has a built in facility to establish baseline data sets that can be used in test automation to effectively rewind the database to an initial state with one command
  • The DisposeAsync() method is very important. If you want to make your integration tests be repeatable and run smoothly as you iterate, you need the tests to clean up after themselves and not leave locks on resources like ports or files that could stop the next test run from functioning correctly
  • Pay attention to the `OaktonEnvironment.AutoStartHost = true;` call; that’s 100% necessary if your application is using Oakton for command parsing. Sorry.
  • As will inevitably be necessary, I’m using Alba’s facility for stubbing out web authentication, which lets us sidestep pesky authentication infrastructure in functional testing while still happily passing along user claims as test inputs in individual tests
  • Bootstrapping the IHost for your application can be expensive, so I prefer to share that host across tests whenever possible, and I generally rely on having individual tests establish their inputs at the beginning of each test. See the xUnit.Net documentation on sharing fixtures between tests for more context about the xUnit mechanics.

For the Marten baseline data, right now I’m just making sure there’s at least one valid Customer document that we’ll need later:

public class BaselineData : IInitialData
{
    public static Guid Customer1Id { get; } = Guid.NewGuid();
    
    public async Task Populate(IDocumentStore store, CancellationToken cancellation)
    {
        await using var session = store.LightweightSession();
        session.Store(new Customer
        {
            Id = Customer1Id,
            Region = "West Cost",
            Duration = new ContractDuration(DateOnly.FromDateTime(DateTime.Today.Subtract(100.Days())), DateOnly.FromDateTime(DateTime.Today.Add(100.Days())))
        });

        await session.SaveChangesAsync(cancellation);
    }
}

To simplify the usage a little bit, I like to have a base class for integration tests that I call IntegrationContext:

[Collection("integration")]
public abstract class IntegrationContext : IAsyncLifetime
{
    private readonly AppFixture _fixture;

    protected IntegrationContext(AppFixture fixture)
    {
        _fixture = fixture;
    }
    
    // more....

    public IAlbaHost Host => _fixture.Host;

    public IDocumentStore Store => _fixture.Host.Services.GetRequiredService<IDocumentStore>();

    async Task IAsyncLifetime.InitializeAsync()
    {
        // Using Marten, wipe out all data and reset the state
        // back to exactly what we described in BaselineData
        await Store.Advanced.ResetAllData();
    }

    // This is required because of the IAsyncLifetime 
    // interface. Note that I do *not* tear down database
    // state after the test. That's purposeful
    public Task DisposeAsync()
    {
        return Task.CompletedTask;
    }

    // This is just delegating to Alba to run HTTP requests
    // end to end
    public async Task<IScenarioResult> Scenario(Action<Scenario> configure)
    {
        return await Host.Scenario(configure);
    }

    // This method allows us to make HTTP calls into our system
    // in memory with Alba, but do so within Wolverine's test support
    // for message tracking to both record outgoing messages and to ensure
    // that any cascaded work spawned by the initial command is completed
    // before passing control back to the calling test
    protected async Task<(ITrackedSession, IScenarioResult)> TrackedHttpCall(Action<Scenario> configuration)
    {
        IScenarioResult result = null;

        // The outer part is tying into Wolverine's test support
        // to "wait" for all detected message activity to complete
        var tracked = await Host.ExecuteAndWaitAsync(async () =>
        {
            // The inner part here is actually making an HTTP request
            // to the system under test with Alba
            result = await Host.Scenario(configuration);
        });

        return (tracked, result);
    }
}
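One wrinkle that’s easy to miss: for the [Collection("integration")] attribute above to actually share a single AppFixture across test classes, xUnit.Net also needs a collection definition class somewhere in the test project, like this:

[CollectionDefinition("integration")]
public class IntegrationCollection : ICollectionFixture<AppFixture>
{
}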

The first thing I want to draw your attention to is the call to await Store.Advanced.ResetAllData(); in the InitializeAsync() method that will be called before each of our integration tests executes. In my approach, I strongly prefer to reset the state of the database before each test in order to start from a known system state. I’m also assuming that each test will add any additional state it needs to the system’s Marten database. Philosophically, this is what I’ve long called “Self-Contained Tests.” I also think it’s important to have the tests leave the database state alone after a test run so that if you are running tests one at a time you can use the leftover database state to help troubleshoot why a test might have failed.

Other folks will try to spin up a separate database (maybe with TestContainers) per test or even a completely separate IHost per test, but I think the cost of doing it that way is just too high. I’d rather reset the system between tests and not incur the cost of recycling database containers and/or the system’s IHost. This comes at the cost of forcing your test suite to run tests in serial order, but I also think that xUnit.Net is not the best possible tool at parallel test runs, so I’m not sure you lose out on anything there.
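If you want to be explicit about serial execution across the whole test assembly, xUnit.Net lets you turn off parallelization with an assembly-level attribute. A small sketch (the collection fixture above already serializes the tests that share AppFixture, so this is belt and suspenders):

// Typically placed in something like AssemblyInfo.cs in the test project
[assembly: Xunit.CollectionBehavior(DisableTestParallelization = true)]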

And now for an actual test. We have an HTTP endpoint in our system we built early on that can process a LogIncident command, and create a new event stream for this new Incident with a single IncidentLogged event. I’ve skipped ahead a little bit and added a requirement that we capture a user id from an expected Claim on the ClaimsPrincipal for the current request that you’ll see reflected in the test below:

public class log_incident : IntegrationContext
{
    public log_incident(AppFixture fixture) : base(fixture)
    {
    }

    [Fact]
    public async Task create_a_new_incident()
    {
        // We'll need a user
        var user = new User(Guid.NewGuid());
        
        // Log a new incident by calling the HTTP
        // endpoint in our system
        var initial = await Scenario(x =>
        {
            var contact = new Contact(ContactChannel.Email);
            x.Post.Json(new LogIncident(BaselineData.Customer1Id, contact, "It's broken")).ToUrl("/api/incidents");
            x.StatusCodeShouldBe(201);
            
            x.WithClaim(new Claim("user-id", user.Id.ToString()));
        });

        var incidentId = initial.ReadAsJson<NewIncidentResponse>().IncidentId;

        using var session = Store.LightweightSession();
        var events = await session.Events.FetchStreamAsync(incidentId);
        var logged = events.First().Data.ShouldBeOfType<IncidentLogged>();

        // This deserves more assertions, but you get the point...
        logged.CustomerId.ShouldBe(BaselineData.Customer1Id);
    }
}

Summary and What’s Next

The “Critter Stack” core team and our community care very deeply about effective testing, so we’ve invested from the very beginning in making integration testing as easy as possible with both Marten and Wolverine.

Alba is another little library from the JasperFx family that just makes it easier to write integration tests at the HTTP layer, and it’s perfect for integration testing of your web services. I definitely find it advantageous to be able to quickly bootstrap a web service project and run tests completely in memory on demand. That’s a much easier and quicker feedback cycle than trying to deploy the service and write tests that remotely interact with the web service through HTTP. And I shouldn’t even have to mention how absurdly slow it is by comparison to test the same web service functionality through the actual user interface with something like Selenium.

From the Marten side of things, PostgreSQL has a pretty small Docker image size, so it’s pretty painless to spin up on development boxes. Especially contrasted with situations where development teams share a centralized development database (shudder, hope not many folks still do that), having an isolated database for each developer that they can also tear down and rebuild at will certainly helps make it a lot easier to succeed with automated integration testing.

I think that document databases in general are a lot easier to deal with in automated testing than using a relational database with an ORM as the persistence tooling, as there’s much less friction in setting up database schemas or tearing down database state. Marten goes a step further than most persistence tools by having built in APIs to tear down database state or reset to baseline data sets in between tests.
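To make that last point concrete, here’s a rough sketch of the kind of clean-up calls I mean. ResetAllData() appeared in the test harness earlier; the Advanced.Clean methods are Marten’s database cleaner, so double check the exact method names against the Marten version you’re using:

// "store" is the application's IDocumentStore
// Wipe everything and re-apply any registered IInitialData sets
await store.Advanced.ResetAllData();

// Or be more surgical with Marten's database cleaner
await store.Advanced.Clean.DeleteAllDocumentsAsync();
await store.Advanced.Clean.DeleteAllEventDataAsync();
await store.Advanced.Clean.DeleteDocumentsByTypeAsync(typeof(Customer));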

We’ll dig deeper into Wolverine’s integration testing support later in this series with message handler testing, testing handlers that in turn spawn other messages, and dealing with external messaging in tests.

I think the next post is just going to be a quick survey of “Marten as Document Database” before I get back to Wolverine’s HTTP endpoint model.

Building a Critter Stack Application: Command Line Tools with Oakton

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools viable long-term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to helping you succeed with your software development projects on a consulting basis, whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton (this post)
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

Hey folks, I’m deviating a little bit from the planned order and taking a side trip while we’re finishing up a bug fix release to address some OpenAPI generation hiccups before I go on to Wolverine HTTP endpoints.

Admittedly, Wolverine and to a lesser extent Marten have a bit of a “magic” conventional approach. They also depend on external configuration items, external infrastructural tools like databases or message brokers that require their own configuration, and there’s always the possibility of assembly mismatches from users doing who knows what with their Nuget dependency tree.

To help unwind potential problems with diagnostic tools and to facilitate environment setup, the “Critter Stack” uses the Oakton library to integrate command line utilities right into your application.

Applying Oakton to Your Application

To get started, I’m going right back to the Program entry point of our incident tracking help desk application and adding just a couple lines of code. First, Oakton is a dependency of Wolverine, so there’s no additional dependency to add, but we’ll add a using statement:

using Oakton;

This is optional, but we’ll possibly want the extra diagnostics, so I’ll add this line of code near the top:

// This opts Oakton into trying to discover diagnostics 
// extensions in other assemblies. Various Critter Stack
// libraries expose extra diagnostics, so we want this
builder.Host.ApplyOaktonExtensions();

and finally, I’m going to drop down to the last line of Program and replace the typical app.Run(); code with Oakton’s command line parsing:

// This is important for Wolverine/Marten diagnostics 
// and environment management
return await app.RunOaktonCommands(args);

Do note that it’s important to return the exit code of the command line runner up above. If you choose to use Oakton commands in a build script, returning a non-zero exit code signals the caller that the command failed.

Command Line Mechanics

Next, I’m going to open a command prompt to the root directory of the HelpDesk.Api project, and use this to get a preview of the command line options we now have:

dotnet run -- help

That should render some help text like this:

  Alias           Description                                                                                                             
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  check-env       Execute all environment checks against the application                                                                  
  codegen         Utilities for working with JasperFx.CodeGeneration and JasperFx.RuntimeCompiler                                         
  db-apply        Applies all outstanding changes to the database(s) based on the current configuration                                   
  db-assert       Assert that the existing database(s) matches the current configuration                                                  
  db-dump         Dumps the entire DDL for the configured Marten database                                                                 
  db-patch        Evaluates the current configuration against the database and writes a patch and drop file if there are any differences  
  describe        Writes out a description of your running application to either the console or a file                                    
  help            List all the available commands                                                                                         
  marten-apply    Applies all outstanding changes to the database based on the current configuration                                      
  marten-assert   Assert that the existing database matches the current Marten configuration                                              
  marten-dump     Dumps the entire DDL for the configured Marten database                                                                 
  marten-patch    Evaluates the current configuration against the database and writes a patch and drop file if there are any differences  
  projections     Marten's asynchronous projection and projection rebuilds                                                                
  resources       Check, setup, or teardown stateful resources of this system                                                             
  run             Start and run this .Net application                                                                                     
  storage         Administer the Wolverine message storage                                                                                

So that’s a lot, but let’s just start by explaining the basics of the command line for .NET applications. You can pass arguments and flags both to the dotnet application itself and to the application’s Program.Main(params string[] args) entry point. The key thing to know is that dotnet arguments and flags are segregated from the application’s arguments and flags by a double dash “--” separator. So for example, the command dotnet run --framework net8.0 -- codegen write is sending the framework flag to dotnet run, and the codegen write arguments to the application itself.

Stateful Resource Setup

Skipping a little bit to the end state of our help desk API project, we’ll have dependencies on:

  • Marten schema objects in the PostgreSQL database
  • Wolverine schema objects in the PostgreSQL database (for the transactional inbox/outbox we’ll introduce later in this series)
  • Rabbit MQ exchanges for Wolverine to broadcast to later

One of the guiding philosophies of the Critter Stack is to minimize the “Time to Login Screen” (hat tip to Chad Myers) quality of your codebase. What this means is that we really want a new developer to our system (or a developer coming back after a long, well deserved vacation) to do a clean clone of our codebase, and very quickly be able to run the application and any integration tests end to end. To that end, Oakton exposes its “Stateful Resource” model as an adapter for tools like Marten and Wolverine to set up their resources to match their configuration.

Pretend just for a minute that you have all the necessary rights and permissions to configure database schemas and Rabbit MQ exchanges, queues, and bindings on whatever your Rabbit MQ broker is for development. Assuming that, you can have your copy of the help desk API completely up and ready to run through these steps at the command prompt starting at wherever you want the code to be:

git clone https://github.com/JasperFx/CritterStackHelpDesk.git
cd CritterStackHelpDesk
docker compose up -d
cd HelpDesk.Api
dotnet run -- resources setup

At the end of those calls, you should see this output:

The dotnet run -- resources setup command is able to do Marten database migrations for its event store storage and any document types it knows about upfront, build the Wolverine envelope storage tables we’ll configure later, and create the known Rabbit MQ exchange we’ll configure for broadcasting integration events later.

The resources command has other options as shown below from dotnet run -- help resources:

You may need to pause a little bit between the call to docker compose and dotnet run to let Docker catch up!

Environment Checks

Years ago I worked on an early .NET system that still had a lot of COM dependencies that needed to be correctly registered outside of our application and used a shared database that was indifferently maintained, as was common way back then. Needless to say, our deployments were chaotic as we never knew what shape the server was in when we deployed. We finally beat our deployment woes by adding “environment tests” to our deployment scripts that would test out the environment dependencies (is the COM server there? can we connect to the database? is the expected XML file there?) and fail fast with descriptive messages when the server was in a crap state as we tried to deploy.

To that end, Oakton has its environment check model that both Marten and Wolverine utilize. In our help desk application, we already have a Marten dependency, so we know the application will not function correctly if the database is unavailable, or the connection string in the configuration just happens to be wrong, or there’s a security setup issue... you get the picture.

So, picking up our application with every bit of infrastructure purposely turned off, I’ll run this command:

dotnet run -- check-env

and the result is a huge blob of exception text and the command will fail — allowing you to abort a build script that might be delegating to this command:

Next, I’m going to turn on all the infrastructure (and set up everything to match our application’s configuration with the second command) with a quick call to:

docker compose up -d
dotnet run -- resources setup

Now, I can run the environment checks again and get a green bill of health for our system:

Oakton’s environment check model predates the new .NET IHealthCheck model. Oakton will also support that model soon, and you can track that work here.
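You can also register your own application-specific checks alongside the ones Marten and Wolverine add. A rough sketch, assuming Oakton’s CheckEnvironment() registration helper (check the Oakton environment check documentation for the exact overloads, and note that the directory path here is purely illustrative):

// In Program.cs, alongside the other service registrations
builder.Services.CheckEnvironment("Required import directory exists", services =>
{
    // Substitute whatever external dependency your system actually needs
    if (!Directory.Exists("/data/imports"))
    {
        throw new Exception("The /data/imports directory is missing");
    }
});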

“Describe” Our System

Oakton’s describe command can give you some insights into your application, and tools like Marten or Wolverine can expose extensions to this model for further output. By typing this command at the project root:

dotnet run -- describe

We’ll get some basic information about our system like this preview of the configuration:

It also lists the loaded assemblies, because you will occasionally get burned by unexpected Nuget behavior pulling in the wrong versions:

And sigh, because folks have frequently had some trouble understanding how Wolverine does its automatic handler discovery, we have this preview:

And quite a bit more information including:

  • Wolverine messaging endpoints
  • Wolverine’s local queues
  • Wolverine message routing
  • Wolverine exception handling policy configuration

Summary and What’s Next

Oakton is yet another command line parsing tool in .NET, of which there are at least dozens that are perfectly competent. What makes Oakton special though is its ability to add command line tools directly to the entry point of your application where you already have all your infrastructure configuration available. The main point I hope you take away from this is that the command line tooling in the “Critter Stack” can help your team develop faster through the diagnostics and environment management features.

The “Critter Stack” is heavily utilizing Oakton’s extensibility model for:

  1. The static description of the application configuration that may frequently be helpful for troubleshooting or just understanding your system
  2. Stateful resource management of development dependencies like databases and message brokers. So far this is supported for Marten, both PostgreSQL and Sql Server dependencies of Wolverine, Rabbit MQ, Kafka, Azure Service Bus, and AWS SQS
  3. Environment checks to test out the validity of your system and its ability to connect to external resources during deployment or during development
  4. Any other utility you care to add to your system, like resetting a baseline database state or adding users, through Oakton’s command extensibility (see the sketch just below)
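Here’s a rough sketch of what that last extensibility point can look like: a custom command that reuses the application’s own bootstrapping. The command itself is hypothetical, and I’m assuming Oakton’s OaktonAsyncCommand and NetCoreInput types here, so check the Oakton command documentation for the exact base types:

[Description("Reset the development database back to its baseline data")]
public class ReseedCommand : OaktonAsyncCommand<NetCoreInput>
{
    public override async Task<bool> Execute(NetCoreInput input)
    {
        // Spin up the real application host to get at its services
        using var host = input.BuildHost();
        var store = host.Services.GetRequiredService<IDocumentStore>();

        await store.Advanced.ResetAllData();
        return true;
    }
}

With something like that in place, dotnet run -- reseed would show up right alongside the built in commands in the help output.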

As for what’s next, I’ll have to see when some bug fix releases get in place before I promise exactly what’s going to be next in this series. I expect this series to go to at least 15-20 entries as I introduce more Wolverine scenarios, messaging, and quite a bit about automated testing. And also, I take requests!

If you’re curious, the JasperFx GitHub organization was originally conceived of as the reboot of the previous FubuMVC ecosystem, with the main project being “Jasper” and the smaller ancillary tools, ripped out of the flotsam and jetsam of StructureMap and FubuMVC, arranged around what was then called “Jasper,” which was named for my hometown. The smaller tools like Oakton, Alba, and Lamar are named after other small towns close to the titular Jasper, MO. As Marten took off and became far and away the most important tool in our stable, we adopted the “Critter Stack” naming theme as we pulled out Weasel into its own library and completely rebooted and renamed “Jasper” as Wolverine to be a natural complement to Marten.

And lastly, I’m not sure that Oakton, MO will even show up on maps because it’s effectively a Methodist Church, a cemetery, the ruins of the general store, and a couple of farm houses at a crossroads. In Missouri at least, towns cease to exist when they lose their post office. The area I grew up in is littered with former towns that fizzled out as the farm economy changed and folks moved to bigger towns.

Integration Testing an HTTP Service that Publishes a Wolverine Message

As long term Agile practitioners, the folks behind the whole JasperFx / “Critter Stack” ecosystem explicitly design our tools around the quality of “testability.” Case in point, Wolverine has quite a few integration test helpers for testing through message handler execution.

However, a Wolverine user I was helping last week told me that they were bypassing those built in tools because they wanted to do an integration test of an HTTP service call that publishes a message to Wolverine. That’s certainly going to be a common scenario, so let’s talk about a strategy for reliably writing integration tests that both invoke an HTTP request and can observe the ongoing Wolverine activity to “know” when the “act” part of a typical “arrange, act, assert” test is complete.

In the Wolverine codebase itself, there are a couple of projects that we use to test the Wolverine.Http library:

  1. WolverineWebApi — a web api project with a lot of fake endpoints that try to cover the whole gamut of usage scenarios for Wolverine.Http, including a couple use cases of publishing messages directly from HTTP endpoint handlers to asynchronous message handling inside of Wolverine core
  2. Wolverine.Http.Tests — an xUnit.Net project that contains a mix of unit tests and integration tests through WolverineWebApi and Wolverine.Http itself

Back to the need to write integration tests that span work from HTTP service invocations through to Wolverine message processing, Wolverine.Http uses the Alba library (another JasperFx project!) to execute and run assertions against HTTP services. At least at the moment, xUnit.Net is my go-to test runner library, so Wolverine.Http.Tests has this fixture that is shared between test classes:

public class AppFixture : IAsyncLifetime
{
    public IAlbaHost Host { get; private set; }

    public async Task InitializeAsync()
    {
        // Sorry folks, but this is absolutely necessary if you 
        // use Oakton for command line processing and want to 
        // use WebApplicationFactory and/or Alba for integration testing
        OaktonEnvironment.AutoStartHost = true;

        // This is bootstrapping the actual application using
        // its implied Program.Main() set up
        Host = await AlbaHost.For<Program>(x => { });
    }

A couple notes on this approach:

  • I think it’s very important to use the actual application bootstrapping for the integration testing rather than trying to have a parallel IoC container configuration for test automation as I frequently see out in the wild. That doesn’t preclude customizing that bootstrapping a little bit to substitute in fake, stand in services for problematic external infrastructure.
  • The approach I’m showing here with xUnit.Net does have the effect of making the tests execute serially, which might not be what you want in very large test suites
  • I think the xUnit.Net shared fixture approach is somewhat confusing and I always have to review the documentation on it when I try to use it

There’s also a shared base class for integrated HTTP tests called IntegrationContext, with a little bit of that shown below:

[CollectionDefinition("integration")]
public class IntegrationCollection : ICollectionFixture<AppFixture>
{
}

[Collection("integration")]
public abstract class IntegrationContext : IAsyncLifetime
{
    private readonly AppFixture _fixture;

    protected IntegrationContext(AppFixture fixture)
    {
        _fixture = fixture;
    }
    
    // more....

More germane to this particular post, here’s a helper method inside of IntegrationContext I wrote specifically to do integration testing that has to span an HTTP request through to asynchronous Wolverine message handling:

    // This method allows us to make HTTP calls into our system
    // in memory with Alba, but do so within Wolverine's test support
    // for message tracking to both record outgoing messages and to ensure
    // that any cascaded work spawned by the initial command is completed
    // before passing control back to the calling test
    protected async Task<(ITrackedSession, IScenarioResult)> TrackedHttpCall(Action<Scenario> configuration)
    {
        IScenarioResult result = null;

        // The outer part is tying into Wolverine's test support
        // to "wait" for all detected message activity to complete
        var tracked = await Host.ExecuteAndWaitAsync(async () =>
        {
            // The inner part here is actually making an HTTP request
            // to the system under test with Alba
            result = await Host.Scenario(configuration);
        });

        return (tracked, result);
    }

Now, for a sample usage of that test helper, here’s a fake endpoint from WolverineWebApi that I used to prove that Wolverine.Http endpoints can publish messages through Wolverine’s cascading message approach:

    // This would have a string response and a 200 status code
    [WolverinePost("/spawn")]
    public static (string, OutgoingMessages) Post(SpawnInput input)
    {
        var messages = new OutgoingMessages
        {
            new HttpMessage1(input.Name),
            new HttpMessage2(input.Name),
            new HttpMessage3(input.Name),
            new HttpMessage4(input.Name)
        };

        return ("got it", messages);
    }

Psst. Notice how the endpoint method’s signature up above is a synchronous pure function, which is cleaner and easier to unit test than the equivalent functionality would be in other .NET frameworks that would require you to call asynchronous methods on some kind of IMessageBus interface.
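Just to drive that point home, a unit test for this endpoint needs nothing but the method itself. A minimal sketch (I’m assuming the endpoint method lives in a static class named SpawnEndpoint, which isn’t shown in the snippet above):

public class spawn_endpoint_tests
{
    [Fact]
    public void returns_text_and_cascades_four_messages()
    {
        // Call the endpoint method directly -- no host, no HTTP, no mocks
        var (text, messages) = SpawnEndpoint.Post(new SpawnInput("Chris Jones"));

        text.ShouldBe("got it");
        messages.Count.ShouldBe(4);
        messages.OfType<HttpMessage1>().Single().Name.ShouldBe("Chris Jones");
    }
}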

To test this thing, I want to run an HTTP POST to the “/spawn” URL in our application, then prove that there were four matching messages published through Wolverine. Here’s the test for that functionality using our earlier TrackedHttpCall() testing helper:

    [Fact]
    public async Task send_cascaded_messages_from_tuple_response()
    {
        // This would fail if the status code != 200 btw
        // This method waits until *all* detectable Wolverine message
        // processing has completed
        var (tracked, result) = await TrackedHttpCall(x =>
        {
            x.Post.Json(new SpawnInput("Chris Jones")).ToUrl("/spawn");
        });

        result.ReadAsText().ShouldBe("got it");

        // "tracked" is a Wolverine ITrackedSession object that lets us interrogate
        // what messages were published, sent, and handled during the testing period
        tracked.Sent.SingleMessage<HttpMessage1>().Name.ShouldBe("Chris Jones");
        tracked.Sent.SingleMessage<HttpMessage2>().Name.ShouldBe("Chris Jones");
        tracked.Sent.SingleMessage<HttpMessage3>().Name.ShouldBe("Chris Jones");
        tracked.Sent.SingleMessage<HttpMessage4>().Name.ShouldBe("Chris Jones");
    }

There you go. In one fell swoop, we’ve got a reliable way to do integration testing against asynchronous behavior in our system that’s triggered by an HTTP service call — including any and all configured ASP.Net Core or Wolverine.Http middleware that’s part of the execution pipeline.

By “reliable” here in regards to integration testing, I want you to think about any reasonably complicated Selenium test suite and how infuriatingly often you get “blinking” tests that are caused by race conditions between some kind of asynchronous behavior and the test harness trying to make test assertions against the browser state. Wolverine’s built in integration test support can eliminate that kind of inconsistent test behavior by removing the race condition as it tracks all ongoing work for completion.

Oh, and here’s Chris Jones sacking Joe Burrow in the AFC Championship game to seal the Chiefs win that was fresh in my mind when I originally wrote that code above.

Command Line Diagnostics in Wolverine

Wolverine 0.9.12 just went up on Nuget with new bug fixes, documentation improvements, improved Rabbit MQ usage of topics, local queue usage, and a lot of new functionality around the command line diagnostics. See the whole release notes here.

In this post, I want to zero in on “command line diagnostics.” Speaking from a mix of concerns about being both a user of Wolverine and one of the people needing to support other people using Wolverine online, here’s a non-exhaustive list of real challenges I’ve already seen or anticipate as Wolverine gets out into the wild more in the near future:

  • How is Wolverine configured? What extensions are found?
  • What middleware is registered, and is it hooked up correctly?
  • How is Wolverine handling a specific message exactly?
  • How is Wolverine HTTP handling an HTTP request for a specific route?
  • Is Wolverine finding all the handlers? Where is it looking?
  • Where is Wolverine trying to send each message?
  • Are we missing any configuration items? Is the database reachable? Is the url for a web service proxy in our application valid?
  • When Wolverine has to interact with databases or message brokers, are those servers configured correctly to run the application?

That’s a big list of potentially scary issues, so let’s run down a list of command line diagnostic tools that come out of the box with Wolverine to help developers be more productive in real world development. First off, Wolverine’s command line support is all through the Oakton library, and you’ll want to enable Oakton command handling directly in your main application through this line of code at the very end of a typical Program file:

// This is an extension method within Oakton
// And it's important to relay the exit code
// from Oakton commands to the command line
// if you want to use these tools in CI or CD
// pipelines to denote success or failure
return await app.RunOaktonCommands(args);

You’ll know Oakton is configured correctly if you go to the command line terminal of your preference at the root of your project and type:

dotnet run -- help

In a simple Wolverine application, you’d get these options out of the box:

The available commands are:
                                                                                                    
  Alias       Description                                                                           
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  check-env   Execute all environment checks against the application                                
  codegen     Utilities for working with JasperFx.CodeGeneration and JasperFx.RuntimeCompiler       
  describe    Writes out a description of your running application to either the console or a file  
  help        List all the available commands                                                       
  resources   Check, setup, or teardown stateful resources of this system                           
  run         Start and run this .Net application                                                   
  storage     Administer the Wolverine message storage                                                       
                                                                                                    

Use dotnet run -- ? [command name] or dotnet run -- help [command name] to see usage help about a specific command

Let me admit that there’s a little bit of “magic” in the way that Wolverine uses naming or type conventions to “know” how to call into your application code. It’s great (in my opinion) that Wolverine doesn’t force you to pollute your code with framework concerns or require you to shape your code around Wolverine’s APIs the way most other .NET frameworks do.

Cool, so let’s move on to…

Describe the Configured Application

The annoying --framework flag is only necessary if your application targets multiple .NET frameworks, but no sane person would ever do that for a real application.

Partially for my own sanity, there’s a lot more support for Wolverine in the describe command. To see this in usage, consider the sample DiagnosticsApp from the Wolverine codebase. If I use the dotnet run --framework net7.0 -- describe command from that project, I get this copious textual output.

Just to summarize, what you’ll see in the command line report is:

  • “Wolverine Options” – the basics properties as configured, including what Wolverine thinks is the application assembly and any registered extensions
  • “Wolverine Listeners” – a tabular list of all the configured listening endpoints, including local queues, within the system and information about how they are configured
  • “Wolverine Message Routing” – a tabular list of all the message routing for known messages published within the system
  • “Wolverine Sending Endpoints” – a tabular list of all known, configured endpoints that send messages externally
  • “Wolverine Error Handling” – a preview of the active message failure policies active within the system
  • “Wolverine Http Endpoints” – shows all Wolverine HTTP endpoints. This is only active if WolverineFx.HTTP is used within the system

The latest Wolverine did add some optional message type discovery functionality, specifically to make this describe command more useful by letting Wolverine know about message types that will be sent at runtime but cannot easily be recognized as such strictly from configuration. That discovery uses a mix of marker interface types and/or attributes:

// These are all published messages that aren't
// obvious to Wolverine from message handler endpoint
// signatures
public record InvoiceShipped(Guid Id) : IEvent;
public record CreateShippingLabel(Guid Id) : ICommand;

[WolverineMessage]
public record AddItem(Guid Id, string ItemName);

Environment Checks

Have you ever made a deployment to production just to find out that a database connection string was wrong? Or the credentials to a message broker were wrong? Or your service wasn’t running under an account that had read access to a file share your application needed to scan? Me too!

Wolverine adds several environment checks so that you can use Oakton’s Environment Check functionality to self-diagnose potential configuration issues with:

dotnet run -- check-env

You could conceivably use this as part of your continuous delivery pipeline to quickly verify an application’s configuration and fail fast & roll back if the checks fail.

How is Wolverine calling my message handlers?!?

Wolverine admittedly involves some “magic” about how it calls into your message handlers, and it’s not unlikely you may be confused about whether or how some kind of registered middleware is working within your system. Or maybe you’re just mildly curious about how Wolverine works at all.

To that end, you can preview — or just generate ahead of time for better “cold starts” — the dynamic source code that Wolverine generates for your message handlers or HTTP handlers with:

dotnet run -- codegen preview

Or just write the code to the file system so you can look at it to your heart’s content with your IDE with:

dotnet run -- codegen write

Which should write the source code files to /Internal/Generated/WolverineHandlers. Here’s an example from the same diagnostics application:

// <auto-generated/>
#pragma warning disable

namespace Internal.Generated.WolverineHandlers
{
    public class CreateInvoiceHandler360502188 : Wolverine.Runtime.Handlers.MessageHandler
    {


        public override System.Threading.Tasks.Task HandleAsync(Wolverine.Runtime.MessageContext context, System.Threading.CancellationToken cancellation)
        {
            var createInvoice = (IntegrationTests.CreateInvoice)context.Envelope.Message;
            var outgoing1 = IntegrationTests.CreateInvoiceHandler.Handle(createInvoice);
            // Outgoing, cascaded message
            return context.EnqueueCascadingAsync(outgoing1);

        }

    }
}
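If you do pre-generate these files with codegen write, you can optionally tell Wolverine to load the pre-built types from your assembly instead of generating code at runtime, which is where the better “cold start” comes from. A sketch of that opt-in (I’m assuming the JasperFx code generation options hanging off of WolverineOptions here, so check the Wolverine documentation for your version):

builder.Host.UseWolverine(opts =>
{
    // Use the code previously written to /Internal/Generated
    // by `dotnet run -- codegen write`
    opts.CodeGeneration.TypeLoadMode = TypeLoadMode.Static;
});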

Database or Message Broker Setup

Your application will require some configuration of external resources if you’re using any mix of Wolverine’s transactional inbox/outbox support (which targets PostgreSQL or Sql Server) or message brokers like Rabbit MQ, Amazon SQS, or Azure Service Bus. Not to worry (too much), Wolverine exposes some command line support for making any necessary configuration in these resources with the Oakton resources command.

In the diagnostics app, we could ensure that our connected PostgreSQL database has all the necessary schema tables and the Rabbit MQ broker has all the necessary queues, exchanges, and bindings that our application needs to function with:

dotnet run -- resources setup

In testing or normal development work, I may also want to reset the state of these resources to delete now obsolete messages in either the database or the Rabbit MQ queues, and we can fortunately do that with:

dotnet run -- resources clear

There are also resource options for:

  • teardown — remove all the database objects or message broker objects that the Wolverine application placed there
  • statistics — glean some information about the number of records or messages in the stateful resources
  • check — do environment checks on the configuration of the stateful resources. This is purely a diagnostic function
  • list — just show you information about the known, stateful resources

Summary

Is any of this wall of textual reports being spit out at the command line sexy? Not in the slightest. Will this functionality help development teams be more productive with Wolverine? Will this functionality help myself and other Wolverine team members support remote users in the future? I’m hopeful that the answer to the first question is “yes” and pretty confident that it’s a “hell, yes” to the second question.

I would also hope that folks see this functionality and agree with my assessment that Wolverine (and Marten) are absolutely appropriate for real life usage and well beyond the toy project phase.

Anyway, more on Wolverine next week starting with an exploration of Wolverine’s local queuing support for asynchronous processing.

Wolverine meets EF Core and Sql Server

Heads up, you will need at least Wolverine 0.9.7 for these samples!

I’ve mostly been writing about Wolverine samples that involve its “critter stack” compatriot Marten as the persistence tooling. I’m obviously deeply invested in making that “critter stack” the highest productivity combination for server side development basically anywhere.

Today though, let’s go meet potential Wolverine users where they actually live and finally talk about how to integrate Entity Framework Core (EF Core) and SQL Server into Wolverine applications.

All of the samples in this post are from the EFCoreSample project in the Wolverine codebase. There’s also some newly published documentation about integrating EF Core with Wolverine now too.

Alright, let’s say that we’re building a simplistic web service to capture information about Item entities (so original) and we’ve decided to use SQL Server as the backing database and use EF Core as our ORM for persistence — and also use Wolverine as an in memory mediator because why not?

I’m going to start by creating a brand new project with the dotnet new webapi template. Next I’m going to add some Nuget references for:

  1. Microsoft.EntityFrameworkCore.SqlServer
  2. WolverineFx.SqlServer
  3. WolverineFx.EntityFrameworkCore

Now, let’s say that I have a simplistic DbContext class to define my EF Core mappings like so:

public class ItemsDbContext : DbContext
{
    public ItemsDbContext(DbContextOptions<ItemsDbContext> options) : base(options)
    {
    }

    public DbSet<Item> Items { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Your normal EF Core mapping
        modelBuilder.Entity<Item>(map =>
        {
            map.ToTable("items");
            map.HasKey(x => x.Id);
            map.Property(x => x.Name);
        });
    }
}
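For completeness, the Item entity referenced by that mapping is nothing more than a plain class along these lines (a sketch inferred from the mapping above; the actual sample may carry more properties):

public class Item
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}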

Now let’s switch to the Program file that holds all our application bootstrapping and configuration:

using ItemService;
using Microsoft.EntityFrameworkCore;
using Oakton;
using Oakton.Resources;
using Wolverine;
using Wolverine.EntityFrameworkCore;
using Wolverine.SqlServer;

var builder = WebApplication.CreateBuilder(args);

// Just the normal work to get the connection string out of
// application configuration
var connectionString = builder.Configuration.GetConnectionString("sqlserver");

// If you're okay with this, this will register the DbContext as normally,
// but make some Wolverine specific optimizations at the same time
builder.Services.AddDbContextWithWolverineIntegration<ItemsDbContext>(
    x => x.UseSqlServer(connectionString));

builder.Host.UseWolverine(opts =>
{
    // Setting up Sql Server-backed message storage
    // This requires a reference to Wolverine.SqlServer
    opts.PersistMessagesWithSqlServer(connectionString);

    // Enrolling all local queues into the
    // durable inbox/outbox processing
    opts.Policies.UseDurableLocalQueues();
});

// This is rebuilding the persistent storage database schema on startup
builder.Host.UseResourceSetupOnStartup();

builder.Services.AddControllers();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

// Make sure the EF Core db is set up
await app.Services.GetRequiredService<ItemsDbContext>().Database.EnsureCreatedAsync();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.MapControllers();

app.MapPost("/items/create", (CreateItemCommand command, IMessageBus bus) => bus.InvokeAsync(command));

// Opt into using Oakton for command parsing
await app.RunOaktonCommands(args);

In the code above, I’ve:

  1. Added a service registration for the new ItemsDbContext EF Core class, but I did so with a special Wolverine wrapper that adds some optimizations for us, quietly adds some mapping to the ItemsDbContext at runtime for the Wolverine message storage, and also enables Wolverine’s transactional middleware and stateful saga support for EF Core.
  2. I added Wolverine to the application, and used the PersistMessagesWithSqlServer() extension method to tell Wolverine to add message storage for SQL Server in the default dbo schema (that can be overridden). This also adds Wolverine’s durable agent for its transactional outbox and inbox running as a background service in an IHostedService.
  3. I directed the application to build out any missing database schema objects on application startup through the call to builder.Host.UseResourceSetupOnStartup(). If you’re curious, this is using Oakton’s stateful resource model.
  4. For the sake of testing this little bugger, I’m having the application build the implied database schema from the ItemsDbContext as well.

Moving on, let’s build a simple message handler that creates a new Item, persists that with EF Core, and raises a new ItemCreated event message:

public static class CreateItemCommandHandler
{
    public static ItemCreated Handle(
        // This would be the message
        CreateItemCommand command,

        // Any other arguments are assumed
        // to be service dependencies
        ItemsDbContext db)
    {
        // Create a new Item entity
        var item = new Item
        {
            Name = command.Name
        };

        // Add the item to the current
        // DbContext unit of work
        db.Items.Add(item);

        // This event being returned
        // by the handler will be automatically sent
        // out as a "cascading" message
        return new ItemCreated
        {
            Id = item.Id
        };
    }
}

Simple enough, but a couple notes about that code:

  • I didn’t explicitly call the SaveChangesAsync() method on our ItemsDbContext to commit the changes, and that’s because Wolverine sees that the handler has a dependency on an EF Core DbContext type, so it automatically wraps its EF Core transactional middleware around the handler
  • The ItemCreated object returned from the message handler is a Wolverine cascaded message, and will be sent out upon successful completion of the original CreateItemCommand message — including the transactional middleware that wraps the handler.
  • And oh, by the way, we want the ItemCreated message to be persisted in the underlying SQL Server database as part of the same transaction being committed. That way Wolverine’s transactional outbox functionality makes sure that message gets processed (eventually) even if the process somehow fails between publishing the new message and that message being successfully completed.

I should also note a potentially significant performance optimization: Wolverine persists the ItemCreated message as part of the same ItemsDbContext.SaveChangesAsync() call, enrolling in EF Core’s ability to batch changes to the database rather than incurring the cost of extra network hops as we would have with raw SQL.

Hopefully that’s all pretty easy to follow, even though there’s some “magic” there. If you’re curious, here’s the actual code that Wolverine is generating to handle the CreateItemCommand message (just remember that auto-generated code tends to be ugly as sin):

// <auto-generated/>
#pragma warning disable
using Microsoft.EntityFrameworkCore;

namespace Internal.Generated.WolverineHandlers
{
    // START: CreateItemCommandHandler1452615242
    public class CreateItemCommandHandler1452615242 : Wolverine.Runtime.Handlers.MessageHandler
    {
        private readonly Microsoft.EntityFrameworkCore.DbContextOptions<ItemService.ItemsDbContext> _dbContextOptions;

        public CreateItemCommandHandler1452615242(Microsoft.EntityFrameworkCore.DbContextOptions<ItemService.ItemsDbContext> dbContextOptions)
        {
            _dbContextOptions = dbContextOptions;
        }



        public override async System.Threading.Tasks.Task HandleAsync(Wolverine.Runtime.MessageContext context, System.Threading.CancellationToken cancellation)
        {
            await using var itemsDbContext = new ItemService.ItemsDbContext(_dbContextOptions);
            var createItemCommand = (ItemService.CreateItemCommand)context.Envelope.Message;
            var outgoing1 = ItemService.CreateItemCommandHandler.Handle(createItemCommand, itemsDbContext);
            // Outgoing, cascaded message
            await context.EnqueueCascadingAsync(outgoing1).ConfigureAwait(false);

        }

    }
}

So that’s EF Core within a Wolverine handler, and using SQL Server as the backing message store. One of the weaknesses of some of the older messaging tools in .NET is that they’ve long lacked a usable outbox feature outside of the context of their message handlers (both NServiceBus and MassTransit have just barely released “real” outbox features), but that’s a frequent need in the applications at my own shop and we’ve had to work around these limitations. Fortunately though, Wolverine’s outbox functionality is usable outside of message handlers.

As an example, let’s implement basically the same functionality we did in the message handler, but this time in an ASP.Net Core Controller method:

    [HttpPost("/items/create2")]
    public async Task Post(
        [FromBody] CreateItemCommand command,
        [FromServices] IDbContextOutbox<ItemsDbContext> outbox)
    {
        // Create a new Item entity
        var item = new Item
        {
            Name = command.Name
        };

        // Add the item to the current
        // DbContext unit of work
        outbox.DbContext.Items.Add(item);

        // Publish a message to take action on the new item
        // in a background thread
        await outbox.PublishAsync(new ItemCreated
        {
            Id = item.Id
        });

        // Commit all changes and flush persisted messages
        // to the persistent outbox
        // in the correct order
        await outbox.SaveChangesAndFlushMessagesAsync();
    }  

In the sample above I’m using the Wolverine IDbContextOutbox<T> service to wrap the ItemsDbContext and automatically enroll the EF Core service in Wolverine’s outbox. This service exposes all the possible ways to publish messages through Wolverine’s normal IMessageBus entrypoint.

Here’s a slightly different possible usage where I directly inject ItemsDbContext, but also a Wolverine IDbContextOutbox service:

    [HttpPost("/items/create3")]
    public async Task Post3(
        [FromBody] CreateItemCommand command,
        [FromServices] ItemsDbContext dbContext,
        [FromServices] IDbContextOutbox outbox)
    {
        // Create a new Item entity
        var item = new Item
        {
            Name = command.Name
        };

        // Add the item to the current
        // DbContext unit of work
        dbContext.Items.Add(item);

        // Gotta attach the DbContext to the outbox
        // BEFORE sending any messages
        outbox.Enroll(dbContext);
        
        // Publish a message to take action on the new item
        // in a background thread
        await outbox.PublishAsync(new ItemCreated
        {
            Id = item.Id
        });

        // Commit all changes and flush persisted messages
        // to the persistent outbox
        // in the correct order
        await outbox.SaveChangesAndFlushMessagesAsync();
    }   

That’s about all there is, but to sum it up:

  • Wolverine is able to use SQL Server as its persistent message store for durable messaging
  • There’s a ton of functionality around managing the database schema for you so you can focus on just getting stuff done
  • Wolverine has transactional middleware that can be applied automatically around your handlers as a way to simplify your message handlers while also getting the durable outbox messaging
  • EF Core is absolutely something that’s supported by Wolverine

My OSS Plans for 2023

Before I start, I am lucky to be part of a great group of OSS collaborators across the board. In particular, thanks to Oskar, Babu, Khalid, Hawxy, and Eric Smith for helping make 2022 a hugely productive and satisfying year in OSS work for me. I’m looking forward to working with y’all more in the times ahead.

In recent years I’ve kicked off my side project work with an overly optimistic and hopelessly unrealistic list of ambitions for my OSS projects. You can find the 2022 and 2021 versions still hanging around, only somewhat fulfilled. I’m going to put down my markers for what I hope to accomplish in 2023 — and because I’m the kind of person who obsesses more about the list of things to do rather than looking back at accomplishments, I’ll take some time to review what was done in many of these projects in 2022. Onward.

Marten is going gangbusters, and 2022 was a very encouraging year for the Marten core team & me. The sizable V5.0 release dropped in March with some significant usability improvements, multi-tenancy with database per tenant(s) support, and other goodness specifically to deal with apparent flaws in the gigantic V4.0 release from late 2021.

For 2023, the V6 release will come soon, mostly with changes to underlying dependencies.

Beyond that, I think that V7 will be a massively ambitious release in terms of important new features — hopefully in time for Event Sourcing Live 2023. If I had a magic wand that would magically give us all enough bandwidth to pull it off, my big hopes for Marten V7 are:

  • The capability to massively scale the Event Store functionality in Marten to much, much larger systems
  • Improved throughput and capacity with asynchronous projections
  • A formal, in the box subscription model
  • The ability to shard document database entities
  • Dive into the Linq support again, but this time use Postgresql V15 specific functionality to make the generated queries more efficient — especially for any possible query that goes through child collections. I haven’t done the slightest bit of detailed analysis on that one yet though
  • The ability to rebuild projections with zero downtime and/or faster projection rebuilds

Marten will also be impacted by the work being done with…

After a couple years of having almost given up on it, I restarted work pretty heavily on what had been called Jasper. While building a sample application for a conference talk, Oskar & I realized there was some serious opportunity for combining Marten and the then-Jasper for very low ceremony CQRS architectures. Now, what’s the best way to revitalize an OSS project that was otherwise languishing and basically a failure in terms of adoption? You guessed it, rename the project with an obvious theme related to an already successful OSS project and get some new, spiffier graphics and better website! And basically all new internals, new features, quite a few performance improvements, better instrumentation capabilities, more robust error handling, and a unique runtime model that I very sincerely believe will lead to better developer productivity and better application performance than existing tools in the .NET space.

Hence, Wolverine is the new, improved message bus and local mediator (I like to call that a “command bus” so as to not suffer the obvious comparisons to MediatR which I feel shortchanges Wolverine’s much greater ambitions). Right now I’m very happy with the early feedback from Wolverine’s JetBrains webinar (careful, the API changed a bit since then) and its DotNetRocks episode.

Right now the goal is to make it to 1.0 by the end of January — with the proviso that Marten V6 has to go first. The remaining work is mostly to finish the documentation website and a handful of tactical feature items mostly to prove out some of the core abstractions before minting 1.0.

Luckily for me, a small group of us at work have started a proof of concept for rebuilding/converting/migrating a very large system currently using NHibernate, Sql Server, and NServiceBus to Wolverine + Marten. That’s going to be an absolutely invaluable learning experience that will undoubtedly shape the short term work in both tools.

Beyond 1.0, I’m hoping to effectively use Wolverine to level up on a lot of technologies by adding:

  • Some other transport options (Kafka? Kinesis? EventBridge?)
  • Additional persistence options with Cosmos Db and Dynamo Db being the likely candidates so far
  • A SignalR transport
  • First class serverless support using Wolverine’s runtime model, with some way of optimizing the cold start
  • An option to use Wolverine’s runtime model for ASP.Net Core API endpoints. I think there’s some opportunity to allow for a low ceremony, high performance alternative for HTTP API creation while still being completely within the ASP.Net Core ecosystem

I hope that Wolverine is successful by itself, but the real goal of Wolverine is to allow folks to combine it with Marten to form the….

“Critter Stack”

The hope with Marten + Wolverine is to create a very effective platform for server-side .NET development in general. More specifically, the goal of the “critter stack” combination is to become the acknowledged industry leader for building systems with a CQRS plus Event Sourcing architectural model. And I mean across all development platforms and programming languages.

Pride goeth before destruction, and an haughty spirit before a fall.

Proverbs 16:18 KJV

And let me just more humbly say that there’s a ways to go to get there, but I’m feeling optimistic right now and want to set our sights pretty high. I especially feel good about having unintentionally made a huge career bet on Postgresql.

Lamar recently got its 10.0 release to add first class .NET 7.0 support (while also dropping anything < .NET 6) and a couple performance improvements and bug fixes. There hasn’t been any new functionality added in the last year except for finally getting first class support for IAsyncDisposable. It’s unlikely that there will be much development in the new year for Lamar, but we use it at work, I still think it has advantages over the built in DI container from .NET, and it’s vital for Wolverine. Lamar is here to stay.

Alba

Alba 7.0 (and a couple minor releases afterward) added first class .NET 7 support, much better support for testing Minimal API routes that accept and/or return JSON, and other tactical fixes (mostly by Hawxy).

See Alba for Effective ASP.Net Core Integration Testing for more information on how Alba improved this year.

I don’t have any specific plans for Alba this year, but I use Alba to test pieces of Marten and Wolverine and we use it at work. If I manage to get my way, we’ll be converting as many slow, unreliable Selenium based tests to fast running Alba tests against HTTP endpoints in 2023 at work. Alba is here to stay.


Oakton had a significant new feature set around the idea of “stateful resources” added in 2022, specifically meant for supporting both Marten and Wolverine. We also cleaned up the documentation website. The latest version 6.0 brought Oakton up to .NET 7 while also using shared dependencies with the greater JasperFx family (Marten, Wolverine, Lamar, etc.). I don’t exactly remember when, but it also got better “help” presentation by leveraging Spectre.Console more.

I don’t have any specific plans for Oakton, but it’s the primary command line parser and command line utility library for Marten, Wolverine, and Lamar, so it’s going to be actively maintained.

And finally, I’ve registered my own company called “Jasper Fx Software.” It’s going much slower than I’d hoped, but at some point early in 2023 I’ll have my shingle out to provide support contracts, consulting, and custom development with the tools above. It’s just a side hustle for now, but we’ll see if that can become something viable over time.

To be clear about this, the Marten core team & I are very serious about building a paid, add-on model to Marten + Wolverine and some of the new features I described up above are likely to fall under that umbrella. I’m sneaking that in at the end of this, but that’s probably the main ambition for me personally in the new year.

What about?…

If it’s not addressed in this post, it’s either dead (StructureMap) or something I consider just to be a supporting player (Weasel). Storyteller, alas, is likely not coming back, unless it returns as something renamed to “Bobcat,” a tool specifically designed to help automate tests for Marten or Wolverine where xUnit.Net by itself doesn’t do so hot. And if Bobcat does end up existing, it’ll leverage existing tools as much as possible.

Command Line Support for Marten Projections

Marten 5.7 was published earlier this week with mostly bug fixes. The one big new piece of functionality was an improved version of the command line support for event store projections. Specifically, Marten added support for multi-tenancy through multiple databases and the ability to use separate document stores in one application as part of our V5 release earlier this year, but the projections command didn’t catch up and support that until now, with Marten v5.7.0.

From a sample project in Marten we use to test this functionality, here’s part of the Marten setup that has a mix of asynchronous and inline projections and uses the database per tenant strategy:

services.AddMarten(opts =>
{
    opts.AutoCreateSchemaObjects = AutoCreate.All;
    opts.DatabaseSchemaName = "cli";

    // Note this app uses multiple databases for multi-tenancy
    opts.MultiTenantedWithSingleServer(ConnectionSource.ConnectionString)
        .WithTenants("tenant1", "tenant2", "tenant3");

    // Register all event store projections ahead of time
    opts.Projections
        .Add(new TripAggregationWithCustomName(), ProjectionLifecycle.Async);
    
    opts.Projections
        .Add(new DayProjection(), ProjectionLifecycle.Async);
    
    opts.Projections
        .Add(new DistanceProjection(), ProjectionLifecycle.Async);

    opts.Projections
        .Add(new SimpleAggregate(), ProjectionLifecycle.Inline);

    // This is actually important to register "live" aggregations too for the code generation
    opts.Projections.SelfAggregate<SelfAggregatingTrip>(ProjectionLifecycle.Live);
}).AddAsyncDaemon(DaemonMode.Solo);

At this point, let’s introduce the Marten.CommandLine NuGet dependency to the system just to add Marten-related command line options directly to our application for typical database management utilities. Marten.CommandLine brings with it a dependency on Oakton that we’ll actually use as the command line parser for our built-in tooling. Using the now “old-fashioned” pre-.NET 6 manner of running a console application, I add Oakton to the system like this:

public static Task<int> Main(string[] args)
{
    // Use Oakton for running the command line
    return CreateHostBuilder(args).RunOaktonCommands(args);
}
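
For comparison, if you’re on .NET 6 or later with the minimal hosting model and top-level statements, the equivalent wiring is just a couple of lines at the top and bottom of the Program file. Here’s a rough sketch, with the application configuration elided:

var builder = WebApplication.CreateBuilder(args);

// Add the Oakton command line extensions to the generic host
builder.Host.ApplyOaktonExtensions();

// ... AddMarten() and the rest of the application configuration ...

var app = builder.Build();

// Hand the command line over to Oakton and return its exit code
return await app.RunOaktonCommands(args);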

When you use the dotnet command line options, just keep in mind that the “--” separator you’re seeing me use here separates the options meant for the dotnet executable itself on the left from the arguments being passed to the application itself on the right.

Now, turning to the command line at the root of our project, I’m going to type out this command to see the Oakton options for our application:

dotnet run -- help

Which gives us this output:

If you’re wondering, the commands db-apply and marten-apply are synonyms that are there so as to not break older users when we introduced the now more generic “db” commands.

And next I’m going to see the usage for the projections command with dotnet run -- help projections, which gives me this output:

For the simplest usage, I’m just going to list off the known projections for the entire system with dotnet run -- projections --list:

Which will show us the four registered projections in the main IDocumentStore, and tells us that there are no registered projections in the separate IOtherStore.

Now, I’m just going to continuously run the asynchronous projections for the entire application — while another process is constantly pumping random events into the system so there’s always new work to do — with dotnet run -- projections, which will spit out this continuously updating table (with an assist from Spectre.Console):

What I hope you can tell here is that every asynchronous projection is actively running for each separate tenant database. The blue “High Water Mark” is telling us where the current event store for each database is at.

And finally, for the main reason why I tackled the projections command line overhaul last week, folks needed a way to rebuild projections for every database when using a database per tenant strategy.

While the new projections command will happily let you rebuild any combination of database, store, and projection name by flags or even an interactive mode, we can quickly trigger a full rebuild of all the asynchronous projections with dotnet run -- projections --rebuild, which is going to loop through every store and database like so:

For the moment, the rebuild works on all the projections for a single database at a time. I’m sure we’ll attempt some optimizations of the rebuilding process and try to understand how much we can really parallelize more, but for right now, our users have an out of the box way to rebuild projections across separate databases or separate stores.

This *might* be a YouTube video soon just to kick off my new channel for Marten/Jasper/Oakton/Alba/Lamar content.

A Vision for Stateful Resources at Development or Deployment Time

As is not atypical, I found a couple little issues with both Oakton and Jasper in the course of writing this post. To that end, if you want to use the functionality shown here yourself, just make sure you’re on at least Oakton 4.6.1 and Jasper 2.0-alpha-3.

I’ve spit out quite a bit of blogging content the past several weeks on both Marten and Jasper.

I’ve been showing some new integration between Jasper, Marten, and Rabbit MQ. This time out, I want to show the new “stateful resource” model in a third tool named Oakton to remove development and deployment time friction when using these tools on a software project. Oakton itself is a command line processing tool that, more importantly, can be used to quickly add command line utilities directly to your .NET executable.

Drawing from a sample project in the Jasper codebase, here’s the configuration for an issue tracking application that uses Jasper, Marten, and RabbitMQ with Jasper’s inbox/outbox integration using Postgresql:

using IntegrationTests;
using Jasper;
using Jasper.Persistence.Marten;
using Jasper.RabbitMQ;
using Marten;
using MartenAndRabbitIssueService;
using MartenAndRabbitMessages;
using Oakton;
using Oakton.Resources;

var builder = WebApplication.CreateBuilder(args);

builder.Host.ApplyOaktonExtensions();

builder.Host.UseJasper(opts =>
{
    // I'm setting this up to publish to the same process
    // just to see things work
    opts.PublishAllMessages()
        .ToRabbitExchange("issue_events", exchange => exchange.BindQueue("issue_events"))
        .UseDurableOutbox();

    opts.ListenToRabbitQueue("issue_events").UseInbox();

    opts.UseRabbitMq(factory =>
    {
        // Just connecting with defaults, but showing
        // how you *could* customize the connection to Rabbit MQ
        factory.HostName = "localhost";
        factory.Port = 5672;
    });
});

// This is actually important, this directs
// the app to build out all declared Postgresql and
// Rabbit MQ objects on start up if they do not already
// exist
builder.Services.AddResourceSetupOnStartup();

// Just pumping out a bunch of messages so we can see
// statistics
builder.Services.AddHostedService<Worker>();

builder.Services.AddMarten(opts =>
{
    // I think you would most likely pull the connection string from
    // configuration like this:
    // var martenConnectionString = builder.Configuration.GetConnectionString("marten");
    // opts.Connection(martenConnectionString);

    opts.Connection(Servers.PostgresConnectionString);
    opts.DatabaseSchemaName = "issues";

    // Just letting Marten know there's a document type
    // so we can see the tables and functions created on startup
    opts.RegisterDocumentType<Issue>();

    // I'm putting the inbox/outbox tables into a separate "issue_service" schema
}).IntegrateWithJasper("issue_service");

var app = builder.Build();

app.MapGet("/", () => "Hello World!");

// Actually important to return the exit code here!
return await app.RunOaktonCommands(args);

Just to describe what’s going on up above, the .NET code above is going to depend on:

  1. A Postgresql database with the necessary tables and functions that Marten needs to be able to persist issue data
  2. Additional tables in the Postgresql database for persisting the outgoing and incoming messages in the inbox/outbox usage
  3. A Rabbit MQ broker with the necessary exchanges, queues, and bindings for the issue application as it’s configured

In a perfect world, scratch that, in an acceptable world, a developer should be able to start from a fresh clone of this issue tracking codebase and be able to run the system and/or any integration tests locally almost immediately with very minimal friction along the way.

At this point, I’m a big fan of trying to run development infrastructure in Docker where it’s easy to spin things up on demand, and just as quickly shut it all down when you no longer need it. To that end, let’s just say we’ve got a docker-compose.yml file for both Postgresql and Rabbit MQ. Having that, I’ll type docker compose up -d from the command line to spin up both infrastructure elements.

Cool, but now I need to have the database schemas built out with Marten tables and the Jasper inbox/outbox tables plus the Rabbit MQ queues for the application. This is where Oakton and its new “stateful resource” model comes into play. Jasper’s Rabbit MQ plugin and the inbox/outbox storage both expose Oakton’s IStatefulResource interface for easy setup. Likewise, Marten has support for this model as well (in this case it’s just a very slim wrapper around Marten’s longstanding database schema management functionality).

If you’re not familiar with this, the double dash “--” argument in dotnet run helps .NET know which arguments (“run”) apply to the dotnet executable itself, while the arguments to the right of the “--” are passed into the application itself.

Opening up the command line terminal of your preference to the root of the project, I type dotnet run -- help to see what options are available in our Jasper application through the usage of Oakton:

There’s a couple commands up there that will help us out with the database management, but I want to focus on the resources command. To that end, I’m going to type dotnet run -- resources list just to see what resources our issue tracker application has:

Just through the configuration up above, the various Jasper elements have registered “stateful resource” adapters with Oakton for the underlying Marten database, the inbox/outbox data (Envelope Storage above), and Rabbit MQ.

In the next case, I’m going to use dotnet run -- resources check to see if all our infrastructure is configured the way our application needs — and I’m going to do this without first starting the database or the message broker, so this should fail spectacularly!

Here’s the summary output:

If you were to scroll up a bit, you’d see a lot of exceptions thrown describing what’s wrong (helpfully color coded by Spectre.Console) including this one explaining that an expected Rabbit MQ queue is missing:

So that’s not good. No worries though, I’ll start up the docker containers, then go back to the command line and type:

dotnet run -- resources setup

And here’s some of the output:

Forget the command line…

If you’ll notice the single line of code `builder.Services.AddResourceSetupOnStartup();` in the bootstrapping code above, that’s adding a hosted service to our application from Oakton that will verify and apply all the configured setup to the known Marten database, the inbox/outbox storage, and the required Rabbit MQ objects. No command line chicanery necessary. I’m hopeful that this will enable developers to be more productive by dealing with this kind of environmental setup directly inside the application itself rather than recreating the definition of what’s necessary in external scripts.

This was a fair amount of work, so I’d very much welcome any kind of feedback here.

A Vision for Low Ceremony CQRS with Event Sourcing

Let me just put this stake into the ground. The combination of Marten and Jasper will quickly become the most productive and lowest ceremony tooling for event sourcing and CQRS architectures on the .NET stack.

And for any George Strait fans out there, this song may be relevant to the previous paragraph:)

CQRS and Event Sourcing

Just to define some terminology, the Command Query Responsibility Separation (CQRS) pattern was first described by Greg Young as an approach to software architecture where you very consciously separate “writes” that change the state of the system from “reads” against the system state. The key to CQRS is to use separate models of the system state for writing versus querying. That leads to something I think of as the “scary view of CQRS”:

The “scary” view of CQRS

The “query database” in my diagram is meant to be an optimized storage for the queries needed by the system’s clients. The “domain model storage” is some kind of persisted storage that’s optimized for writes — both capturing new data and presenting exactly the current system state that the command handlers would need in order to process or validate incoming commands.

Many folks have a visceral, negative reaction to this kind of CQRS diagram as it does appear to be more complexity than the more traditional approach where you have a single database model — almost inevitably in a relational database of some kind — and both reads and writes work off the same set of database tables. Hell, I walked out of an early presentation by Greg Young about what later came to be known as CQRS at QCon 2008 shaking my head thinking this was all nuts.

Here’s the deal though, when we’re writing systems against the one, true database model, we’re potentially spending a lot of time mapping incoming messages to the stored model, and probably even more time transforming the raw database model into a shape that’s appropriate for our system’s clients in a way that also creates some decoupling between clients and the ugly, raw details of our underlying database. The “scary” CQRS architecture arguably just brings that sometimes hidden work and complexity into the light of day.

Now let’s move on to Event Sourcing. Event sourcing is a style of system persistence where each state change is captured and stored as an explicit event object. There’s all sorts of value from keeping the raw change events around for later, but you of course do need to compile the events into derived, or “projected” models that represent the current system state. When combining event sourcing into a CQRS architecture, some projected models from the raw events can serve as both the “write model” inside of incoming commands, while others can be built as the “query model” that clients will use to query our system.
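
To make that last point a little more concrete, here’s a minimal sketch of what some event types and a projected view might look like with Marten’s Apply() method convention for single stream aggregation. These are hypothetical issue tracking types just for illustration, not code from the sample project:

// Raw events captured in a single issue stream
public record IssueOpened(Guid IssueId, string Title);
public record IssueClosed(Guid IssueId);

// A projected "current state" document compiled from the raw events.
// Marten can run this kind of aggregation as a live, inline, or asynchronous projection.
public class IssueSummary
{
    public Guid Id { get; set; }
    public string Title { get; set; }
    public bool IsOpen { get; set; }

    // Marten's Apply() convention mutates the projected state per event type
    public void Apply(IssueOpened opened)
    {
        Id = opened.IssueId;
        Title = opened.Title;
        IsOpen = true;
    }

    public void Apply(IssueClosed closed) => IsOpen = false;
}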

At the end of this section, I want to be clear that event sourcing and CQRS can be used independently of each other, but to paraphrase Forrest Gump, event sourcing and CQRS go together like peas and carrots.

Now, the event sourcing side of this especially might sound a little scary if you’re not familiar with some of the existing tooling, so let’s move on first to Marten…

Marten has a mature feature set for event sourcing already, and we’ve grown quite a bit in capability with our support for projections (read-only views of the raw events). If you’ll peek back at the “scary” view of CQRS above, Marten’s projection support solves the problem of keeping the raw events synchronized to both any kind of “write model” for incoming commands and the richer “query model” views for system clients within a single database. In my opinion, Marten takes the “scary” out of event sourcing, and we’re on our way to being the best possible “event sourcing in a box” solution for .NET.

As that support has gotten more capable and robust though, our users have frequently been asking about how to subscribe to events being captured in Marten and how to relay those events reliably to some other kind of infrastructure — which could be a completely separate database, outgoing message queues, or some other kind of event streaming scenario.

Oskar Dudycz and I have been discussing how to solve this in Marten, and we’ve come up with these three main mechanisms for supporting subscriptions to event data:

  1. Some or all of the events being captured by Marten should be forwarded automatically at the time of event capture (IDocumentSession.SaveChangesAsync())
  2. When strict ordering is required, we’ll need some kind of integration into Marten’s async daemon to relay events to external infrastructure
  3. For massive amounts of data with ambitious performance targets, use something like Debezium to directly stream Marten events from Postgresql to Kafka/Pulsar/etc.

Especially for the first item in that list above, we need some kind of outbox integration with Marten sessions to reliably relay events from Marten to outgoing transports while keeping the system in a consistent state (i.e., don’t publish the events if the transaction fails).

Fortunately, there’s a functional outbox implementation for Marten in…

To be clear, Jasper as a project was pretty well mothballed by the combination of COVID and the massive slog toward Marten 4.0 (and then a smaller slog to 5.0). However, I’ve been able to get back to Jasper and yesterday kicked out a new Jasper 2.0.0-alpha-2 release and (Re) Introducing Jasper as a Command Bus.

For this post though, I want to explore the potential of the Jasper + Marten combination for CQRS architectures. To that end, a couple of weeks ago I published Marten just got better for CQRS architectures, which showed some new APIs in Marten to simplify repetitive code around using Marten event sourcing within CQRS architectures. Part of the sample code in that post was this MVC Core controller that used some newer Marten functionality to handle an incoming command:

public async Task CompleteCharting(
    [FromBody] CompleteCharting charting, 
    [FromServices] IDocumentSession session)
{
    var stream = await session
        .Events.FetchForExclusiveWriting<ProviderShift>(charting.ShiftId);
 
    // Validation on the ProviderShift aggregate
    if (stream.Aggregate.Status != ProviderStatus.Charting)
    {
        throw new Exception("The shift is not currently charting");
    }
     
    // We "decided" to emit one new event
    stream.AppendOne(new ChartingFinished(stream.Aggregate.AppointmentId.Value, stream.Aggregate.BoardId));
 
    await session.SaveChangesAsync();
}

To review, that controller method:

  1. Takes in a command message of type CompleteCharting
  2. Loads the current state of the aggregate ProviderShift model referred to by the incoming command, and does so in a way that takes care of concurrency for us by waiting to get an exclusive lock on the particular ProviderShift
  3. Assuming that the validation against the ProviderShift succeeds, emits a new ChartingFinished event
  4. Saves the pending work with a database transaction

In that post, I pointed out that there were some potential flaws or missing functionality with this approach:

  1. We probably want some error handling to retry the operation if we hit concurrency exceptions or timeout trying to get the exclusive lock. In other words, we have to plan for concurrency exceptions
  2. It’d be good to be able to automatically publish the new ChartingFinished event to a queue to take further action within our system (or an external service if we were using messaging here)
  3. Lastly, I’d argue there’s some repetitive code up there that could be simplified

To address these points, I’m going to introduce Jasper and its integration with Marten (Jasper.Persistence.Marten) to the telehealth portal sample from my previous blog post.

I’m going to move the actual handling of the CompleteCharting to a Jasper handler shown below that is functionally equivalent to the controller method shown earlier (except I switched the concurrency protection to being optimistic):

// This is auto-discovered by Jasper
public class CompleteChartingHandler
{
    [MartenCommandWorkflow] // this opts into some Jasper middleware 
    public ChartingFinished Handle(CompleteCharting charting, ProviderShift shift)
    {
        if (shift.Status != ProviderStatus.Charting)
        {
            throw new Exception("The shift is not currently charting");
        }

        return new ChartingFinished(charting.AppointmentId, shift.BoardId);
    }
}

And the controller method gets simplified down to just relaying the command to Jasper:

    public Task CompleteCharting(
        [FromBody] CompleteCharting charting, 
        [FromServices] ICommandBus bus)
    {
        // Just delegating to Jasper here
        return bus.InvokeAsync(charting, HttpContext.RequestAborted);
    }

There’s some opportunity for mechanisms to make the code above a little less repetitive and more efficient, maybe by riding on Minimal APIs. That’s for a later date though:)

By using the new [MartenCommandWorkflow] attribute, we’re directing Jasper to surround the command handler with middleware that handles much of the Marten mechanics by:

  1. Loading the aggregate ProviderShift for the incoming CompleteCharting command (I’m omitting some details here for brevity, but there’s a naming convention that can be explicitly overridden to pluck the aggregate identity off the incoming command)
  2. Passing that ProviderShift aggregate into the Handle() method above
  3. Applying the returned event to the event stream for the ProviderShift
  4. Committing the outstanding changes in the active Marten session

The Handle() code above becomes an example of a Decider function. Even better yet, it’s completely decoupled from any kind of infrastructure and fully synchronous. I’m going to argue that this approach will make command handlers much easier to unit test, definitely easier to write, and easier to read later just because you’re only focused on the business logic.
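
For example, a unit test for the handler above needs nothing but the command, an aggregate in a known state, and plain assertions. Here’s a sketch with xUnit. Note that I’m assuming settable properties on ProviderShift and CompleteCharting, and AppointmentId/BoardId properties on ChartingFinished, purely for illustration; the real sample types may be shaped differently:

using System;
using Xunit;

public class CompleteChartingHandlerTests
{
    [Fact]
    public void emits_charting_finished_when_the_shift_is_charting()
    {
        // Put the aggregate into the state we want to test against
        var shift = new ProviderShift
        {
            Status = ProviderStatus.Charting,
            BoardId = Guid.NewGuid()
        };

        var command = new CompleteCharting
        {
            ShiftId = Guid.NewGuid(),
            AppointmentId = Guid.NewGuid()
        };

        // No mocks, no database, no Marten session -- just a pure function
        var @event = new CompleteChartingHandler().Handle(command, shift);

        // Assert directly on the returned event
        Assert.Equal(command.AppointmentId, @event.AppointmentId);
        Assert.Equal(shift.BoardId, @event.BoardId);
    }
}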

So that covers the repetitive code problem, but let’s move on to automatically publishing the ChartingFinished event and some error handling. I’m going to add Jasper through the application’s bootstrapping code as shown below:

builder.Host.UseJasper(opts =>
{
    // I'm choosing to process any ChartingFinished event messages
    // in a separate, local queue with persistent messages for the inbox/outbox
    opts.PublishMessage<ChartingFinished>()
        .ToLocalQueue("charting")
        .DurablyPersistedLocally();
    
    // If we encounter a concurrency exception, just try it immediately 
    // up to 3 times total
    opts.Handlers.OnException<ConcurrencyException>().RetryNow(3); 
    
    // It's an imperfect world, and sometimes transient connectivity errors
    // to the database happen
    opts.Handlers.OnException<NpgsqlException>()
        .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds());
});

Jasper comes with some pretty strong exception handling policy capabilities that you’ll need for grown-up development. In this case, I’m just setting up some global policies in the application to retry message failures on either Marten concurrency exceptions or the inevitable, transient Postgresql connectivity hiccups. In the case of the concurrency exception, you may just need to start the work over to ensure you’re starting from the most recent aggregate changes. I used globally applied policies here, but Jasper will also allow you to override that on a message type by message type basis.

Lastly, let’s add the Jasper outbox integration for Marten and opt into automatic event publishing with this bit of configuration chained to the standard AddMarten() usage:

builder.Services.AddMarten(opts =>
{
    // Marten configuration...
})
    // I added this to enroll Marten in the Jasper outbox
    .IntegrateWithJasper()
    
    // I also added this to opt into events being forward to
    // the Jasper outbox during SaveChangesAsync()
    .EventForwardingToJasper();

And that’s actually that. The configuration above will add the Jasper outbox tables to the Marten database for us, and let Marten’s database schema management manage those extra database objects.

Back to the command handler (mildly elided):

public class CompleteChartingHandler
{
    [MartenCommandWorkflow] 
    public ChartingFinished Handle(CompleteCharting charting, ProviderShift shift)
    {
        // validation code goes here!

        return new ChartingFinished(charting.AppointmentId, shift.BoardId);
    }
}

By opting into the outbox integration and event forwarding to Jasper from Marten, when this command handler is executed, the ChartingFinished events will be published — in this case just to an in-memory queue, but it could also be to an external transport — with Jasper’s outbox implementation that guarantees that the message will be delivered at least once as long as the database transaction to save the new event succeeds.
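
And if we wanted to take further action on that event in process, handling it is just another conventional Jasper handler class. A hypothetical sketch:

// Discovered by Jasper's handler conventions and executed from the
// durable local "charting" queue configured up above
public class ChartingFinishedHandler
{
    public void Handle(ChartingFinished finished)
    {
        // Whatever follow-up behavior belongs here: notifications,
        // read model updates, messages to other services, etc.
        Console.WriteLine($"Charting finished for appointment {finished.AppointmentId}");
    }
}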

Conclusion and What’s Next?

There’s a tremendous amount of work in the rear window to get to the functionality that I demonstrated here, and a substantial amount of ambition in the future to drive this forward. I would love any possible feedback both positive and negative. Marten is a team effort, but Jasper’s been mostly my baby for the past 3-4 years, and I’d be happy for anybody who would want to get involved with that. I’m way behind in documentation for Jasper and somewhat for Marten, but that’s in flight.

My next couple posts to follow up on this are to:

  • Do a deeper dive into Jasper’s outbox and explain why it’s different and arguably more useful than the outbox implementations in other leading .NET tools
  • Introduce the usage of Rabbit MQ with Jasper for external messaging
  • Take a detour into the development and deployment time command line utilities built into Jasper & Marten through Oakton