Building a Critter Stack Application: Command Line Tools with Oakton

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to helping you succeed with your software development projects on a consulting basis, whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton (this post)
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

Hey folks, I’m deviating a little bit from the planned order and taking a side trip while we’re finishing up a bug fix release to address some OpenAPI generation hiccups before I go on to Wolverine HTTP endpoints.

Admittedly, Wolverine and, to a lesser extent, Marten take a somewhat “magic,” convention-based approach. They also depend on external configuration items and external infrastructural tools like databases or message brokers that require their own configuration, and there’s always the possibility of assembly version mismatches from users doing who knows what with their NuGet dependency tree.

To help unwind potential problems with diagnostic tools and to facilitate environment setup, the “Critter Stack” uses the Oakton library to integrate command line utilities right into your application.

Applying Oakton to Your Application

To get started, I’m going right back to the Program entry point of our incident tracking help desk application and adding just a couple lines of code. First, Oakton is a dependency of Wolverine, so there’s no additional dependency to add, but we’ll add a using statement:

using Oakton;

This is optional, but we’ll possibly want the extra diagnostics, so I’ll add this line of code near the top:

// This opts Oakton into trying to discover diagnostics 
// extensions in other assemblies. Various Critter Stack
// libraries expose extra diagnostics, so we want this
builder.Host.ApplyOaktonExtensions();

and finally, I’m going to drop down to the last line of Program and replace the typical app.Run(); call with Oakton’s command line parsing:

// This is important for Wolverine/Marten diagnostics 
// and environment management
return await app.RunOaktonCommands(args);

Do note that it’s important to return the exit code of the command line runner up above. If you choose to use Oakton commands in a build script, returning a non-zero exit code signals to the caller that the command failed.
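Pulling those pieces together, the whole Program file ends up shaped roughly like this sketch (the Marten, Wolverine, and endpoint registrations from the earlier posts are elided):

```csharp
using Oakton;

var builder = WebApplication.CreateBuilder(args);

// This opts Oakton into trying to discover diagnostics
// extensions in other assemblies (Marten, Wolverine, etc.)
builder.Host.ApplyOaktonExtensions();

// ... the Marten and Wolverine registrations from earlier posts go here ...

var app = builder.Build();

// ... endpoint mappings go here ...

// Hand command line parsing off to Oakton instead of calling app.Run(),
// and return its exit code so that build scripts can detect failures
return await app.RunOaktonCommands(args);
```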

Command Line Mechanics

Next, I’m going to open a command prompt to the root directory of the HelpDesk.Api project, and use this to get a preview of the command line options we now have:

dotnet run -- help

That should render some help text like this:

  Alias           Description                                                                                                             
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  check-env       Execute all environment checks against the application                                                                  
  codegen         Utilities for working with JasperFx.CodeGeneration and JasperFx.RuntimeCompiler                                         
  db-apply        Applies all outstanding changes to the database(s) based on the current configuration                                   
  db-assert       Assert that the existing database(s) matches the current configuration                                                  
  db-dump         Dumps the entire DDL for the configured Marten database                                                                 
  db-patch        Evaluates the current configuration against the database and writes a patch and drop file if there are any differences  
  describe        Writes out a description of your running application to either the console or a file                                    
  help            List all the available commands                                                                                         
  marten-apply    Applies all outstanding changes to the database based on the current configuration                                      
  marten-assert   Assert that the existing database matches the current Marten configuration                                              
  marten-dump     Dumps the entire DDL for the configured Marten database                                                                 
  marten-patch    Evaluates the current configuration against the database and writes a patch and drop file if there are any differences  
  projections     Marten's asynchronous projection and projection rebuilds                                                                
  resources       Check, setup, or teardown stateful resources of this system                                                             
  run             Start and run this .Net application                                                                                     
  storage         Administer the Wolverine message storage                                                                                

So that’s a lot, but let’s start by explaining the basics of the command line for .NET applications. You can pass arguments and flags both to the dotnet executable itself and to the application’s Program.Main(params string[] args) method. The key thing to know is that dotnet’s own arguments and flags are segregated from the application’s arguments and flags by a double dash “--” separator. So for example, the command dotnet run --framework net8.0 -- codegen write sends the framework flag to dotnet run, and the codegen write arguments to the application itself.

Stateful Resource Setup

Skipping a little bit to the end state of our help desk API project, we’ll have dependencies on:

  • Marten schema objects in the PostgreSQL database
  • Wolverine schema objects in PostgreSQL database (for the transactional inbox/outbox we’ll introduce later in this series)
  • Rabbit MQ exchanges for Wolverine to broadcast to later

One of the guiding philosophies of the Critter Stack is to minimize the “Time to Login Screen” (hat tip to Chad Myers) of your codebase. What this means is that we really want a new developer on our system (or a developer coming back from a long, well deserved vacation) to be able to do a clean clone of our codebase and very quickly run the application and any integration tests end to end. To that end, Oakton exposes its “Stateful Resource” model as an adapter for tools like Marten and Wolverine to set up their resources to match their configuration.

Pretend just for a minute that you have all the necessary rights and permissions to configure database schemas and Rabbit MQ exchanges, queues, and bindings on whatever your Rabbit MQ broker is for development. Assuming that, you can have your copy of the help desk API completely up and ready to run through these steps at the command prompt starting at wherever you want the code to be:

git clone https://github.com/JasperFx/CritterStackHelpDesk.git
cd CritterStackHelpDesk
docker compose up -d
cd HelpDesk.Api
dotnet run -- resources setup

At the end of those calls, you should see this output:

The dotnet run -- resources setup command is able to run the Marten database migrations for its event store storage and any document types it knows about upfront, build the Wolverine envelope storage tables we’ll configure later, and declare the known Rabbit MQ exchange that we’ll configure for broadcasting integration events later.

The resources command has other options as shown below from dotnet run -- help resources:

You may need to pause a little bit between the call to docker compose and dotnet run to let Docker catch up!

Environment Checks

Years ago I worked on an early .NET system that still had a lot of COM dependencies that needed to be correctly registered outside of our application, and it used a shared database that was indifferently maintained, as was common way back then. Needless to say, our deployments were chaotic, as we never knew what shape the server was in when we deployed. We finally beat our deployment woes by adding “environment tests” to our deployment scripts that would test the environment dependencies (is the COM server there? can we connect to the database? is the expected XML file there?) and fail fast with descriptive messages when the server was in a crap state as we tried to deploy.

To that end, Oakton has an environment check model that both Marten and Wolverine utilize. In our help desk application, we already have a Marten dependency, so we know the application will not function correctly if the database is unavailable, the connection string in the configuration happens to be wrong, or there’s a security setup issue; you get the picture.

So, picking up our application with every bit of infrastructure purposely turned off, I’ll run this command:

dotnet run -- check-env

and the result is a huge blob of exception text and the command will fail — allowing you to abort a build script that might be delegating to this command:

Next, I’m going to turn on all the infrastructure (and set up everything to match our application’s configuration with the second command) with a quick call to:

docker compose up -d
dotnet run -- resources setup

Now, I can run the environment checks again and get a green bill of health for our system:

Oakton’s environment check model predates the new .NET IHealthCheck model. Oakton will also support that model soon, and you can track that work here.
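You can also register application-specific checks of your own alongside the built-in Marten and Wolverine checks. Here’s a hedged sketch of one way to do that; the CheckEnvironment() extension method and its exact signature are my recollection of Oakton’s API, so verify against the Oakton documentation, and the directory path is purely illustrative:

```csharp
using Oakton.Environment;

// Hypothetical example: fail fast at deployment time if a required
// directory is missing. Registered with the other services in Program.
builder.Services.CheckEnvironment("Upload directory exists", services =>
{
    if (!Directory.Exists("/var/uploads"))
    {
        // Throwing from a check makes `dotnet run -- check-env` fail
        // with this message in the report
        throw new Exception("The upload directory /var/uploads is missing");
    }
});
```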

“Describe” Our System

Oakton’s describe command can give you some insights into your application, and tools like Marten or Wolverine can expose extensions to this model for further output. By typing this command at the project root:

dotnet run -- describe

We’ll get some basic information about our system like this preview of the configuration:

We also get the loaded assemblies, because you will occasionally get burned by unexpected NuGet behavior pulling in the wrong versions:

And sigh, because folks have frequently had some trouble understanding how Wolverine does its automatic handler discovery, we have this preview:

And quite a bit more information including:

  • Wolverine messaging endpoints
  • Wolverine’s local queues
  • Wolverine message routing
  • Wolverine exception handling policy configuration

Summary and What’s Next

Oakton is yet another command line parsing tool for .NET, of which there are at least dozens that are perfectly competent. What makes Oakton special, though, is its ability to add command line tools directly to the entry point of your application, where you already have all your infrastructure configuration available. The main point I hope you take away from this is that the command line tooling in the “Critter Stack” can help your team develop faster through its diagnostics and environment management features.

The “Critter Stack” is heavily utilizing Oakton’s extensibility model for:

  1. The static description of the application configuration that may frequently be helpful for troubleshooting or just understanding your system
  2. Stateful resource management of development dependencies like databases and message brokers. So far this is supported for Marten, both the PostgreSQL and SQL Server dependencies of Wolverine, Rabbit MQ, Kafka, Azure Service Bus, and AWS SQS
  3. Environment checks to test out the validity of your system and its ability to connect to external resources during deployment or during development
  4. Any other utility you care to add to your system like resetting a baseline database state, adding users, or anything you care to do through Oakton’s command extensibility
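
For that last item, a custom command is just a class deriving from Oakton’s OaktonCommand<T> base type. Here’s a sketch of a hypothetical “seed” utility (the command, its input class, and its behavior are illustrative assumptions, not part of the real help desk project):

```csharp
using System;
using Oakton;

// Input class: Oakton binds command line flags from properties,
// using the "Flag" suffix convention (CountFlag becomes --count)
public class SeedInput
{
    [Description("Number of sample incidents to create")]
    public int CountFlag { get; set; } = 10;
}

[Description("Seed the database with sample incidents")]
public class SeedCommand : OaktonCommand<SeedInput>
{
    public override bool Execute(SeedInput input)
    {
        Console.WriteLine($"Seeding {input.CountFlag} sample incidents...");
        // Real work would resolve application services and write to Marten here

        // Returning true signals success (exit code 0)
        return true;
    }
}
```

With that class in the application assembly, something like dotnet run -- seed --count 25 should be picked up automatically, with the command alias derived from the class name minus the “Command” suffix.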

As for what’s next, you’ll have to let me see when some bug fix releases get in place before I promise exactly what’s next in this series. I expect this series to run to at least 15-20 entries as I introduce more Wolverine scenarios, messaging, and quite a bit about automated testing. And also, I take requests!

If you’re curious, the JasperFx GitHub organization was originally conceived as a reboot of the previous FubuMVC ecosystem, with the main project being “Jasper” and the smaller ancillary tools ripped out of the flotsam and jetsam of StructureMap and FubuMVC arranged around it. “Jasper” was named for my hometown, and the smaller tools like Oakton, Alba, and Lamar are named after other small towns close to the titular Jasper, MO. As Marten took off and became far and away the most important tool in our stable, we adopted the “Critter Stack” name as we pulled Weasel out into its own library and completely rebooted and renamed “Jasper” as Wolverine to be a natural complement to Marten.

And lastly, I’m not sure that Oakton, MO will even show up on maps, because it’s effectively a Methodist church, a cemetery, the ruins of the general store, and a couple of farmhouses at a crossroads. In Missouri at least, towns cease to exist when they lose their post office. The area I grew up in is littered with former towns that fizzled out as the farm economy changed and folks moved to bigger towns.

Building a Critter Stack Application: Wolverine’s Aggregate Handler Workflow FTW!

TL;DR: The full critter stack combo can make CQRS command handler code much simpler and easier to test than any other framework on the planet. Fight me.


The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW! (this post)
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

This series has been written partially in response to some constructive criticism that my writings on the “Critter Stack” suffered from introducing too many libraries or concepts all at once. As a reaction to that, this series is trying to only introduce one new capability or library at a time — which brought on some constructive criticism from someone else that the series isn’t making it obvious why anyone should care about the “Critter Stack” in the first place. So especially for Rob Conery, I give you:

Last time out we talked about using Marten’s facilities for optimistic concurrency or exclusive locking to protect our system from inconsistencies caused by concurrent commands being processed against the same incident event stream. In that post, I showed the code for a command handler for the CategoriseIncident command, shown below, that I purposely wrote in a long hand form as explicitly as possible to avoid introducing too many new concepts at once:

public static class LongHandCategoriseIncidentHandler
{
    public static async Task Handle(
        CategoriseIncident command, 
        IDocumentSession session, 
        CancellationToken cancellationToken)
    {
        var stream = await session
            .Events
            .FetchForWriting<IncidentDetails>(command.Id, cancellationToken);

        // Don't worry, we're going to clean this up later
        if (stream.Aggregate == null)
        {
            throw new ArgumentOutOfRangeException(nameof(command), "Unknown incident id " + command.Id);
        }
        
        // We need to validate whether this command actually 
        // should do anything
        if (stream.Aggregate.Category != command.Category)
        {
            var categorised = new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };

            stream.AppendOne(categorised);
            
            await session.SaveChangesAsync(cancellationToken);
        }
    }

    // This is kinda faked out, nothing to see here!
    public static readonly Guid SystemId = Guid.NewGuid();
}

Hopefully that code is relatively easy to follow, but it’s still pretty busy and there’s a mixture of business logic and fiddling with infrastructure code that’s not particularly helpful when the code inevitably gets more complicated than that as the requirements grow. As we’ll learn about later in this series, both Marten and Wolverine have some built in tooling to enable effective automated integration testing and do so much more effectively than just about any other tool out there. All the same though, you just don’t want to be testing the business logic by trudging through integration tests if you don’t have to (see my only rule of testing).

So let’s definitely look at how Wolverine plays nicely with Marten using its aggregate handler workflow recipe to simplify our handler for easier unit testing and just flat out cleaner code.

First off, I’m going to add the WolverineFx.Marten NuGet package to our application:

dotnet add package WolverineFx.Marten

Next, break into our application’s Program file and add one call to the Marten configuration to incorporate some Wolverine goodness into Marten in our application:

builder.Services.AddMarten(opts =>
{
    // Existing Marten configuration...
})
    // This is a mild optimization
    .UseLightweightSessions()

    // Use this directive to add Wolverine transactional middleware for Marten
    // and the Wolverine transactional outbox support as well
    .IntegrateWithWolverine();

And now, let’s rewrite our CategoriseIncident command handler with a completely equivalent implementation using the “aggregate handler workflow” recipe:

public static class CategoriseIncidentHandler
{
    // Kinda faked, don't pay any attention to this please!
    public static readonly Guid SystemId = Guid.Parse("4773f679-dcf2-4f99-bc2d-ce196815dd29");

    // This Wolverine handler appends an IncidentCategorised event to an event stream
    // for the related IncidentDetails aggregate referred to by the CategoriseIncident.IncidentId
    // value from the command
    [AggregateHandler]
    public static IEnumerable<object> Handle(CategoriseIncident command, IncidentDetails existing)
    {
        if (existing.Category != command.Category)
        {
            // This event will be appended to the incident
            // stream after this method is called
            yield return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }
    }
}

In the handler method above, the presence of the [AggregateHandler] attribute directs Wolverine to wrap some middleware around the execution of our Handle() method that:

  • “Knows” the aggregate type in question is the second argument to the handler method, so in this case, IncidentDetails
  • Scans the CategoriseIncident type looking for a property that identifies the IncidentDetails stream (which will make it utilize the Id property in this case, but the docs spell out this convention in detail)
  • Does all the work to coordinate the logical command flow between the Marten infrastructure and our little bitty Handle() method

To visualize this, Wolverine is generating its own internal message handler for CategoriseIncident that has this simplified workflow:

And as a preview to a topic I’ll dive into in much more detail in a later post, here’s part of the (admittedly ugly in the way that only auto-generated code can be) C# code that Wolverine generates around our handler method:

public override async System.Threading.Tasks.Task HandleAsync(Wolverine.Runtime.MessageContext context, System.Threading.CancellationToken cancellation)
{
    // The actual message body
    var categoriseIncident = (Helpdesk.Api.CategoriseIncident)context.Envelope.Message;

    await using var documentSession = _outboxedSessionFactory.OpenSession(context);
    var eventStore = documentSession.Events;
    
    // Loading Marten aggregate
    var eventStream = await eventStore.FetchForWriting<Helpdesk.Api.IncidentDetails>(categoriseIncident.Id, categoriseIncident.Version, cancellation).ConfigureAwait(false);

    
    // The actual message execution
    var outgoing1 = Helpdesk.Api.CategoriseIncidentHandler.Handle(categoriseIncident, eventStream.Aggregate);

    if (outgoing1 != null)
    {
        
        // Capturing any possible events returned from the command handlers
        eventStream.AppendMany(outgoing1);

    }

    await documentSession.SaveChangesAsync(cancellation).ConfigureAwait(false);
}

And lastly, we’ve now reduced our CategoriseIncident command handler to the point where the code that we are actually having to write is a pure function, meaning that it’s a simple matter of inputs and outputs with no dependency on any kind of stateful infrastructure. You absolutely care about isolating any kind of business logic into pure functions because that code becomes much easier to unit test.

And to prove that last statement, here’s what the unit tests for our Handle(CategoriseIncident, IncidentDetails) method could look like using xUnit.Net and Shouldly:

public class CategoriseIncidentTests
{
    [Fact]
    public void raise_categorized_event_if_changed()
    {
        // Arrange
        var command = new CategoriseIncident
        {
            Category = IncidentCategory.Database
        };

        var details = new IncidentDetails(
            Guid.NewGuid(), 
            Guid.NewGuid(), 
            IncidentStatus.Closed, 
            new IncidentNote[0],
            IncidentCategory.Hardware);

        // Act
        var events = CategoriseIncidentHandler.Handle(command, details);

        // Assert
        var categorised = events.Single().ShouldBeOfType<IncidentCategorised>();
        categorised
            .Category.ShouldBe(IncidentCategory.Database);
    }

    [Fact]
    public void do_not_raise_event_if_the_category_would_not_change()
    {
        // Arrange
        var command = new CategoriseIncident
        {
            Category = IncidentCategory.Database
        };

        var details = new IncidentDetails(Guid.NewGuid(), Guid.NewGuid(), IncidentStatus.Closed, new IncidentNote[0],
            IncidentCategory.Database);

        // Act
        var events = CategoriseIncidentHandler.Handle(command, details);
        
        // Assert no events were appended
        events.ShouldBeEmpty();
    }
}

In the unit test code above, we were able to exercise the decision about what events (if any) should be appended to the incident event stream without any dependency whatsoever on any kind of infrastructure. The easiest kind of unit test to write and to read later is a test that has a clear relationship between the test inputs and outputs with minimal noise code for setting up state — and that’s exactly what we have up above. No mock object setup, no need to set up database state, nothing. Just, “here’s the existing state and this command, now tell me what events should be appended.”

Summary and What’s Next

The full Critter Stack “aggregate handler workflow” recipe leads to very low ceremony code to implement command handlers within a CQRS style architecture. This recipe also leads to a code structure where your business logic is relatively easy to test with fast running unit testing. And we arrived at that point without having to watch umpteen hours of “Clean Architecture” YouTube snake oil videos, introducing a ton of “Ports and Adapter” style abstractions to clutter up our code, or scattering our code for the single CategoriseIncident message handler across 3-4 “Onion Architecture” projects within a massive .NET solution.

This approach was heavily inspired by the Decider pattern that originated for Event Sourcing within the F# community. But whereas the F# approach uses language tricks (and I don’t mean that pejoratively here), Wolverine is getting to a lower ceremony approach by doing that runtime code generation around our code.
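
To make that shape concrete, here is a stripped-down, framework-free sketch of the Decider idea. The types here are hypothetical stand-ins, not the actual Helpdesk.Api model: a “decide” function maps current state plus a command to the events to append, and an “evolve” function folds events back into state.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical, simplified types purely for illustration
public record IncidentState(string Category);
public record CategoriseIncident(string Category);
public record IncidentCategorised(string Category);

public static class Decider
{
    // decide: (command, state) -> events. A pure function with no I/O.
    public static IReadOnlyList<object> Decide(CategoriseIncident command, IncidentState state)
    {
        // Only emit an event when the command would actually change something
        if (state.Category != command.Category)
        {
            return new object[] { new IncidentCategorised(command.Category) };
        }

        return Array.Empty<object>();
    }

    // evolve: (state, event) -> state. Folding an event into the state.
    public static IncidentState Evolve(IncidentState state, object @event) =>
        @event switch
        {
            IncidentCategorised e => state with { Category = e.Category },
            _ => state
        };
}
```

The Decide() function corresponds to the handler code Wolverine lets us isolate, while Evolve() corresponds to the aggregation Marten performs when building IncidentDetails from its event stream.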

If you look back to the sequence diagram up above that tries to explain the control flow, Wolverine is purposely using Jim Shore’s idea of the “A-Frame Architecture” (it’s not really an architectural style despite the name, so don’t even try to do an apples to apples comparison between it and something more prescriptive like the Clean Architecture). In this approach, Wolverine is purposely decoupling the Marten infrastructure away from the CategoriseIncident handler logic that is implementing the business logic that “decides” what to do next by mediating between Marten and the handler. The “A-Frame” name comes from visualizing that mediation like this (Wolverine calls into the infrastructure services like Marten and the business logic so the domain logic doesn’t have to):

Now, there’s a lot more stuff that our command handlers may very well need to implement, including:

  • Message input validation
  • Instrumentation and observability
  • Error handling and resiliency protections ’cause it’s an imperfect world!
  • Publishing the new events to some other internal message handler that will take additional actions after our first command has “decided” what to do next
  • Publishing the new events as some kind of external message to another process
  • Enrolling in a transactional outbox of some sort or another to keep the system in a consistent state — and you really need to care about this capability!!!

And oh, yeah, do all that with minimal code ceremony, be testable with unit tests as much as possible, and be feasible to do automated integration testing when we have to.

We’ll get to all the items in that list above in this series, but I think in the next post I’d like to introduce Wolverine’s HTTP handler recipe and build out more aggregate command handlers, but this time with an HTTP endpoint. Until next time…

Building a Critter Stack Application: Wolverine as Mediator


The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator (this post)
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

In the previous posts I’ve been focused on Marten as a persistence tool. Today I want to introduce Wolverine into the mix, but strictly as a “Mediator” tool within the commonly used MVC Core or Minimal API tools for web service development.

While Wolverine does much, much more than what we’re going to use today, let’s stay with the theme of keeping these posts short and just dip our toes into the Wolverine water with a simple usage.

Using our web service project from previous posts, I’m going to add a reference to the main Wolverine NuGet package through:

dotnet add package WolverineFx

Next, let’s add Wolverine to our application with this one line of code within our Program file:

builder.Host.UseWolverine(opts =>
{
    // We'll add more here later, but the defaults are all
    // good enough for now
});

As a quick aside, Wolverine is added directly to the IHostBuilder instead of the IServiceCollection through a “UseWolverine()” method because it’s also quietly sliding in Lamar as the underlying IoC container. Some folks have been upset by that, so let’s be upfront about it right now. While I may talk about Lamar diagnostics as part of this series, it’s unlikely that this will ever be an issue for most users in any way. Lamar has some functionality that was built specifically for Wolverine and is utilized quite heavily.

This time out, let’s move into the “C(ommand)” part of our CQRS architecture and build some handling for the CategoriseIncident command we’d initially discovered in our Event Storming session:

public class CategoriseIncident
{
    public Guid Id { get; set; }
    public IncidentCategory Category { get; set; }
    public int Version { get; set; }
}

And next, let’s build our very first Wolverine message handler for this command. It will load the existing IncidentDetails for the designated incident, decide if the category is being changed, and add a new event to the event stream using Marten’s IDocumentSession service. That handler, purposely written in an explicit, “long hand” style, could look like this; in later posts we’ll use other Wolverine capabilities to make this code much simpler while introducing a more robust set of validations:

public static class CategoriseIncidentHandler
{
    public static async Task Handle(
        CategoriseIncident command, 
        IDocumentSession session, 
        CancellationToken cancellationToken)
    {
        // Find the existing state of the referenced Incident
        var existing = await session
            .Events
            .AggregateStreamAsync<IncidentDetails>(command.Id, token: cancellationToken);

        // Don't worry, we're going to clean this up later
        if (existing == null)
        {
            throw new ArgumentOutOfRangeException(nameof(command), "Unknown incident id " + command.Id);
        }
        
        // We need to validate whether this command actually 
        // should do anything
        if (existing.Category != command.Category)
        {
            var categorised = new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };

            session.Events.Append(command.Id, categorised);
            await session.SaveChangesAsync(cancellationToken);
        }
    }
    
    // This is kinda faked out, nothing to see here!
    public static readonly Guid SystemId = Guid.NewGuid();
}

There are a couple of things I want you to note about the handler class above:

  • We're not going to make any kind of explicit configuration to help Wolverine discover and use that handler class. Instead, Wolverine is going to discover it within our main service assembly because it's a public, concrete class suffixed with the name "Handler" (there are other alternatives for this discovery if you don't like that approach).
  • Wolverine "knows" that the Handle() method is a handler for the CategoriseIncident command because the method is named "Handle" and its first argument is that command type.
  • Note that this handler is a static class. It doesn't have to be, but making it static helps Wolverine shave off some object allocations at runtime.
  • Also note that Wolverine message handlers happily support "method injection" and allow you to inject IoC service dependencies like the Marten IDocumentSession through method arguments. You can also do the more traditional .NET approach of pulling everything through a constructor and setting instance fields, but hey, why not write simpler code?
  • While it's perfectly legal to handle multiple message types in the same handler class, I typically recommend making that a one-to-one relationship in most cases.
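For reference, the handler above leans on a few supporting types that aren't shown in this post. Here's a rough sketch of what they might look like; the exact members of IncidentDetails and the values of IncidentCategory are my guesses from the surrounding samples, so treat these as hypothetical stand-ins for the real definitions in the CritterStackHelpDesk repository:

```csharp
// Hypothetical sketches of the supporting types used by the handler above.
// The real definitions may differ in detail.
public enum IncidentCategory
{
    Software,
    Hardware,
    Network
}

// The event appended to the incident's event stream
public class IncidentCategorised
{
    public IncidentCategory Category { get; set; }
    public Guid UserId { get; set; }
}

// The "write model" aggregated from the incident's event stream
public class IncidentDetails
{
    public Guid Id { get; set; }
    public IncidentCategory? Category { get; set; }
}
```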

And next, let’s put this into context by having an MVC Core controller expose an HTTP route for this command type, then pass the command on to Wolverine where it will mediate between the HTTP outer world and the inner world of the application services like Marten:

// I'm doing it this way for now because this is 
// a common usage, but we'll move away from 
// this later into more of a "vertical slice"
// approach of organizing code
public class IncidentController : ControllerBase
{
    [HttpPost("/api/incidents/categorize")]
    public Task Categorize(
        [FromBody] CategoriseIncident command,
        [FromServices] IMessageBus bus)

        // IMessageBus is the main entry point into
        // using Wolverine
        => bus.InvokeAsync(command);
}

Summary and Next Time

In this post we looked at the very simplest usage of Wolverine, how to integrate that into your codebase, and how to get started writing command handlers with Wolverine. What I’d like you to take away is that Wolverine is a very different animal from “IHandler of T” frameworks like MediatR, NServiceBus, MassTransit, or Brighter that require mandatory interface signatures and/or base classes. Even when writing long hand code as I did, I hope you can notice already how much lower code ceremony Wolverine requires compared to more typical .NET frameworks that solve similar problems to Wolverine.

I very purposely wrote the message handlers in a very explicit way, and left out some significant use cases like concurrency protection, user input validation, and cross cutting concerns. I’m not 100% sure where I want to go next, but in this next week we’ll look at concurrency protections with Marten, highly efficient GET HTTP endpoints with Marten and ASP.Net Core, and start getting into Wolverine’s HTTP endpoint model.

Why, you might ask, are all the Wolverine Nugets suffixed with "Fx"? The Marten core team and some of our closest collaborators really liked the name "Wolverine" for this project and instantly came up with the project graphics, but when we tried to start publishing Nuget packages, we found out that someone is squatting on the name "Wolverine" in Nuget and we weren't able to get the rights to that name. Rather than change course, we stubbornly went full speed ahead with the "WolverineFx" naming scheme just for the published Nugets.

Let’s Get Controversial for (only) a Minute!

When my wife and I watched the Silicon Valley show, I think she was bemused when I told her there was a pretty heated debate in development circles over “tabs vs spaces.”

I don’t want this to detract too much from the actual content of this series, but I have very mixed feelings about ASP.Net MVC Core as a framework and the whole idea of using a “mediator” as popularized by the MediatR library within an MVC Core application.

I've gone back and forth on ASP.Net MVC in its various incarnations and on MediatR, both alone and as a complement to MVC Core. Where I've landed right now is the opinion that MVC Core used by itself is a very flawed framework that can easily lead to unmaintainable code as an enterprise system grows, because typical interpretations of the "Clean Architecture" style in concert with MVC Core's routing rules lead unwary developers to create bloated MVC controller classes.

While I was admittedly unimpressed with MediatR as I first encountered it on its own merits in isolation, what I will happily admit is that the usage of MediatR is helpful within MVC Core controllers as a way to offload operation specific code into more manageable pieces as opposed to the bloated controllers that frequently result from using MVC Core. I have since occasionally recommended the usage of MediatR within MVC Core codebases to my consulting clients as a way to help make their code easier to maintain over time.

If you're interested, I touched on this theme somewhat in my talk A Contrarian View of Software Architecture from NDC Oslo 2023. And yes, I absolutely think you can build maintainable systems with MVC Core over time even without the MediatR crutch, but I think you have to veer away from the typical usage of MVC Core to do so and be very mindful of how you're using the framework. In other words, MVC Core does not by itself lead teams to a "pit of success" for maintainable code in the long run. I think that MediatR or Wolverine with MVC Core can help, but we can do better by moving away from MVC Core.

By the time this series is over, I will be leaning very hard into organizing code in a vertical slice architecture style and seeing how to use the Critter Stack to create maintainability and testability without the typically complex “Ports and Adapter” style architecture that well meaning server side development teams have been trying to use in the past decade or two.

While I introduced Wolverine today as a "mediator" tool within MVC Core, by the time this series is done we'll move away from MVC Core, with or without MediatR or "Wolverine as MediatR," and use Wolverine's HTTP endpoint model by itself as a simpler alternative with less code ceremony — and I'm going to try hard to make the case that that simpler model is a superior way to build systems.

Building a Critter Stack Application: Event Storming


I did a series of presentations a couple of weeks ago showing off the usage of Wolverine and Marten to build a small service using CQRS and Event Sourcing, and you can see the video above from .NET Conf 2023. I thought that talk was way too dense, though, and I'm going to rebuild it from scratch before CodeMash. So I've got a relatively complete sample application, and we get a lot of feedback that there needs to be a single, more realistic sample application to show off what Marten and Wolverine can actually do. Based on other feedback, I also know there's value in having a series of short, focused posts that build up a sample application one little concept at a time.

To that end, this post will be the start of a multi-part series showing how to use Marten and Wolverine for a CQRS architecture in an ASP.Net Core web service that also uses event sourcing as a persistence strategy.

The series so far:

I blatantly stole (with permission) this sample application idea from Oskar Dudycz. His version of the app is also on GitHub.

If you’re reading this post, it’s very likely you’re a software professional and you’re already familiar with online incident tracking applications — but hey, let’s build yet another one for a help desk company just because it’s a problem domain you’re likely (all too) familiar with!

Let's say that you're magically able to get your help desk business experts and stakeholders in a room (or virtual meeting) with the development team all at one time. Crazy, I know, but bear with me. Since you're all together, this is a fantastic opportunity to get the new system started with a very collaborative approach called Event Storming that works very well for both event sourcing and CQRS approaches.

The format is pretty simple. Go to any office supply company and get the typical pack of sticky notes like these:

Start by asking the business experts to describe events within the desired workflow that would lead to a change in state or a milestone in the business process. Try to record their terminology on orange sticky notes with a short name that generally implies a past event. In the case of an incident service, those events might be:

  • IncidentLogged
  • IncidentCategorised
  • IncidentResolved

This isn’t waterfall, so you can happily jump back and forth between steps here, but the next general step is to try to identify the actions or “commands” in the system that would cause each of our previously identified events. Jot these commands down on blue sticky notes with a short name in an imperative form like “LogIncident” or “CategoriseIncident”. Create some record of cause and effect by putting the blue sticky command notes just to the left of the orange sticky notes for the related events.

It's also helpful to organize the sticky notes roughly left to right to give some context to which commands or events happen in what order (which I did not do in my crude diagram below).

Even though my graphic below doesn’t do this, it’s perfectly possible for the relationship between commands and events to be one command to many events.

In the course of executing these newly discovered commands, we can start to call out possible “views” of the raw event data that we might need as necessary context. We’ll record these views with a short descriptive name on green sticky notes.

After some time, our wall should be covered in sticky notes in a manner something like this:

Right off the bat, we're learning what the DDD folks call the ubiquitous language for our business domain that can be shared between us technical folks and the business domain experts. Moreover, as we'll see in later posts, these names from what is ostensibly a requirements gathering session can translate directly into actual code artifact names.

My experience with Event Storming has been very positive, but I’d guess that it depends on how cooperative and collaborative your business partners are with this format. I found it to be a great format to talk through a system’s requirements in a way that provides actual traceability to code implementation details. In other words, when you talk with the business folks and speak in terms of an IncidentLogged, there will actually be a type in your codebase like this:

public record IncidentLogged(
    Guid CustomerId,
    Contact Contact,
    string Description,
    Guid LoggedBy
);

or LogIncident:

public record LogIncident(
    Guid CustomerId,
    Contact Contact,
    string Description
);

Help Desk API

Just for some context, I’m going to step through the creation of a web service with a handful of web service endpoints to create, read, or alter a help desk incident. In much later posts, I’ll talk about publishing internal events to take action asynchronously within the web service, and also to publish other events externally to completely different systems through Rabbit MQ queues.

The “final” code is the CritterStackHelpDesk application under my GitHub profile.

I'm not going to go near a user interface for now, but someone is working on putting a user interface on top of this service, with the service acting as a Backend for Frontend (BFF).

Summary

Event Storming can be a very effective technique for collaboratively discovering system requirements and understanding the system’s workflow with your domain experts. As developers and testers, it can also help create traceability between the requirements and the actual code artifacts without manually intensive traceability matrix documentation.

Next time…

In the next post in this new series, I’ll introduce the event sourcing functionality with just Marten completely outside of any application just to get comfortable with Marten mechanics before we go on.

Tell Us What You Want in Marten and Wolverine!

I can’t prove this conclusively, but the cure for getting the “tell me what you want, what you really, really want” out of your head is probably to go fill out the linked survey on Marten and Wolverine!

As you may know, JasperFx Software is now up and able to offer formal support contracts to help users be successful with the open source Marten and Wolverine tools (the “Critter Stack”). As the next step in our nascent plan to create a sustainable business model around the Critter Stack tools, we’d really like to elicit some feedback from our users or potential users about what features your team would be most interested in next. And to be clear, we’re specifically thinking about complex features that would be part of a paid add on model to the Critter Stack for advanced usages.

We’d love to get any feedback for us you might have in this Google Form.

Some existing ideas for paid features include:

  • A module for GDPR compliance
  • A dead letter queue browser application for Wolverine that would also help you selectively replay messages
  • The ability to dynamically add new tenant databases for Marten + Wolverine at runtime with no downtime
  • Improved asynchronous projection support in Marten, including better throughput overall and the ability to load balance the projections across running nodes
  • Zero downtime projection rebuilds with asynchronous Marten event store projections
  • The capability to do blue/green deployments with Marten event store projections
  • A virtual actor capability for Wolverine
  • A management and monitoring user interface for Wolverine + Marten that would give you insights about running nodes, active event store projections, messaging endpoint health, node assignments
  • DevOps recipes for the Critter Stack?

Publishing Events from Marten through Wolverine

Aren’t martens really cute?

By the way, JasperFx Software is up and running for formal support plans for both Marten and Wolverine!

Wolverine 1.11.0 was released this week (here’s the release notes) with a small improvement to its ability to subscribe to Marten events captured within Wolverine message handlers or HTTP endpoints. Since Wolverine 1.0, users have been able to opt into having Marten forward events captured within Wolverine handlers to any known Wolverine subscribers for that event with the EventForwardingToWolverine() option.

The latest Wolverine release adds the ability to automatically publish an event as a different message using the event data and its metadata as shown in the sample code below:

builder.Services.AddMarten(opts =>
{
    var connectionString = builder.Configuration.GetConnectionString("marten");
    opts.Connection(connectionString);
})
    // Adds Wolverine transactional middleware for Marten
    // and the Wolverine transactional outbox support as well
    .IntegrateWithWolverine()
    
    .EventForwardingToWolverine(opts =>
    {
        // Setting up a little transformation of an event with its event metadata to an internal command message
        opts.SubscribeToEvent<IncidentCategorised>().TransformedTo(e => new TryAssignPriority
        {
            IncidentId = e.StreamId,
            UserId = e.Data.UserId
        });
    });

This isn't a general purpose outbox; rather, it publishes captured events according to normal Wolverine publishing rules at the moment the Marten transaction is committed.

So in this sample handler:

public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();
    
    // This Wolverine handler appends an IncidentCategorised event to the event stream
    // of the related IncidentDetails aggregate referred to by the CategoriseIncident.Id
    // value from the command
    [AggregateHandler]
    public static IEnumerable<object> Handle(CategoriseIncident command, IncidentDetails existing)
    {
        if (existing.Category != command.Category)
        {
            // Wolverine will transform this event to a TryAssignPriority message
            // on the successful commit of the transaction wrapping this handler call
            yield return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }
    }
}

To try to close the loop, when Wolverine handles the CategoriseIncident message, it will:

  1. Potentially append an IncidentCategorised event to the referenced event stream
  2. Try to transform that event to a new TryAssignPriority message
  3. Commit the changes queued up to the underlying Marten IDocumentSession unit of work
  4. If the transaction is successful, publish the TryAssignPriority message — which in this sample case would be routed to a local queue within the Wolverine application and handled in a different thread later
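The TryAssignPriority message referenced above is never shown in this post. A plausible sketch of the message type and a conventional Wolverine handler for it (the names, members, and handler body here are assumptions on my part) might be:

```csharp
// Hypothetical sketch of the internal command message built from the
// IncidentCategorised event and its metadata.
public class TryAssignPriority
{
    public Guid IncidentId { get; set; }
    public Guid UserId { get; set; }
}

// Discovered by Wolverine through the "Handler" suffix + Handle() conventions
public static class TryAssignPriorityHandler
{
    public static void Handle(TryAssignPriority command)
    {
        // The real logic would evaluate the incident and possibly append
        // a priority-assignment event; omitted here
    }
}
```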

That’s a lot of text and gibberish, but all I’m trying to say is that you can make Wolverine reliably react to events captured in the Marten event store.

Critter Stack at .NET Conf 2023

JasperFx Software will be shortly announcing the availability of official support plans for Marten, Wolverine, and other JasperFx open source tools. We’re working hard to build a sustainable ecosystem around these tools so that companies can feel confident in making a technical bet on these high productivity tools for .NET server side development.

I’ll be presenting a short talk at .NET Conf 2023 entitled “CQRS with Event Sourcing using the Critter Stack.” It’s going to be a quick dive into how to use Marten and Wolverine to build a very small system utilizing a CQRS Architecture with Event Sourcing as the persistence strategy.

Hopefully, I’ll be showing off:

  • How Wolverine’s runtime architecture is significantly different than other .NET tools and why its approach leads to much lower code ceremony and potentially higher performance
  • Marten and PostgreSQL providing a great local developer story both in development and in integration testing
  • How the Wolverine + Marten integration makes your domain logic easily unit testable without resorting to complicated Clean/Onion/Hexagonal Architectures
  • Wolverine’s built in integration testing support that you’ll wish you had today in other .NET messaging tools
  • The built in tooling for unraveling Wolverine or Marten’s “conventional magic”

Here’s the talk abstract:

CQRS with Event Sourcing using the “Critter Stack”

Do you have a system where you think would be a good fit for a CQRS architecture that also uses Event Sourcing for at least part of its persistence strategy? Are you intimidated by the potential complexity of that kind of approach? Fear not, using a combination of the PostgreSQL-backed Marten library for event sourcing and its newer friend Wolverine for command handling and asynchronous messaging, I’ll show you how you can quickly get started with both CQRS and Event Sourcing. Once we get past the quick start, I’ll show you how the Critter Stack’s unique approach to the “Decider” pattern will help you create robust command handlers with very little code ceremony while still enjoying easy testability. Moving beyond basic command handling, I’ll show you how to reliably subscribe to and publish the events or other messages created by your command handlers through Wolverine’s durable outbox and direct subscriptions to Marten’s event storage.

Low Ceremony Web Service Development with the Critter Stack

You can’t really get Midjourney to create an image of a wolverine without veering into trademark violations, so look at the weasel and marten up there working on a website application together!

Wolverine 1.10 was released earlier this week (here’s the release notes), and one of the big additions this time around was some new recipes for combining Marten and Wolverine for very low ceremony web service development.

Before I show the new functionality, let’s imagine that you have a simple web service for invoicing where you’re using Marten as a document database for persistence. You might have a very simplistic web service for exposing a single Invoice like this (and yes, I know you’d probably want to do some kind of transformation to a view model but put that aside for a moment):

    [WolverineGet("/invoices/longhand/id")]
    [ProducesResponseType(404)] 
    [ProducesResponseType(200, Type = typeof(Invoice))]
    public static async Task<IResult> GetInvoice(
        Guid id, 
        IQuerySession session, 
        CancellationToken cancellationToken)
    {
        var invoice = await session.LoadAsync<Invoice>(id, cancellationToken);
        if (invoice == null) return Results.NotFound();

        return Results.Ok(invoice);
    }

It's not that much code, but there's still some repetitive boilerplate, especially if you're going to be completist about your OpenAPI metadata. The design and usability aesthetic of Wolverine is to reduce code ceremony as much as possible without sacrificing performance or observability, so let's look at a newer alternative.

Next, I’m going to install the new WolverineFx.Http.Marten Nuget to our web service project, and write this new endpoint using the [Document] attribute:

    [WolverineGet("/invoices/{id}")]
    public static Invoice Get([Document] Invoice invoice)
    {
        return invoice;
    }

The code up above is an exact functional equivalent of the first code sample, and even produces the exact same OpenAPI metadata (or at least tries to; OpenAPI has been a huge bugaboo for Wolverine because so much of the support inside of ASP.Net Core is hard wired for MVC Core). Notice, though, how much less you have to do. The method is synchronous, so that's a little less ceremony. It's a pure function, so even if there were code to transform the invoice data to an API specific shape, you could unit test this method without any infrastructure involved or using something like Alba. Heck, Wolverine itself is handling the "return 404 if the Invoice is not found" behavior, as shown in this test from Wolverine's own codebase (using Alba):

    [Fact]
    public async Task returns_404_on_id_miss()
    {
        // Using Alba to run a request for a non-existent
        // Invoice document
        await Scenario(x =>
        {
            x.Get.Url("/invoices/" + Guid.NewGuid());
            x.StatusCodeShouldBe(404);
        });
    }

Simple enough, but now let’s look at a new HTTP-centric mechanism for the Wolverine + Marten “Aggregate Handler” workflow for writing CQRS “Write” handlers using Marten’s event sourcing. You might want to glance at the previous link for more context before proceeding, or refer back to it later at least.

The main change here is that folks asked to be able to provide the aggregate identity through a route parameter, and to have Wolverine enforce a 404 response code if the aggregate does not exist.

Using an “Order Management” problem domain, here’s what an endpoint method to ship an existing order could look like:

    [WolverinePost("/orders/{orderId}/ship2"), EmptyResponse]
    // The OrderShipped return value is treated as an event being appended
    // to a Marten event stream instead of as the HTTP response body
    // because of the presence of the [EmptyResponse] attribute
    public static OrderShipped Ship(ShipOrder2 command, [Aggregate] Order order)
    {
        if (order.HasShipped) 
            throw new InvalidOperationException("This has already shipped!");
        
        return new OrderShipped();
    }

Notice the new [Aggregate] attribute on the Order argument. At runtime, this code is going to:

  1. Take the “orderId” route argument, parse that to a Guid (because that’s the identity type for an Order)
  2. Use that identity — and any version information on the request body or a "version" route argument — with Marten's FetchForWriting() mechanism to both load the latest version of the Order aggregate and opt into optimistic concurrency protections against that event stream.
  3. Return a 404 response if the aggregate does not already exist
  4. Pass the Order aggregate into the actual endpoint method
  5. Take the OrderShipped event returned from the method, and apply that to the Marten event stream for the order
  6. Commit the Marten unit of work

As always, the goal of this workflow is to turn Wolverine endpoint methods into low ceremony, synchronous pure functions that are easily testable with unit tests.
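Because the endpoint method is a synchronous pure function, the business rule really can be tested with no infrastructure at all. Here's a sketch using minimal stand-ins for the types; the Wolverine attributes and runtime are deliberately left out, and these skeletal type definitions are my own simplifications:

```csharp
// Minimal stand-ins so the pure function can be exercised in isolation
public class Order
{
    public bool HasShipped { get; set; }
}

public class ShipOrder2 { }

public class OrderShipped { }

public static class ShipOrderEndpoint
{
    // Same body as the Wolverine endpoint method, minus the attributes
    public static OrderShipped Ship(ShipOrder2 command, Order order)
    {
        if (order.HasShipped)
            throw new InvalidOperationException("This has already shipped!");

        return new OrderShipped();
    }
}
```

A unit test just calls the method directly: shipping an unshipped Order returns an OrderShipped event, and shipping an already shipped Order throws InvalidOperationException, with no Marten or HTTP machinery involved.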

Wolverine and Serverless

I’ve recently fielded some user problems with Wolverine’s transactional inbox/outbox subsystem going absolutely haywire. After asking a plethora of questions, I finally realized that the underlying issue was using Wolverine within AWS Lambda or Azure Function functions where the process is short lived.

Wolverine has heretofore been optimized for running on multiple, long lived process nodes because that's typical for asynchronous messaging architectures. By never getting a chance to cleanly shut down Wolverine's background processing, users were accumulating a ton of junk data in Wolverine's durable message tables that was causing all kinds of aggravation.

To nip that problem in the bud, Wolverine 1.10 introduced a new concept of durability modes to allow you to optimize Wolverine for different types of basic usage:

public enum DurabilityMode
{
    /// <summary>
    /// The durability agent will be optimized to run in a single node. This is very useful
    /// for local development where you may be frequently stopping and restarting the service
    ///
    /// All known agents will automatically start on the local node. The recovered inbox/outbox
    /// messages will start functioning immediately
    /// </summary>
    Solo,
    
    /// <summary>
    /// Normal mode that assumes that Wolverine is running on multiple load balanced nodes
    /// with messaging active
    /// </summary>
    Balanced,
    
    /// <summary>
    /// Disables all message persistence to optimize Wolverine for usage within serverless functions
    /// like AWS Lambda or Azure Functions. Requires that all endpoints be inline
    /// </summary>
    Serverless,
    
    /// <summary>
    /// Optimizes Wolverine for usage as strictly a mediator tool. This completely disables all node
    /// persistence including the inbox and outbox 
    /// </summary>
    MediatorOnly
}

Focusing on just the serverless scenario, you want to turn off all of Wolverine’s durable node tracking, leader election, agent assignment, and long running background processes of all types — and now you can do that just fine like so:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Services.AddMarten("some connection string")

            // This adds quite a bit of middleware for 
            // Marten
            .IntegrateWithWolverine();
        
        // You want this maybe!
        opts.Policies.AutoApplyTransactions();
        
        
        // But wait! Optimize Wolverine for usage within Serverless
        // and turn off the heavy duty, background processes
        // for the transactional inbox/outbox
        opts.Durability.Mode = DurabilityMode.Serverless;
    }).StartAsync();

There’s also some further documentation online about optimizing Wolverine within serverless functions.

Wolverine now does Event Streaming with Kafka or MQTT

As part of an ongoing JasperFx client engagement, Wolverine (1.9.0) just added some new options for event streaming from Wolverine applications. The immediate need was to support messaging with the MQTT protocol for usage inside of a new system in the “Internet of Things” problem space. Knowing that a different JasperFx client is going to need to support event subscriptions with Apache Kafka, it was also convenient to finally add the much requested option for Kafka support within Wolverine while the similar MQTT work was still fresh in my mind.

While the new MQTT transport option is documented, the Kafka transport documentation is still on the way, so I'm going to focus on Kafka first.

To get started with Kafka within a Wolverine application, add the WolverineFx.Kafka Nuget to your project. Next, add the Kafka transport option, any messaging subscription rules, and the topics you want your application to listen to with code like this:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseKafka("localhost:29092");

        // Just publish all messages to Kafka topics
        // based on the message type (or message attributes)
        // This will get fancier in the near future
        opts.PublishAllMessages().ToKafkaTopics();
        
        // Or explicitly make subscription rules
        opts.PublishMessage<ColorMessage>()
            .ToKafkaTopic("colors");
        
        // Listen to topics
        opts.ListenToKafkaTopic("red")
            .ProcessInline();

        opts.ListenToKafkaTopic("green")
            .BufferedInMemory();
        

        // This will direct Wolverine to try to ensure that all
        // referenced Kafka topics exist at application start up 
        // time
        opts.Services.AddResourceSetupOnStartup();
    }).StartAsync();
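The ColorMessage type referenced in the subscription rules above isn't shown in this post. A minimal sketch of the message and the kind of conventional handler Wolverine would discover for it (both hypothetical) might be:

```csharp
// Hypothetical message type routed to the "colors" topic above
public class ColorMessage
{
    public string Color { get; set; } = string.Empty;
}

// Discovered by Wolverine's "Handler" suffix + Handle() conventions;
// invoked when a ColorMessage arrives on a listened-to topic
public static class ColorMessageHandler
{
    public static void Handle(ColorMessage message)
    {
        Console.WriteLine($"Got a {message.Color} message");
    }
}
```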

I’m very sure that these two transports (and shortly a third option for Apache Pulsar) will need to be enhanced when they meet real users and unexpected use cases, but I think there’s a solid foundation ready to go.

In the near future, JasperFx Software will be ready to start offering official support contracts and relationships for both Marten and Wolverine. In the slightly longer term, we’re hoping to create some paid add on products (with support!) for Wolverine for “big, serious enterprise usage.” One of the first use cases I’d like us to tackle with that initiative will be a more robust event subscription capability from Marten’s event sourcing through Wolverine’s messaging capabilities. Adding options especially for Kafka messaging and also for MQTT, Pulsar, and maybe SignalR is an obvious foundational piece to make that a reality.