Building a Critter Stack Application: Asynchronous Processing with Wolverine

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to help you succeed with your software development projects on a consulting basis whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine (this post)
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

As we continue to add new functionality to our incident tracking, help desk system, we have been using Marten for persistence and Wolverine for command execution within MVC Core controllers (with cameos from Alba for testing support and Oakton for command line utilities).

In the workflow we’ve built out so far, we’ve created a command called CategoriseIncident that for the moment is only sent to the system through HTTP calls from a user interface.

Let’s say that in our system we may have some domain logic rules based on customer data that we could use to try to prioritize an incident automatically once the incident is categorized. To that end, let’s create a new command named `TryAssignPriority` like this:

public class TryAssignPriority
{
    public Guid IncidentId { get; set; }
}

We’d like to kick off this work any time an incident is categorized, but we might not necessarily want to do that work within the scope of the web request that’s capturing the CategoriseIncident command. Partly that’s a scalability concern, offloading work from the web server; partly it’s about keeping the user interface responsive by not making it wait on slower web service responses; but mostly it’s because I want an excuse to introduce Wolverine’s ability to process work asynchronously through local, in-memory queues.

Most of the code in this post is an intermediate form that I’m using just to introduce concepts in the simplest way I can think of. In later posts I’ll show more idiomatic Wolverine ways to do things to arrive at the final version that is in GitHub.

Alright, now that we’ve got our new command class above, let’s publish that locally through Wolverine by breaking into our earlier CategoriseIncidentHandler that I’ll show here in a “before” state:

public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();
    
    [AggregateHandler]
    public static IEnumerable<object> Handle(CategoriseIncident command, IncidentDetails existing)
    {
        if (existing.Category != command.Category)
        {
            yield return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }
    }
}

In this next version, I’m going to add a single call to Wolverine’s main IMessageBus entry point to publish the new TryAssignPriority command message:

public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();
    
    [AggregateHandler]
    // The object? as return value will be interpreted
    // by Wolverine as appending one or zero events
    public static async Task<object?> Handle(
        CategoriseIncident command, 
        IncidentDetails existing,
        IMessageBus bus)
    {
        if (existing.Category != command.Category)
        {
            // Send the message to any and all subscribers to this message
            await bus.PublishAsync(new TryAssignPriority { IncidentId = existing.Id });
            return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }

        // Wolverine will interpret this as "do no work"
        return null;
    }
}

I didn’t do anything out of order here. We haven’t built a message handler for TryAssignPriority or done anything to register subscribers, but that can come later because the PublishAsync() call up above will quietly do nothing if there are no known subscribers for the message.
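
As an aside, IMessageBus also has a SendAsync() method with stricter semantics: unlike PublishAsync(), it expects at least one known route for the message and will throw when there isn’t one. Here’s a minimal sketch of the difference, reusing our TryAssignPriority command:

public static class PublishVersusSend
{
    public static async Task Demonstrate(IMessageBus bus, Guid incidentId)
    {
        // "Publish" means "send this to anyone who cares": this quietly
        // does nothing if there are no subscribers for TryAssignPriority
        await bus.PublishAsync(new TryAssignPriority { IncidentId = incidentId });

        // "Send" means "this message must go somewhere": this throws
        // if Wolverine has no known route for TryAssignPriority
        await bus.SendAsync(new TryAssignPriority { IncidentId = incidentId });
    }
}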

For the asynchronous messaging veterans out there, I will discuss Wolverine’s support for a transactional outbox in a later post. For now, just know that there’s at the very least an in-memory outbox around any message handler that will not send out any pending published messages until after the original message is successfully handled. If you’re not familiar with the “transactional outbox” pattern, please come back to read the follow up post on that later, because you absolutely need to understand it to use asynchronous messaging infrastructure like Wolverine.
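
To make the in-memory outbox behavior concrete, here’s a minimal sketch. The PrepareShipment and NotifyWarehouse message types are hypothetical and purely for illustration:

public record PrepareShipment(Guid OrderId);
public record NotifyWarehouse(Guid OrderId);

public static class PrepareShipmentHandler
{
    public static async Task Handle(PrepareShipment command, IMessageBus bus)
    {
        // This call only stages the outgoing message in the outbox;
        // nothing is delivered to subscribers yet
        await bus.PublishAsync(new NotifyWarehouse(command.OrderId));

        // If this throws, the handler fails and the staged
        // NotifyWarehouse message above is never actually sent
        if (command.OrderId == Guid.Empty)
        {
            throw new InvalidOperationException("Missing order id");
        }
    }
}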

Next, let’s just add a skeleton message handler for our TryAssignPriority command message in the root API project:

public static class TryAssignPriorityHandler
{
    public static void Handle(TryAssignPriority command)
    {
        Console.WriteLine("Hey, somebody wants me to prioritize incident " + command.IncidentId);
    }
}

Switching to the command line (you may need to have the PostgreSQL database running for this next thing to work #sadtrombone), I’m going to call dotnet run -- describe to preview my help desk API a little bit.

Under the section of the textual output with the header “Wolverine Message Routing”, you’ll see the message routing tree for Wolverine’s known message types:

┌─────────────────────────────────┬──────────────────────────────────────────┬──────────────────┐
│ Message Type                    │ Destination                              │ Content Type     │
├─────────────────────────────────┼──────────────────────────────────────────┼──────────────────┤
│ Helpdesk.Api.CategoriseIncident │ local://helpdesk.api.categoriseincident/ │ application/json │
│ Helpdesk.Api.TryAssignPriority  │ local://helpdesk.api.tryassignpriority/  │ application/json │
└─────────────────────────────────┴──────────────────────────────────────────┴──────────────────┘

As you can hopefully see in that table up above, just by the fact that Wolverine “knows” there is a handler in the local application for the TryAssignPriority message type, it’s going to route messages of that type to a local queue where they will be executed later on a separate thread.

Don’t worry, this conventional routing, the parallelization settings, and just about anything you can think of is configurable, but let’s mostly stay with defaults for right now.

Switching to the Wolverine configuration in the Program file, here’s a little taste of some of the ways we could control the exact parameters of the asynchronous processing for this local, in memory queue:

builder.Host.UseWolverine(opts =>
{
    // more configuration...

    // Adding a single Rabbit MQ messaging rule
    opts.PublishMessage<RingAllTheAlarms>()
        .ToRabbitExchange("notifications");

    opts.LocalQueueFor<TryAssignPriority>()
        // By default, local queues allow for parallel processing with a maximum
        // parallel count equal to the number of processors on the executing
        // machine, but you can override the queue to be sequential and single file
        .Sequential()

        // Or add more to the maximum parallel count!
        .MaximumParallelMessages(10);
    
    // Or if so desired, you can route specific messages to 
    // specific local queues when ordering is important
    opts.Policies.DisableConventionalLocalRouting();
    opts.Publish(x =>
    {
        x.Message<TryAssignPriority>();
        x.Message<CategoriseIncident>();

        x.ToLocalQueue("commands").Sequential();
    });
});

Summary and What’s Next

Through its local queues, Wolverine has very strong support for managing asynchronous work within a local process. All of Wolverine’s message handling capabilities are usable within these local queues, and you have complete control over the parallelization of the messages being handled in them.

This functionality does raise a lot of questions that I will try to answer in subsequent posts in this series:

  • For the sake of system consistency, we absolutely have to talk about Wolverine’s transactional outbox support
  • How we can use Wolverine’s integration testing support to test our system even when it is spawning additional messages that may be handled asynchronously
  • Wolverine’s ability to automatically forward captured events in Marten to message handlers for side effects
  • How to utilize Wolverine’s “special sauce” to craft message handlers as pure functions that are more easily unit tested than what we have so far
  • Wolverine’s built in Open Telemetry support to trace the asynchronous work end to end
  • Wolverine’s error handling policies to make our system as resilient as possible

Thanks for reading! I’ve been pleasantly surprised how well this series has been received so far. I think this will be the last entry until after Christmas, but I think I will write at least 7-8 more just to keep introducing bits of Critter Stack capabilities in small bites. In the meantime, Merry Christmas and Happy Holidays to you all!

Building a Critter Stack Application: Marten as Document Database

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to help you succeed with your software development projects on a consulting basis whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database (this post)
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

So far, we’ve been completely focused on using Marten as an Event Store. While the Marten team is very committed to the event sourcing feature set, it’s pretty likely that you’ll have other data persistence needs in your system that won’t fit the event sourcing paradigm. Not to worry though, because Marten also has a very robust “PostgreSQL as Document Database” feature set that’s perfect for low friction data persistence outside of the event storage. We’ve even used it in earlier posts, as Marten projections utilize the document database features when they run Inline or Async (i.e., not Live).

Since we’ve already got Marten integrated into our help desk application at this point, let’s just start with a document to represent customers:

public class Customer
{
    public Guid Id { get; set; }

    // We'll use this later for some "logic" about how incidents
    // can be automatically prioritized
    public Dictionary<IncidentCategory, IncidentPriority> Priorities { get; set; }
        = new();
    
    public string? Region { get; set; }
    
    public ContractDuration Duration { get; set; } 
}

public record ContractDuration(DateOnly Start, DateOnly End);

To be honest, I’m guessing at what a Customer might involve in the end, but it’s okay that I don’t know that upfront per se because as we’ll see soon, Marten makes it very easy to evolve your persisted documents.

Having built the integration test harness for our application in the last post, let’s drop right into an integration test that persists a new Customer document object, and reloads a copy from the persisted data:

public class using_customer_document : IntegrationContext
{
    public using_customer_document(AppFixture fixture) : base(fixture)
    {
    }

    [Fact]
    public async Task persist_and_load_customer_data()
    {
        var customer = new Customer
        {
            Duration = new ContractDuration(new DateOnly(2023, 12, 1), new DateOnly(2024, 12, 1)),
            Region = "West Coast",
            Priorities = new Dictionary<IncidentCategory, IncidentPriority>
            {
                { IncidentCategory.Database, IncidentPriority.High }
            }
        };
        
        // As a convenience just because you'll use it so often in tests,
        // I made a property named "Store" on the base class for quick access to
        // the DocumentStore for the application
        // ALWAYS remember to dispose any sessions you open in tests!
        await using var session = Store.LightweightSession();
        
        // Tell Marten to save the new document
        session.Store(customer);

        // commit any pending changes
        await session.SaveChangesAsync();

        // Marten is assigning an Id for you when one doesn't already
        // exist, so that's where that value comes from
        var copy = await session.LoadAsync<Customer>(customer.Id);
        
        // Just proving to you that it's not the same object
        copy.ShouldNotBeSameAs(customer);
        
        copy.Duration.ShouldBe(customer.Duration);
    }
}

As long as the configured database for our help desk API is available, the test above will happily pass. I’d like to draw your attention to a couple things about that test above:

  • Notice that I didn’t have to make any changes to our application’s AddMarten() configuration in the Program file first because Marten is able to create storage for the new Customer document type on the fly when it first encounters it with its default settings
  • Marten is able to infer that the Id property of the new Customer type is the identity (that can be overridden), and when you add a new Customer document to the session that has an empty Guid as its Id, Marten will quickly assign and set a sequential Guid value for its identity. If you’re wondering, Marten can do this even if the property is scoped as private.
  • The Store() method is effectively an “upsert” that takes advantage of PostgreSQL’s very efficient, built-in upsert syntax. Marten also supports explicit Insert and Update operations (sketched below), but Store is just an easy default
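
If you do want the stricter semantics, here’s a minimal sketch contrasting Insert() and Update() with Store(), reusing the Customer document and the Store property from our test harness:

await using var session = Store.LightweightSession();

var customer = new Customer { Region = "Midwest" };

// Insert() will fail at SaveChangesAsync() if a document
// with the same identity already exists
session.Insert(customer);
await session.SaveChangesAsync();

// Update() will fail if the document does *not* already exist
customer.Region = "Great Plains";
session.Update(customer);
await session.SaveChangesAsync();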

Behind the scenes, Marten is just serializing our document to JSON and storing that data in a PostgreSQL JSONB column type that will allow for efficient querying within the JSON body later (if you’re immediately asking “why isn’t this thing supporting Sql Server?!?”, it’s because only PostgreSQL has the JSONB type). If your document type can be round-tripped by either the venerable Newtonsoft.Json library or the newer System.Text.Json library, that document type can be persisted by Marten with zero explicit mapping.
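
As a quick sketch of that serializer choice, Marten lets you opt into System.Text.Json from the AddMarten() configuration (Newtonsoft.Json is the default):

builder.Services.AddMarten(opts =>
{
    // other configuration...

    // Opt into System.Text.Json instead of the default Newtonsoft.Json
    opts.UseSystemTextJsonForSerialization();
});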

In many cases, Marten’s approach to object persistence can lead to far less friction and boilerplate code than the equivalent functionality using EF Core, the typical .NET developer tool of choice. Moreover, using Marten requires far fewer database migrations as you change and evolve your document structure, giving developers much more ability to iterate on the shape of their persisted types than an ORM + relational database combination allows.

And of course, this is .NET, so Marten does come with LINQ support, so we can do queries like this:

        var results = await session.Query<Customer>()
            .Where(x => x.Region == "West Coast")
            .OrderByDescending(x => x.Duration.End)
            .ToListAsync();

As you’ll already know if you happen to follow me on Mastodon, we’re hopefully nearing the end of some very substantial improvements to the LINQ support for the forthcoming Marten v7 release.

While the document database feature set in Marten is pretty deep, the last thing I want to show in this post is that yes, you can create indexes within the JSON body for faster querying as needed. This time, I am going back to the AddMarten() configuration in the Program file to add a little bit of code that indexes the Customer document on its Region field:

builder.Services.AddMarten(opts =>
{
    // other configuration...

    // This will create a btree index within the JSONB data
    opts.Schema.For<Customer>().Index(x => x.Region);
});
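
And if a computed index within the JSONB data isn’t fast enough for a hot query path, Marten can alternatively copy a member out to its own dedicated, indexed column with a “duplicated field.” Here’s a sketch of the same Region search done that way:

builder.Services.AddMarten(opts =>
{
    // other configuration...

    // Copies Region out to its own indexed column on the
    // underlying document table
    opts.Schema.For<Customer>().Duplicate(x => x.Region);
});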

Summary and What’s Next

Once upon a time, Marten started with a pressing need to have a reliable, ACID-compliant document database feature set, and we originally chose PostgreSQL because of its unique JSON feature set. Almost on a lark, I added a nascent event sourcing capability before the original Marten 1.0 release. To my surprise, the event sourcing feature set is the main driver of Marten adoption by far, but Marten still has its original feature set to make the rock solid PostgreSQL database engine function as a document database for .NET developers.

Even in a system using event sourcing, there’s almost always some kind of relatively static reference data that’s better suited for Marten’s document database feature set or even going back to using PostgreSQL as the outstanding relational database engine that it is.

In the next post, now that we also know how to store and retrieve customer documents with Marten, we’re going to introduce Wolverine’s “compound handler” capability and see how that can help us factor our code into being very testable.

Building a Critter Stack Application: Integration Testing Harness

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change.

The older parts of the JasperFx / Critter Stack projects are named after itty bitty small towns in SW Missouri, including Alba.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness (this post)
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

Before I go on with anything else in this series, I think we should establish some automated testing infrastructure for our incident tracking, help desk service. While we’re absolutely going to talk about how to structure code with Wolverine to make isolated unit testing as easy as possible for our domain logic, there are some elements of your system’s behavior that are best tested with automated integration tests that use the system’s infrastructure.

In this post I’m going to show you how I like to set up an integration testing harness for a “Critter Stack” service. I’m going to use xUnit.Net in this post, and while the mechanics would be a little different, I think the basic concepts should be easily transferable to other testing libraries like NUnit or MSTest. I’m also going to bring in the Alba library that we’ll use for testing HTTP calls through our system in memory, but in this first step, all you need to understand is that Alba is helping to set up the system under test in our testing harness.

Heads up a little bit: I’m skipping to the “finished” state of the help desk API code in this post, so there are some Marten and Wolverine concepts sneaking in that haven’t been introduced yet.

First, let’s start our new testing project with:

dotnet new xunit

Then add some additional Nuget references:

dotnet add package Shouldly
dotnet add package Alba

That gives us a skeleton of the testing project. Before going on, we need to add a project reference from our new testing project to the entry point project of our help desk API. As we are worried about integration testing right now, we want the testing project to be able to start the system under test by calling the normal Program.Main() entry point, so that we’re running the application the way the system is normally configured — give or take a few overrides.

Let’s stop and talk about this a little bit, because I think this is an important point. Integration tests are more “valid” (i.e., less prone to false positives or false negatives) the more closely they reflect the actual system. I don’t want completely separate bootstrapping for the test harness that may or may not reflect the application’s production bootstrapping (don’t blow that point off; I’ve seen countless teams do partial IoC configuration for testing that can vary quite a bit from the application’s configuration).

So if you’ll accept my argument that we should be bootstrapping the system under test with its own Program.Main() entry point, our next step is to add this code to the main service to enable the test project to access that entry point:

using System.Runtime.CompilerServices;

// You have to do this in order to reference the Program
// entry point in the test harness
[assembly:InternalsVisibleTo("Helpdesk.Api.Tests")]

Switching finally to our testing project, I like to create a class I usually call AppFixture that manages the lifetime of the system under test running in our test project like so:

public class AppFixture : IAsyncLifetime
{
    public IAlbaHost Host { get; private set; }

    // This is a one time initialization of the
    // system under test before the first usage
    public async Task InitializeAsync()
    {
        // Sorry folks, but this is absolutely necessary if you 
        // use Oakton for command line processing and want to 
        // use WebApplicationFactory and/or Alba for integration testing
        OaktonEnvironment.AutoStartHost = true;

        // This is bootstrapping the actual application using
        // its implied Program.Main() set up
        // This is using a library named "Alba". See https://jasperfx.github.io/alba for more information
        Host = await AlbaHost.For<Program>(x =>
        {
            x.ConfigureServices(services =>
            {
                // We'll be using Rabbit MQ messaging later...
                services.DisableAllExternalWolverineTransports();
                
                // We're going to establish some baseline data
                // for testing
                services.InitializeMartenWith<BaselineData>();
            });
        }, new AuthenticationStub());
    }

    public Task DisposeAsync()
    {
        if (Host != null)
        {
            return Host.DisposeAsync().AsTask();
        }

        return Task.CompletedTask;
    }
}

A few notes about the code above:

  • Alba is using the WebApplicationFactory under the covers to bootstrap our help desk API service using the in memory TestServer in place of Kestrel. WebApplicationFactory does allow us to modify the IoC service registrations for our system and override parts of the system’s normal configuration
  • In this case, I’m telling Wolverine to effectively stub out all external transports. In later posts we’ll use Rabbit MQ, for example, to publish messages to an external process, but in this test harness we’re going to turn that off and simply have Wolverine “catch” the outgoing messages in our tests. See Wolverine’s test automation support documentation for more information about this.
  • More on this later, but Marten has a built in facility to establish baseline data sets that can be used in test automation to effectively rewind the database to an initial state with one command
  • The DisposeAsync() method is very important. If you want to make your integration tests be repeatable and run smoothly as you iterate, you need the tests to clean up after themselves and not leave locks on resources like ports or files that could stop the next test run from functioning correctly
  • Pay attention to the `OaktonEnvironment.AutoStartHost = true;` call, that’s 100% necessary if your application is using Oakton for command parsing. Sorry.
  • As will be inevitably necessary, I’m using Alba’s facility for stubbing out web authentication, which allows us to sidestep pesky authentication infrastructure in functional testing while also happily letting us pass along user claims as test inputs in individual tests
  • Bootstrapping the IHost for your application can be expensive, so I prefer to share that host across tests whenever possible, and I generally rely on having individual tests establish their inputs at the beginning of each test. See the xUnit.Net documentation on sharing fixtures between tests for more context about the xUnit mechanics; the collection definition that wires this up is sketched right after this list.
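
For completeness, here’s the standard xUnit.Net collection definition that shares a single AppFixture across every test class marked with [Collection("integration")]. The class body is intentionally empty:

[CollectionDefinition("integration")]
public class IntegrationCollection : ICollectionFixture<AppFixture>
{
}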

For the Marten baseline data, right now I’m just making sure there’s at least one valid Customer document that we’ll need later:

public class BaselineData : IInitialData
{
    public static Guid Customer1Id { get; } = Guid.NewGuid();
    
    public async Task Populate(IDocumentStore store, CancellationToken cancellation)
    {
        await using var session = store.LightweightSession();
        session.Store(new Customer
        {
            Id = Customer1Id,
            Region = "West Cost",
            Duration = new ContractDuration(DateOnly.FromDateTime(DateTime.Today.Subtract(100.Days())), DateOnly.FromDateTime(DateTime.Today.Add(100.Days())))
        });

        await session.SaveChangesAsync(cancellation);
    }
}

To simplify usage a little bit, I like to have a base class for integration tests that I usually call IntegrationContext:

[Collection("integration")]
public abstract class IntegrationContext : IAsyncLifetime
{
    private readonly AppFixture _fixture;

    protected IntegrationContext(AppFixture fixture)
    {
        _fixture = fixture;
    }
    
    // more....

    public IAlbaHost Host => _fixture.Host;

    public IDocumentStore Store => _fixture.Host.Services.GetRequiredService<IDocumentStore>();

    async Task IAsyncLifetime.InitializeAsync()
    {
        // Using Marten, wipe out all data and reset the state
        // back to exactly what we described in BaselineData
        await Store.Advanced.ResetAllData();
    }

    // This is required because of the IAsyncLifetime 
    // interface. Note that I do *not* tear down database
    // state after the test. That's purposeful
    public Task DisposeAsync()
    {
        return Task.CompletedTask;
    }

    // This is just delegating to Alba to run HTTP requests
    // end to end
    public async Task<IScenarioResult> Scenario(Action<Scenario> configure)
    {
        return await Host.Scenario(configure);
    }

    // This method allows us to make HTTP calls into our system
    // in memory with Alba, but do so within Wolverine's test support
    // for message tracking to both record outgoing messages and to ensure
    // that any cascaded work spawned by the initial command is completed
    // before passing control back to the calling test
    protected async Task<(ITrackedSession, IScenarioResult)> TrackedHttpCall(Action<Scenario> configuration)
    {
        IScenarioResult result = null;

        // The outer part is tying into Wolverine's test support
        // to "wait" for all detected message activity to complete
        var tracked = await Host.ExecuteAndWaitAsync(async () =>
        {
            // The inner part here is actually making an HTTP request
            // to the system under test with Alba
            result = await Host.Scenario(configuration);
        });

        return (tracked, result);
    }
}

The first thing I want to draw your attention to is the call to await Store.Advanced.ResetAllData(); in the InitializeAsync() method, which will be called before each of our integration tests executes. In my approach, I strongly prefer to reset the state of the database before each test in order to start from a known system state. I’m also assuming that each test will, if necessary, add additional state to the system’s Marten database. Philosophically, this is what I’ve long called “Self-Contained Tests.” I also think it’s important to have the tests leave the database state alone after a test run, so that if you are running tests one at a time you can use the leftover database state to help troubleshoot why a test might have failed.

Other folks will try to spin up a separate database (maybe with TestContainers) per test, or even a completely separate IHost per test, but I think the cost of doing it that way is just too slow. I’d rather reset the system between tests and not incur the cost of recycling database containers and/or the system’s IHost. This comes at the cost of forcing your test suite to run tests serially, but I also think that xUnit.Net is not the best possible tool for parallel test runs, so I’m not sure you lose out on anything there.

And now for an actual test. We have an HTTP endpoint in our system, built early on, that can process a LogIncident command and create a new event stream for the new Incident with a single IncidentLogged event. I’ve skipped ahead a little bit and added a requirement that we capture a user id from an expected Claim on the ClaimsPrincipal for the current request, which you’ll see reflected in the test below:

public class log_incident : IntegrationContext
{
    public log_incident(AppFixture fixture) : base(fixture)
    {
    }

    [Fact]
    public async Task create_a_new_incident()
    {
        // We'll need a user
        var user = new User(Guid.NewGuid());
        
        // Log a new incident by calling the HTTP
        // endpoint in our system
        var initial = await Scenario(x =>
        {
            var contact = new Contact(ContactChannel.Email);
            x.Post.Json(new LogIncident(BaselineData.Customer1Id, contact, "It's broken")).ToUrl("/api/incidents");
            x.StatusCodeShouldBe(201);
            
            x.WithClaim(new Claim("user-id", user.Id.ToString()));
        });

        var incidentId = initial.ReadAsJson<NewIncidentResponse>().IncidentId;

        using var session = Store.LightweightSession();
        var events = await session.Events.FetchStreamAsync(incidentId);
        var logged = events.First().ShouldBeOfType<IncidentLogged>();

        // This deserves more assertions, but you get the point...
        logged.CustomerId.ShouldBe(BaselineData.Customer1Id);
    }
}

Summary and What’s Next

The “Critter Stack” core team and our community care very deeply about effective testing, so we’ve invested from the very beginning in making integration testing as easy as possible with both Marten and Wolverine.

Alba is another little library from the JasperFx family that makes it easier to write integration tests at the HTTP layer, and it’s perfect for integration testing of your web services. I definitely find it advantageous to be able to quickly bootstrap a web service project and run tests completely in memory on demand. That’s a much easier and quicker feedback cycle than trying to deploy the service and write tests that remotely interact with the web service through HTTP. And I shouldn’t even have to mention how absurdly slow it is in comparison to try to test the same web service functionality through the actual user interface with something like Selenium.

From the Marten side of things, PostgreSQL has a pretty small Docker image size, so it’s pretty painless to spin up on development boxes. Especially contrasted with situations where development teams share a centralized development database (shudder, hope not many folks still do that), having an isolated database for each developer that they can also tear down and rebuild at will certainly helps make it a lot easier to succeed with automated integration testing.

I think that document databases in general are a lot easier to deal with in automated testing than a relational database with an ORM as the persistence tooling, as there’s much less friction in setting up database schemas or tearing down database state. Marten goes a step farther than most persistence tools by having built-in APIs to tear down database state or reset to baseline data sets in between tests.

We’ll dig deeper into Wolverine’s integration testing support later in this series with message handler testing, testing handlers that in turn spawn other messages, and dealing with external messaging in tests.

I think the next post is just going to be a quick survey of “Marten as Document Database” before I get back to Wolverine’s HTTP endpoint model.

Building a Critter Stack Application: Command Line Tools with Oakton

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to help you succeed with your software development projects on a consulting basis whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton (this post)
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

Hey folks, I’m deviating a little bit from the planned order and taking a side trip while we’re finishing up a bug fix release to address some OpenAPI generation hiccups before I go on to Wolverine HTTP endpoints.

Admittedly, Wolverine and to a lesser extent Marten have a bit of a “magic” conventional approach. They also depend on external configuration items, external infrastructural tools like databases or message brokers that require their own configuration, and there’s always the possibility of assembly mismatches from users doing who knows what with their Nuget dependency tree.

To help unwind potential problems with diagnostic tools and to facilitate environment setup, the “Critter Stack” uses the Oakton library to integrate command line utilities right into your application.

Applying Oakton to Your Application

To get started, I’m going right back to the Program entry point of our incident tracking help desk application and adding just a couple lines of code. First, Oakton is a dependency of Wolverine, so there’s no additional dependency to add, but we’ll add a using statement:

using Oakton;

This is optional, but we’ll possibly want the extra diagnostics, so I’ll add this line of code near the top:

// This opts Oakton into trying to discover diagnostics 
// extensions in other assemblies. Various Critter Stack
// libraries expose extra diagnostics, so we want this
builder.Host.ApplyOaktonExtensions();

and finally, I’m going to drop down to the last line of Program and replace the typical app.Run(); call with Oakton’s command line parsing:

// This is important for Wolverine/Marten diagnostics 
// and environment management
return await app.RunOaktonCommands(args);

Do note that it’s important to return the exit code of the command line runner up above. If you use Oakton commands in a build script, returning a non-zero exit code signals the caller that the command failed.

Command Line Mechanics

Next, I’m going to open a command prompt to the root directory of the HelpDesk.Api project, and use this to get a preview of the command line options we now have:

dotnet run -- help

That should render some help text like this:

  Alias           Description                                                                                                             
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  check-env       Execute all environment checks against the application                                                                  
  codegen         Utilities for working with JasperFx.CodeGeneration and JasperFx.RuntimeCompiler                                         
  db-apply        Applies all outstanding changes to the database(s) based on the current configuration                                   
  db-assert       Assert that the existing database(s) matches the current configuration                                                  
  db-dump         Dumps the entire DDL for the configured Marten database                                                                 
  db-patch        Evaluates the current configuration against the database and writes a patch and drop file if there are any differences  
  describe        Writes out a description of your running application to either the console or a file                                    
  help            List all the available commands                                                                                         
  marten-apply    Applies all outstanding changes to the database based on the current configuration                                      
  marten-assert   Assert that the existing database matches the current Marten configuration                                              
  marten-dump     Dumps the entire DDL for the configured Marten database                                                                 
  marten-patch    Evaluates the current configuration against the database and writes a patch and drop file if there are any differences  
  projections     Marten's asynchronous projection and projection rebuilds                                                                
  resources       Check, setup, or teardown stateful resources of this system                                                             
  run             Start and run this .Net application                                                                                     
  storage         Administer the Wolverine message storage                                                                                

So that’s a lot, but let’s just start by explaining the basics of the command line for .NET applications. You can pass arguments and flags both to the dotnet executable itself and to the application’s Program.Main(string[] args) entry point. The key thing to know is that dotnet arguments and flags are segregated from the application’s arguments and flags by a double dash “--” separator. So for example, the command dotnet run --framework net8.0 -- codegen write sends the framework flag to dotnet run, and the codegen write arguments to the application itself.

Stateful Resource Setup

Skipping a little bit to the end state of our help desk API project, we’ll have dependencies on:

  • Marten schema objects in the PostgreSQL database
  • Wolverine schema objects in PostgreSQL database (for the transactional inbox/outbox we’ll introduce later in this series)
  • Rabbit MQ exchanges for Wolverine to broadcast to later

One of the guiding philosophies of the Critter Stack is to minimize the “Time to Login Screen” (hat tip to Chad Myers) of your codebase. What this means is that we really want a new developer on our system (or a developer coming back after a long, well deserved vacation) to do a clean clone of our codebase and very quickly be able to run the application and any integration tests end to end. To that end, Oakton exposes its “Stateful Resource” model as an adapter for tools like Marten and Wolverine to set up their resources to match their configuration.

Pretend just for a minute that you have all the necessary rights and permissions to configure database schemas and Rabbit MQ exchanges, queues, and bindings on whatever your Rabbit MQ broker is for development. Assuming that, you can have your copy of the help desk API completely up and ready to run through these steps at the command prompt starting at wherever you want the code to be:

git clone https://github.com/JasperFx/CritterStackHelpDesk.git
cd CritterStackHelpDesk
docker compose up -d
cd HelpDesk.Api
dotnet run -- resources setup

At the end of those calls, you should see output confirming that each stateful resource was set up successfully.

The dotnet run -- resources setup command is able to do the Marten database migrations for its event store storage and any document types it knows about upfront, create the Wolverine envelope storage tables we’ll configure later, and create the known Rabbit MQ exchange we’ll configure for broadcasting integration events later.

The resources command has other options, which you can preview with dotnet run -- help resources.

You may need to pause a little bit between the call to docker compose and dotnet run to let Docker catch up!

Environment Checks

Years ago I worked on an early .NET system that still had a lot of COM dependencies that needed to be correctly registered outside of our application and used a shared database that was indifferently maintained as was common way back then. Needless to say, our deployments were chaotic as we never knew what shape the server was in when we deployed. We finally beat our deployment woes by implementing “environment tests” to our deployment scripts that would test out the environment dependencies (is the COM server there? can we connect to the database? is the expected XML file there?) and fail fast with descriptive messages when the server was in a crap state as we tried to deploy.

To that end, Oakton has its environment check model, which both Marten and Wolverine utilize. In our help desk application we already have a Marten dependency, so we know the application will not function correctly if the database is unavailable, or the connection string in the configuration happens to be wrong, or there’s a security setup issue. You get the picture.
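
Besides the checks that Marten and Wolverine register for themselves, you can add your own. As a hedged sketch of Oakton’s API (check the Oakton documentation for the exact signatures), a custom environment check probing the Marten database might look like this:

builder.Services.CheckEnvironment("Can query the Marten database", async (services, token) =>
{
    var store = services.GetRequiredService<IDocumentStore>();

    // This fails fast if the database is unreachable or misconfigured
    await using var session = store.QuerySession();
    await session.Query<Customer>().AnyAsync(token);
});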

So, picking up our application with every bit of infrastructure purposely turned off, I’ll run this command:

dotnet run -- check-env

and the result is a huge blob of exception text and a failed command — allowing you to abort a build script that might be delegating to this command.

Next, I’m going to turn on all the infrastructure (and set up everything to match our application’s configuration with the second command) with a quick call to:

docker compose up -d
dotnet run -- resources setup

Now, I can run the environment checks again and get a green bill of health for our system.

Oakton’s environment check model predates the new .NET IHealthCheck model. Oakton will also support that model soon, and you can track that work here.

“Describe” Our System

Oakton’s describe command can give you some insights into your application, and tools like Marten or Wolverine can expose extensions to this model for further output. By typing this command at the project root:

dotnet run -- describe

We’ll get some basic information about our system, starting with a preview of the configuration.

The output also lists the loaded assemblies, because you will occasionally get burned by unexpected Nuget behavior pulling in the wrong versions.

And sigh, because folks have frequently had some trouble understanding how Wolverine does its automatic handler discovery, describe also previews the discovered message handlers.

And quite a bit more information including:

  • Wolverine messaging endpoints
  • Wolverine’s local queues
  • Wolverine message routing
  • Wolverine exception handling policy configuration

Summary and What’s Next

Oakton is yet another command line parsing tool in .NET, of which there are at least dozens that are perfectly competent. What makes Oakton special, though, is its ability to add command line tools directly to the entry point of your application, where you already have all your infrastructure configuration available. The main point I hope you take away from this is that the command line tooling in the “Critter Stack” can help your team develop faster through the diagnostics and environment management features.

The “Critter Stack” is heavily utilizing Oakton’s extensibility model for:

  1. The static description of the application configuration that may frequently be helpful for troubleshooting or just understanding your system
  2. Stateful resource management of development dependencies like databases and message brokers. So far this is supported for Marten, both PostgreSQL and Sql Server dependencies of Wolverine, Rabbit MQ, Kafka, Azure Service Bus, and AWS SQS
  3. Environment checks to test out the validity of your system and its ability to connect to external resources during deployment or during development
  4. Any other utility you care to add to your system, like resetting a baseline database state, adding users, or anything else you care to do through Oakton’s command extensibility (a sketch follows this list)
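
As a quick sketch of that last point (the command and input names here are hypothetical), a custom Oakton command is just a class that Oakton discovers in your application assembly:

public class SeedInput
{
    // The "Flag" suffix makes this an optional flag by Oakton convention
    public int NumberOfCustomersFlag { get; set; } = 10;
}

[Description("Seed the database with sample customer data")]
public class SeedCommand : OaktonCommand<SeedInput>
{
    public override bool Execute(SeedInput input)
    {
        Console.WriteLine($"Seeding {input.NumberOfCustomersFlag} customers...");

        // Returning false signals a non-zero exit code to the caller
        return true;
    }
}

With that in place, something like dotnet run -- seed would run the new command from the application’s entry point.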

As for what’s next, you’ll have to let me see when some bug fix releases get in place before I promise what exactly is going to be next in this series. I expect this series to at least go to 15-20 entries as I introduce more Wolverine scenarios, messaging, and quite a bit about automated testing. And also, I take requests!

If you’re curious, the JasperFx GitHub organization was originally conceived of as the reboot of the previous FubuMVC ecosystem, with the main project being “Jasper” and the smaller ancillary tools ripped out of the flotsam and jetsam of StructureMap and FubuMVC arranged around “Jasper,” which was named for my hometown. The smaller tools like Oakton, Alba, and Lamar are named after other small towns close to the titular Jasper, MO. As Marten took off and became far and away the most important tool in our stable, we adopted the “Critter Stack” naming as we pulled Weasel out into its own library and completely rebooted and renamed “Jasper” as Wolverine to be a natural complement to Marten.

And lastly, I’m not sure that Oakton, MO even shows up on maps, because it’s effectively a Methodist church, a cemetery, the ruins of the general store, and a couple farm houses at a crossroads. In Missouri at least, towns cease to exist when they lose their post office. The area I grew up in is littered with former towns that fizzled out as the farm economy changed and folks moved to bigger towns.

Building a Critter Stack Application: Wolverine’s Aggregate Handler Workflow FTW!

TL;DR: The full critter stack combo can make CQRS command handler code much simpler and easier to test than any other framework on the planet. Fight me.

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to help you succeed with your software development projects on a consulting basis whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW! (this post)
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

This series has been written partially in response to some constructive criticism that my writings on the “Critter Stack” suffered from introducing too many libraries or concepts all at once. As a reaction to that, this series is trying to introduce only one new capability or library at a time — which brought on some constructive criticism from someone else that the series isn’t making it obvious why anyone should care about the “Critter Stack” in the first place. So especially for Rob Conery, I give you this post.

Last time out we talked about using Marten’s facilities for optimistic concurrency or exclusive locking to protect our system from inconsistencies due to concurrent commands being processed against the same incident event stream. In the process of that post, I showed the code for a command handler for the CategoriseIncident command shown below that I purposely wrote in a long hand form as explicitly as possible to avoid introducing too many new concepts at once:

public static class LongHandCategoriseIncidentHandler
{
    // Kinda faked stand-in for the current user id
    public static readonly Guid SystemId = Guid.NewGuid();

    public static async Task Handle(
        CategoriseIncident command, 
        IDocumentSession session, 
        CancellationToken cancellationToken)
    {
        var stream = await session
            .Events
            .FetchForWriting<IncidentDetails>(command.Id, cancellationToken);

        // Don't worry, we're going to clean this up later
        if (stream.Aggregate == null)
        {
            throw new ArgumentOutOfRangeException(nameof(command), "Unknown incident id " + command.Id);
        }
        
        // We need to validate whether this command actually 
        // should do anything
        if (stream.Aggregate.Category != command.Category)
        {
            var categorised = new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };

            stream.AppendOne(categorised);
            
            await session.SaveChangesAsync(cancellationToken);
        }
    }
}

Hopefully that code is relatively easy to follow, but it’s still pretty busy, and there’s a mixture of business logic and infrastructure fiddling that’s not particularly helpful when the code inevitably gets more complicated as the requirements grow. As we’ll learn later in this series, both Marten and Wolverine have built-in tooling to make automated integration testing much more effective than just about any other tool out there. All the same though, you just don’t want to be testing the business logic by trudging through integration tests if you don’t have to (see my only rule of testing).

So let’s definitely look at how Wolverine plays nicely with Marten using its aggregate handler workflow recipe to simplify our handler for easier unit testing and just flat out cleaner code.

First off, I’m going to add the WolverineFx.Marten Nuget to our application:

dotnet add package WolverineFx.Marten

Next, break into our application’s Program file and add one call to the Marten configuration to incorporate some Wolverine goodness into Marten in our application:

builder.Services.AddMarten(opts =>
{
    // Existing Marten configuration...
})
    // This is a mild optimization
    .UseLightweightSessions()

    // Use this directive to add Wolverine transactional middleware for Marten
    // and the Wolverine transactional outbox support as well
    .IntegrateWithWolverine();

And now, let’s rewrite our CategoriseIncident command handler with a completely equivalent implementation using the “aggregate handler workflow” recipe:

public static class CategoriseIncidentHandler
{
    // Kinda faked, don't pay any attention to this please!
    public static readonly Guid SystemId = Guid.Parse("4773f679-dcf2-4f99-bc2d-ce196815dd29");

    // This Wolverine handler appends an IncidentCategorised event to an event stream
    // for the related IncidentDetails aggregate referred to by the CategoriseIncident.IncidentId
    // value from the command
    [AggregateHandler]
    public static IEnumerable<object> Handle(CategoriseIncident command, IncidentDetails existing)
    {
        if (existing.Category != command.Category)
        {
            // This event will be appended to the incident
            // stream after this method is called
            yield return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }
    }
}

In the handler method above, the presence of the [AggregateHandler] attribute directs Wolverine to wrap some middleware around the execution of our Handle() method that:

  • “Knows” the aggregate type in question is the second argument to the handler method, so in this case, IncidentDetails
  • Scans the CategoriseIncident type looking for a property that identifies the IncidentDetails stream (which will make it utilize the Id property in this case, but the docs spell this convention out in detail; a sketch of the presumed command type follows this list)
  • Does all the work to delegate and coordinate work in the logical command flow between the Marten infrastructure and our little bitty Handle() method
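
For reference, given the generated code shown a bit below, the CategoriseIncident command type these conventions are matching against presumably looks something like this sketch:

public class CategoriseIncident
{
    // Matched by convention to the IncidentDetails stream identity
    public Guid Id { get; set; }

    public IncidentCategory Category { get; set; }

    // Used by Marten for optimistic concurrency protection
    public long Version { get; set; }
}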

To visualize this, Wolverine generates its own internal message handler for CategoriseIncident with a simplified workflow: load the IncidentDetails aggregate, invoke our Handle() method, append any returned events to the stream, and save changes.

And as a preview to a topic I’ll dive into in much more detail in a later post, here’s part of the (admittedly ugly in the way that only auto-generated code can be) C# code that Wolverine generates around our handler method:

public override async System.Threading.Tasks.Task HandleAsync(Wolverine.Runtime.MessageContext context, System.Threading.CancellationToken cancellation)
{
    // The actual message body
    var categoriseIncident = (Helpdesk.Api.CategoriseIncident)context.Envelope.Message;

    await using var documentSession = _outboxedSessionFactory.OpenSession(context);
    var eventStore = documentSession.Events;
    
    // Loading Marten aggregate
    var eventStream = await eventStore.FetchForWriting<Helpdesk.Api.IncidentDetails>(categoriseIncident.Id, categoriseIncident.Version, cancellation).ConfigureAwait(false);

    
    // The actual message execution
    var outgoing1 = Helpdesk.Api.CategoriseIncidentHandler.Handle(categoriseIncident, eventStream.Aggregate);

    if (outgoing1 != null)
    {
        
        // Capturing any possible events returned from the command handlers
        eventStream.AppendMany(outgoing1);

    }

    await documentSession.SaveChangesAsync(cancellation).ConfigureAwait(false);
}

And lastly, we’ve now reduced our CategoriseIncident command handler to the point where the code that we are actually having to write is a pure function, meaning that it’s a simple matter of inputs and outputs with no dependency on any kind of stateful infrastructure. You absolutely care about isolating any kind of business logic into pure functions because that code becomes much easier to unit test.

And to prove that last statement, here’s what the unit tests for our Handle(CategoriseIncident, IncidentDetails) could look like using xUnit.Net and Shouldly:

public class CategoriseIncidentTests
{
    [Fact]
    public void raise_categorized_event_if_changed()
    {
        // Arrange
        var command = new CategoriseIncident
        {
            Category = IncidentCategory.Database
        };

        var details = new IncidentDetails(
            Guid.NewGuid(), 
            Guid.NewGuid(), 
            IncidentStatus.Closed, 
            new IncidentNote[0],
            IncidentCategory.Hardware);

        // Act
        var events = CategoriseIncidentHandler.Handle(command, details);

        // Assert
        var categorised = events.Single().ShouldBeOfType<IncidentCategorised>();
        categorised
            .Category.ShouldBe(IncidentCategory.Database);
    }

    [Fact]
    public void do_not_raise_event_if_the_category_would_not_change()
    {
        // Arrange
        var command = new CategoriseIncident
        {
            Category = IncidentCategory.Database
        };

        var details = new IncidentDetails(Guid.NewGuid(), Guid.NewGuid(), IncidentStatus.Closed, new IncidentNote[0],
            IncidentCategory.Database);

        // Act
        var events = CategoriseIncidentHandler.Handle(command, details);
        
        // Assert no events were appended
        events.ShouldBeEmpty();
    }
}

In the unit test code above, we were able to exercise the decision about what events (if any) should be appended to the incident event stream without any dependency whatsoever on any kind of infrastructure. The easiest kind of unit test to write and to read later is a test that has a clear relationship between the test inputs and outputs with minimal noise code for setting up state — and that’s exactly what we have up above. No mock object setup, no need to set up database state, nothing. Just, “here’s the existing state and this command, now tell me what events should be appended.”

Summary and What’s Next

The full Critter Stack “aggregate handler workflow” recipe leads to very low ceremony code to implement command handlers within a CQRS style architecture. This recipe also leads to a code structure where your business logic is relatively easy to test with fast running unit tests. And we arrived at that point without having to watch umpteen hours of “Clean Architecture” YouTube snake oil videos, introduce a ton of “Ports and Adapter” style abstractions to clutter up our code, or scatter our code for the single CategoriseIncident message handler across 3-4 “Onion Architecture” projects within a massive .NET solution.

This approach was heavily inspired by the Decider pattern that originated for Event Sourcing within the F# community. But whereas the F# approach uses language tricks (and I don’t mean that pejoratively here), Wolverine is getting to a lower ceremony approach by doing that runtime code generation around our code.

If you look back to the sequence diagram up above that explains the control flow, Wolverine is purposely using Jim Shore’s idea of the “A-Frame Architecture” (it’s not really an architectural style despite the name, so don’t even try to make an apples to apples comparison between it and something more prescriptive like the Clean Architecture). In this approach, Wolverine purposely decouples the Marten infrastructure from the CategoriseIncident handler that implements the business logic “deciding” what to do next, with Wolverine itself mediating between Marten and the handler. The “A-Frame” name comes from visualizing that mediation: Wolverine sits at the apex and calls down into both the infrastructure services like Marten and the business logic, so the domain logic never has to touch the infrastructure directly.

Now, there’s a lot more stuff that our command handlers may very well need to implement, including:

  • Message input validation
  • Instrumentation and observability
  • Error handling and resiliency protections ’cause it’s an imperfect world!
  • Publishing the new events to some other internal message handler that will take additional actions after our first command has “decided” what to do next
  • Publishing the new events as some kind of external message to another process
  • Enrolling in a transactional outbox of some sort or another to keep the system in a consistent state — and you really need to care about this capability!!!

And oh, yeah, we need to do all of that with minimal code ceremony, keep as much of it as possible testable with unit tests, and make automated integration testing feasible when we have to go there.

We’ll get to all the items in that list above in this series, but I think in the next post I’d like to introduce Wolverine’s HTTP handler recipe and build out more aggregate command handlers, but this time with an HTTP endpoint. Until next time…

Building a Critter Stack Application: Dealing with Concurrency

Last time out we talked about using Marten’s projection data in the context of building query services inside our CQRS architecture for our new incident tracking help desk application. Today I want to talk about how to protect our systems from concurrency and ordering issues when we start to have more users or subsystems trying to access and even modify the same incidents.

Imagine some of these unfortunately likely scenarios:

  1. A user gets impatient with our user interface and clicks on a button multiple times which sends multiple requests to our back end to add the same note to an incident
  2. A technician pulls up the incident details for something new, but then gets called away (or goes to lunch). A second technician pulls up the incident and carries out some actions to change the category or priority. The first technician comes back to their desk, and tries to change the priority of the incident based on the stale data about that incident they already had open on their screen
  3. Later on, we may have several automated workflows happening that could conceivably try to change an incident simultaneously. In this case it might be important that actions involving an incident only happen one at a time to prevent inconsistent system state

In later posts I’ll talk about how Wolverine works with Marten to make your system much more robust in the face of concurrency issues while still keeping your code low ceremony. Today though, I strictly want to talk about Marten’s built in protections for concurrency before getting fancy.

To review from a couple posts ago when I introduced Wolverine command handlers, we had this code to process a CategoriseIncident in our system:

    public static async Task Handle(
        CategoriseIncident command, 
        IDocumentSession session, 
        CancellationToken cancellationToken)
    {
        // Find the existing state of the referenced Incident
        var existing = await session
            .Events
            .AggregateStreamAsync<IncidentDetails>(command.Id, token: cancellationToken);

        // Don't worry, we're going to clean this up later
        if (existing == null)
        {
            throw new ArgumentOutOfRangeException(nameof(command), "Unknown incident id " + command.Id);
        }
        
        // We need to validate whether this command actually 
        // should do anything
        if (existing.Category != command.Category)
        {
            var categorised = new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };

            session.Events.Append(command.Id, categorised);
            await session.SaveChangesAsync(cancellationToken);
        }
    }

I’m going to change this handler to introduce some concurrency protection against the single incident referred to by the CategoriseIncident command. To do that, I’m going to use Marten’s FetchForWriting() API that we introduced specifically to make Marten easier to use within CQRS command handling and rewrite this handler to use optimistic concurrency protections:

    public static async Task Handle(
        CategoriseIncident command, 
        IDocumentSession session, 
        CancellationToken cancellationToken)
    {
        // Find the existing state of the referenced Incident
        // but also set Marten up for optimistic version checking on
        // the incident upon the call to SaveChangesAsync()
        var stream = await session
            .Events
            .FetchForWriting<IncidentDetails>(command.Id, cancellationToken);

        // Don't worry, we're going to clean this up later
        if (stream.Aggregate == null)
        {
            throw new ArgumentOutOfRangeException(nameof(command), "Unknown incident id " + command.Id);
        }
        
        // We need to validate whether this command actually 
        // should do anything
        if (stream.Aggregate.Category != command.Category)
        {
            var categorised = new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };

            stream.AppendOne(categorised);
            
            // This call may throw a ConcurrencyException!
            await session.SaveChangesAsync(cancellationToken);
        }
    }

Notice the call to FetchForWriting(). That loads the current IncidentDetails aggregate data for our incident event stream. Under the covers, Marten is also loading the current revision number for that incident event stream and tracking that. When the IDocumentSession.SaveChangesAsync() is called, it will attempt to append the new event(s) to the incident event stream, but this operation will throw a Marten ConcurrencyException and roll back the underlying database transaction if the incident event stream has been revisioned between the call to FetchForWriting() and SaveChangesAsync().
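
Just to sketch what reacting to that failure could look like at the web layer, here’s one possible shape. The endpoint and its response choices are purely illustrative rather than the sample app’s actual code, but ConcurrencyException itself does live in the Marten.Exceptions namespace:

using Marten.Exceptions;

public class CategorizeIncidentController : ControllerBase
{
    [HttpPost("/api/incidents/categorize")]
    public async Task<IResult> Categorize(
        [FromBody] CategoriseIncident command,
        [FromServices] IMessageBus bus)
    {
        try
        {
            await bus.InvokeAsync(command);
            return Results.Ok();
        }
        catch (ConcurrencyException)
        {
            // Someone else appended to this incident stream first, so
            // tell the client their data was stale
            return Results.Conflict("The incident was modified by someone else. Refresh and retry.");
        }
    }
}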

Do note that the call to FetchForWriting() can happily work with aggregate projections that are configured as either “live” or persisted to the database. Our strong recommendation within your command handlers where you’re appending events is to rely on this API so that you can easily change up projection lifecycles as necessary.

While this crude check might be helpful by itself, we can go further and avoid doing work that is just going to fail anyway by telling Marten that the current command was issued assuming that the event stream is currently at an expected revision.

Just as a reminder to close the loop here, when we write the aggregated projection for the IncidentDetails document type shown below:

public record IncidentDetails(
    Guid Id,
    Guid CustomerId,
    IncidentStatus Status,
    IncidentNote[] Notes,
    IncidentCategory? Category = null,
    IncidentPriority? Priority = null,
    Guid? AgentId = null,
    
    // This is meant to be the revision number
    // of the event stream for this incident
    int Version = 1
);

Marten will “automagically” set the value of a Version property of the aggregated document to the latest revision number of the event stream. This (hopefully) makes it relatively easy for systems built with Marten to transfer the current event stream revision number to user interfaces or other clients specifically to make optimistic concurrency protection easier.

Now that our user interface “knows” what it thinks the current version of the incident data is, we’ll also transmit that version number through our command that we’re posting to the service:

public class CategoriseIncident
{
    public Guid Id { get; set; }
    public IncidentCategory Category { get; set; }

    // This is to communicate to the server that
    // this command was issued assuming that the 
    // incident is currently at this revision
    // number
    public int Version { get; set; }
}

We’re going to change our message handler one more time, but this time we want a little stronger concurrency protection upfront to disallow any work from proceeding if the incident has been revisioned past the version the client knew about, while still retaining the optimistic concurrency check on SaveChangesAsync(). Squint really hard at the call to FetchForWriting() where I pass in the version number from the command, as that’s the only change:

    public static async Task Handle(
        CategoriseIncident command, 
        IDocumentSession session, 
        CancellationToken cancellationToken)
    {
        // Find the existing state of the referenced Incident
        // *But*, throw a ConcurrencyException if the stream has been revisioned past
        // the expected, starting version communicated by the command
        var stream = await session
            .Events
            .FetchForWriting<IncidentDetails>(command.Id, command.Version, cancellationToken);

        // Don't worry, we're going to clean this up later
        if (stream.Aggregate == null)
        {
            throw new ArgumentOutOfRangeException(nameof(command), "Unknown incident id " + command.Id);
        }
        
        // We need to validate whether this command actually 
        // should do anything
        if (stream.Aggregate.Category != command.Category)
        {
            var categorised = new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };

            stream.AppendOne(categorised);
            
            // This call may throw a ConcurrencyException!
            await session.SaveChangesAsync(cancellationToken);
        }
    }

In the previous couple revisions, I’ve strictly used “optimistic concurrency” where you work on the assumption that it’s more likely than not okay to proceed, but update the database in some way such that it will reject the changes if the expected starting revision does not match the current revision stored in the database. Marten also has the option to use exclusive database locks where only the current transaction is allowed to edit the event stream. That usage is shown below, but yet again, just squint at the changed call to FetchForExclusiveWriting():

    public static async Task Handle(
        CategoriseIncident command, 
        IDocumentSession session, 
        CancellationToken cancellationToken)
    {

        // Careful! This will try to wait until the database can grant us exclusive
        // write access to the specific incident event stream
        var stream = await session
            .Events
            .FetchForExclusiveWriting<IncidentDetails>(command.Id, cancellationToken);

        // Don't worry, we're going to clean this up later
        if (stream.Aggregate == null)
        {
            throw new ArgumentOutOfRangeException(nameof(command), "Unknown incident id " + command.Id);
        }
        
        // We need to validate whether this command actually 
        // should do anything
        if (stream.Aggregate.Category != command.Category)
        {
            var categorised = new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };

            stream.AppendOne(categorised);
            
            await session.SaveChangesAsync(cancellationToken);
        }
    }

This approach is something I think of as a “guilty until proven innocent” tool. While this is absolutely more rigid protection against concurrent access or concurrent processing of commands against a single incident event stream, it comes with some drawbacks. One issue is that the exclusive lock makes your database engine work harder and use more resources. The database might also cause timeouts on the initial call to FetchForExclusiveWriting() as it has to wait for any locks held by an ongoing transaction to be released. In your application you may need to handle that kind of TimeoutException differently from the optimistic ConcurrencyException (we’ll talk about that a lot more in later posts). This usage also comes with a bit of risk for deadlocks in the database.
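
To sketch that differentiation in code (the exact exception type a lock timeout surfaces can vary by provider and timeout configuration, so treat this purely as an illustration of the idea):

using Marten.Exceptions;
using Wolverine;

public static class CategorizeWithExclusiveLock
{
    public static async Task TryCategorize(IMessageBus bus, CategoriseIncident command)
    {
        try
        {
            await bus.InvokeAsync(command);
        }
        catch (ConcurrencyException)
        {
            // The data really was stale, so a blind retry would be wrong.
            // Push this back to the user to refresh and resubmit.
        }
        catch (TimeoutException)
        {
            // We never got the exclusive lock, but nothing was stale,
            // so an automatic retry might be perfectly reasonable here
        }
    }
}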

Lastly, you can technically use serializable transactions with Marten to really, really make the data access be serialized on a single event stream like so:

    public static async Task Handle(
        CategoriseIncident command, 
        IDocumentStore store, 
        CancellationToken cancellationToken)
    {
        // This is your last resort approach!
        await using var session = 
            await store.LightweightSerializableSessionAsync(cancellationToken);
        
        var stream = await session
            .Events
            .FetchForWriting<IncidentDetails>(command.Id, cancellationToken);

        // Don't worry, we're going to clean this up later
        if (stream.Aggregate == null)
        {
            throw new ArgumentOutOfRangeException(nameof(command), "Unknown incident id " + command.Id);
        }
        
        // We need to validate whether this command actually 
        // should do anything
        if (stream.Aggregate.Category != command.Category)
        {
            var categorised = new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };

            stream.AppendOne(categorised);
            
            await session.SaveChangesAsync(cancellationToken);
        }
    }

But if the exclusive lock was a “guilty until proven innocent” approach, then serializable transactions, because of their even heavier overhead, are a “break glass in case of emergency” option you should keep in your back pocket until you really, really need it.

Summary and What’s Next?

In this post I introduced Marten’s built in concurrency protections for appending data to event streams. For the most part, I think you should assume the usage of optimistic concurrency as a default as that’s lighter on your PostgreSQL database. I also showed how to track the current event stream version through projections in CQRS queries where it can then be used by clients to pass the expected starting version in commands to be used for optimistic concurrency checks within our CQRS commands.

In the next post, I think I’m going to introduce Wolverine’s aggregate handler workflow with Marten as a way of making the message handler in this post much simpler and easier to test.

Building a Critter Stack Application: Web Service Query Endpoints with Marten

Last time up we introduced Wolverine to help us build command handlers as the “C” in the CQRS architecture. This time out, I want to turn our attention back to Marten and building out some query endpoints to get the “Q” part of CQRS going by exposing projected event data in a read-only way through HTTP web services.

When we talked before about Marten projections (read-only “projected” view representations of the source events), I mentioned that these projected views could be created with three different lifecycles:

  1. “Live” projections are built on demand based on the current event data
  2. “Inline” projections are updated at the time new events are captured such that the “read side” model is always strongly consistent with the raw event data
  3. “Async” projections are continuously built by a background process in Marten applications and give you an eventual consistency model.

Alright, so let’s talk about when you might use the different projection lifecycles, then we’ll move on to how that changes the mechanics of how we’ll deliver projection data through web services. Offhand, I’d recommend a decision tree something like this (with a configuration sketch after the list):

  • If you want to optimize the system’s “read” performance more than the “writes”, definitely use the Inline lifecycle
  • If you want to optimize the “write” performance of event capture and also want a strongly consistent “read” model that exactly reflects the current state, choose the Live lifecycle. Know though that if you go that way, you will want to model your system in such a way that you can keep your event streams short. That trade-off isn’t free either, because the Live aggregation time can also negatively impact command processing time whenever you need to first derive the current state in order to “decide” what new events should be emitted.
  • If you want to optimize both the “read” and “write” performance, but can be a little relaxed about the read side consistency, you can opt for Async projections
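
For reference, here’s a quick sketch of how each of those lifecycle choices shows up in the AddMarten() configuration, using the same projection and connection string lookup we use elsewhere in this series:

builder.Services.AddMarten(opts =>
{
    var connectionString = builder.Configuration.GetConnectionString("marten");
    opts.Connection(connectionString);

    // Pick exactly one lifecycle per projection type:
    opts.Projections.Add<IncidentDetailsProjection>(ProjectionLifecycle.Live);
    //opts.Projections.Add<IncidentDetailsProjection>(ProjectionLifecycle.Inline);
    //opts.Projections.Add<IncidentDetailsProjection>(ProjectionLifecycle.Async);
})
// Only needed when any projection uses the Async lifecycle; this
// registers the background process that continuously builds them
.AddAsyncDaemon(DaemonMode.HotCold);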

For starters, let’s build just a simple HTTP endpoint that returns the current state for a single Incident within our new help desk system. As a quick reminder, the IncidentDetails aggregated projection we’re about to work with is built out like this:

public record IncidentDetails(
    Guid Id,
    Guid CustomerId,
    IncidentStatus Status,
    IncidentNote[] Notes,
    IncidentCategory? Category = null,
    IncidentPriority? Priority = null,
    Guid? AgentId = null,

    // Marten is going to set this for us in
    // the projection work
    int Version = 1
);

public record IncidentNote(
    IncidentNoteType Type,
    Guid From,
    string Content,
    bool VisibleToCustomer
);

public enum IncidentNoteType
{
    FromAgent,
    FromCustomer
}

// This class contains the directions for Marten about how to create the
// IncidentDetails view from the raw event data
public class IncidentDetailsProjection: SingleStreamProjection<IncidentDetails>
{
    public static IncidentDetails Create(IEvent<IncidentLogged> logged) =>
        new(logged.StreamId, logged.Data.CustomerId, IncidentStatus.Pending, Array.Empty<IncidentNote>());

    public IncidentDetails Apply(IncidentCategorised categorised, IncidentDetails current) =>
        current with { Category = categorised.Category };

    public IncidentDetails Apply(IncidentPrioritised prioritised, IncidentDetails current) =>
        current with { Priority = prioritised.Priority };

    public IncidentDetails Apply(AgentAssignedToIncident prioritised, IncidentDetails current) =>
        current with { AgentId = prioritised.AgentId };

    public IncidentDetails Apply(IncidentResolved resolved, IncidentDetails current) =>
        current with { Status = IncidentStatus.Resolved };

    public IncidentDetails Apply(ResolutionAcknowledgedByCustomer acknowledged, IncidentDetails current) =>
        current with { Status = IncidentStatus.ResolutionAcknowledgedByCustomer };

    public IncidentDetails Apply(IncidentClosed closed, IncidentDetails current) =>
        current with { Status = IncidentStatus.Closed };
}

I want to make sure that I draw your attention to the Version property of the IncidentDetails projected document. Marten itself has a naming convention (it can be overridden with attributes too) where it will set this member to the current stream version number when Marten builds this single stream projection. That’s going to be vital in the next post when we start introducing concurrency protections.
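
If you’d rather name that member something else, my recollection is that Marten’s [Version] attribute can redirect the convention to a differently named member, but do verify that against the Marten documentation. A sketch with a purely hypothetical IncidentSummary document:

using Marten.Schema;

public record IncidentSummary(
    Guid Id,
    IncidentStatus Status,

    // Hypothetical rename: [Version] marks this member as the one
    // Marten should set to the stream's revision number instead of
    // relying on the "Version" naming convention
    [property: Version] int StreamRevision = 1
);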

For right now, let’s say that we’re choosing to use the Live style. In this case, we’ll need to do the aggregation on the fly, then stream that down the HTTP body like so with MVC Core:

    [HttpGet("/api/incidents/{incidentId}")]
    public async Task<IResult> Get(Guid incidentId)
    {
        // In this case, the IncidentDetails are projected "live"
        var details = await _session.Events.AggregateStreamAsync<IncidentDetails>(incidentId);

        return details != null
            ? Results.Json(details)
            : Results.NotFound();
    }

If, however, we chose to produce the projected IncidentDetails data Inline such that the projected data is already persisted to the Marten database as a document, we’d first make this addition to the AddMarten() configuration in the application’s Program file:

builder.Services.AddMarten(opts =>
{
    // You always have to tell Marten what the connection string to the underlying
    // PostgreSQL database is, but this is the only mandatory piece of 
    // configuration
    var connectionString = builder.Configuration.GetConnectionString("marten");
    opts.Connection(connectionString);
    
    // We have to tell Marten about the projection we built in the previous post
    // so that Marten will "know" how to project events to the IncidentDetails
    // projected view
    opts.Projections.Add<IncidentDetailsProjection>(ProjectionLifecycle.Inline);
});

Lastly, we could now instead write that web service method in our MVC Core controller as:

    [HttpGet("/api/incidents/{incidentId}")]
    public async Task<IResult> Get(Guid incidentId)
    {
        // In this case, the IncidentDetails were projected "inline",
        // so we can just load the persisted document
        var details = await _session.LoadAsync<IncidentDetails>(incidentId);

        return details != null
            ? Results.Json(details)
            : Results.NotFound();
    }

One last trick for now, let’s make the web service above much faster! I’m going to add another library into the mix with this Nuget reference:

dotnet add package Marten.AspNetCore

And let’s revisit the previous web service endpoint and change it to this:

public class IncidentController : ControllerBase
{
    private readonly IDocumentSession _session;

    public IncidentController(IDocumentSession session)
    {
        _session = session;
    }
    
    [HttpGet("/api/incidents/{incidentId}")]
    public Task Get(Guid incidentId)
    {
        return _session
            .Json
            .WriteById<IncidentDetails>(incidentId, HttpContext);
    }
    
    // other methods....
}
The WriteById() usage up above is an extension method from the Marten.AspNetCore package that lets you stream raw, persisted JSON data from Marten directly to the HTTP response body in an ASP.Net Core endpoint in a very efficient way. At no point do you even bother to instantiate an IncidentDetails object in memory just to immediately turn around and serialize it right back to the HTTP response. There’s basically no faster way to build a web service for this information.
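
If memory serves, the same package also has a WriteArray() extension method for streaming the results of a Marten LINQ query to the response as a JSON array in the same fashion. Double check the Marten.AspNetCore documentation before leaning on this sketch, and note that the route here is made up; the method would sit inside the same controller using its _session field:

    [HttpGet("/api/incidents/pending")]
    public Task GetPending()
    {
        // Streams the matching IncidentDetails documents straight to the
        // HTTP response as a JSON array, never materializing them in memory
        return _session
            .Query<IncidentDetails>()
            .Where(x => x.Status == IncidentStatus.Pending)
            .WriteArray(HttpContext);
    }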

Summary and What’s Next

In this entry we talked a little bit about the consequences of the projection lifecycle decision for your web services. We also mentioned how Marten can provide the stream version to projected documents, which will be valuable soon when we talk about concurrency. Lastly, I introduced the Marten.AspNetCore library and its extension methods to “stream” JSON data stored in PostgreSQL directly to the HTTP response in a very efficient way.

In the next post we’re going to look at Marten’s concurrency protections and discuss why you care about these abilities.

Building a Critter Stack Application: Wolverine as Mediator

In the previous posts I’ve been focused on Marten as a persistence tool. Today I want to introduce Wolverine into the mix, but strictly as a “Mediator” tool within the commonly used MVC Core or Minimal API tools for web service development.

While Wolverine does much, much more than what we’re going to use today, let’s stay with the theme of keeping these posts short and just dip our toes into the Wolverine water with a simple usage.

Using our web service project from previous posts, I’m going to add a reference to the main Wolverine nuget through:

dotnet add package WolverineFx

Next, let’s add Wolverine to our application with this one line of code within our Program file:

builder.Host.UseWolverine(opts =>
{
    // We'll add more here later, but the defaults are all
    // good enough for now
});

As a quick aside, Wolverine is added directly to the IHostBuilder instead of IServiceCollection through a “Use*()” method because it’s also quietly sliding in Lamar as the underlying IoC container. Some folks have been upset at that, so let’s be upfront about it right now. While I may talk about Lamar diagnostics as part of this series, it’s unlikely that the container swap will ever be an issue for most users in any way. Lamar has some specific functionality that was built specifically for Wolverine and that Wolverine utilizes quite heavily.

This time out, let’s move into the “C(ommand)” part of our CQRS architecture and build some handling for the CategoriseIncident command we’d initially discovered in our Event Storming session:

public class CategoriseIncident
{
    public Guid Id { get; set; }
    public IncidentCategory Category { get; set; }
    public int Version { get; set; }
}

And next, let’s build our very first ever Wolverine message handler for this command. It will load the existing IncidentDetails for the designated incident, decide if the category is being changed, and append a new event to the event stream using Marten’s IDocumentSession service. That handler, written purposely in an explicit, “long hand” style, could be this — but in later posts we will use other Wolverine capabilities to make this code much simpler while introducing a more robust set of validations:

public static class CategoriseIncidentHandler
{
    public static async Task Handle(
        CategoriseIncident command, 
        IDocumentSession session, 
        CancellationToken cancellationToken)
    {
        // Find the existing state of the referenced Incident
        var existing = await session
            .Events
            .AggregateStreamAsync<IncidentDetails>(command.Id, token: cancellationToken);

        // Don't worry, we're going to clean this up later
        if (existing == null)
        {
            throw new ArgumentOutOfRangeException(nameof(command), "Unknown incident id " + command.Id);
        }
        
        // We need to validate whether this command actually 
        // should do anything
        if (existing.Category != command.Category)
        {
            var categorised = new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };

            session.Events.Append(command.Id, categorised);
            await session.SaveChangesAsync(cancellationToken);
        }
    }
    
    // This is kinda faked out, nothing to see here!
    public static readonly Guid SystemId = Guid.NewGuid();
}

There’s a couple things I want you to note about the handler class above:

  • We’re not going to make any kind of explicit configuration to help Wolverine discover and use that handler class. Instead, Wolverine is going to discover it within our main service assembly because it’s a public, concrete class suffixed with the name “Handler” (there are other alternatives for this discovery if you don’t like that approach; see the sketch after this list).
  • Wolverine “knows” that the Handle() method is a handler for the CategoriseIncident command because the method is named “Handle” and the first argument is that command type
  • Note that this handler is a static type. It doesn’t have to be, but doing so helps Wolverine shave off some object allocations at runtime.
  • Also note that Wolverine message handlers happily support “method injection” and allow you to inject IoC service dependencies like the Marten IDocumentSession through method arguments. You can also do the more traditional .NET approach of pulling everything through a constructor and setting instance fields, but hey, why not write simpler code?
  • While it’s perfectly legal to handle multiple message types in the same handler class, I typically recommend making that a one to one relationship in most cases
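
As a sketch of those alternative discovery hooks, you can point Wolverine at handlers explicitly through its Discovery settings. I’m writing this API surface from memory, so verify the exact method names against the Wolverine documentation:

builder.Host.UseWolverine(opts =>
{
    // Explicitly register a handler type that doesn't follow the
    // "Handler" suffix naming convention
    opts.Discovery.IncludeType(typeof(CategoriseIncidentHandler));

    // Or sweep an entirely different assembly for handlers
    opts.Discovery.IncludeAssembly(typeof(CategoriseIncident).Assembly);
});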

And next, let’s put this into context by having an MVC Core controller expose an HTTP route for this command type, then pass the command on to Wolverine where it will mediate between the HTTP outer world and the inner world of the application services like Marten:

// I'm doing it this way for now because this is 
// a common usage, but we'll move away from 
// this later into more of a "vertical slice"
// approach of organizing code
public class IncidentController : ControllerBase
{
    [HttpPost("/api/incidents/categorize")]
    public Task Categorize(
        [FromBody] CategoriseIncident command,
        [FromServices] IMessageBus bus)

        // IMessageBus is the main entry point into
        // using Wolverine
        => bus.InvokeAsync(command);
}

Summary and Next Time

In this post we looked at the very simplest usage of Wolverine, how to integrate it into your codebase, and how to get started writing command handlers with Wolverine. What I’d like you to take away is that Wolverine is a very different animal from “IHandler of T” frameworks like MediatR, NServiceBus, MassTransit, or Brighter that require mandatory interface signatures and/or base classes. Even when writing long hand code as I did here, I hope you can already see how much less code ceremony Wolverine requires compared to more typical .NET frameworks that solve similar problems.

I very purposely wrote the message handlers in a very explicit way, and left out some significant use cases like concurrency protection, user input validation, and cross cutting concerns. I’m not 100% sure where I want to go next, but in this next week we’ll look at concurrency protections with Marten, highly efficient GET HTTP endpoints with Marten and ASP.Net Core, and start getting into Wolverine’s HTTP endpoint model.

Why, you might ask, are all the Wolverine nugets suffixed with “Fx”? The Marten core team and some of our closest collaborators really liked the name “Wolverine” for this project and instantly came up with the project graphics, but when we tried to start publishing Nuget packages, we found out that someone is squatting on the name “Wolverine” in Nuget and we weren’t able to get the rights to that name. Rather than change course, we stubbornly went full speed ahead with the “WolverineFx” naming scheme just for the published Nugets.

Let’s Get Controversial for (only) a Minute!

When my wife and I watched the Silicon Valley show, I think she was bemused when I told her there was a pretty heated debate in development circles over “tabs vs spaces.”

I don’t want this to detract too much from the actual content of this series, but I have very mixed feelings about ASP.Net MVC Core as a framework and the whole idea of using a “mediator” as popularized by the MediatR library within an MVC Core application.

I’ve gone back and forth on both ASP.Net MVC in its various incarnations and also on MediatR, both alone and as a complement to MVC Core. Where I’ve landed right now is the opinion that MVC Core used by itself is a very flawed framework that can easily lead to unmaintainable code as an enterprise system grows, because typical interpretations of the “Clean Architecture” style in concert with MVC Core’s routing rules lead unwary developers to create bloated MVC controller classes.

While I was admittedly unimpressed with MediatR as I first encountered it on its own merits in isolation, what I will happily admit is that the usage of MediatR is helpful within MVC Core controllers as a way to offload operation specific code into more manageable pieces as opposed to the bloated controllers that frequently result from using MVC Core. I have since occasionally recommended the usage of MediatR within MVC Core codebases to my consulting clients as a way to help make their code easier to maintain over time.

If you’re interested, I touched on this theme somewhat in my talk A Contrarian View of Software Architecture from NDC Oslo 2023. And yes, I absolutely think you can build maintainable systems with MVC Core over time even without the MediatR crutch, but I think you have to veer away from the typical usage of MVC Core to do so and be very mindful of how you’re using the framework. In other words, MVC Core does not by itself lead teams to a “pit of success” for maintainable code in the long run. I think that MediatR or Wolverine with MVC Core can help, but I think we can do better in the long run by moving away from MVC Core.

By the time this series is over, I will be leaning very hard into organizing code in a vertical slice architecture style and seeing how to use the Critter Stack to create maintainability and testability without the typically complex “Ports and Adapter” style architecture that well meaning server side development teams have been trying to use in the past decade or two.

While I introduced Wolverine today as a “mediator” tool within MVC Core, by the time this series is done we’ll move away from MVC Core with or without MediatR or “Wolverine as MediatR” and use Wolverine’s HTTP endpoint model by itself as a simpler alternative with less code ceremony — and I’m going to try hard to make the case that that simpler model is a superior way to build systems.

Building a Critter Stack Application: Integrating Marten into Our Application

In the previous couple posts I’ve introduced Marten as a standalone library and some of its capabilities for persisting events and creating projected views off those events within an event sourcing persistence strategy. Today I want to end the week by simply talking about how to integrate Marten into an ASP.Net Core application.

Oskar’s Introduction to Event Sourcing – Self Paced Kit has a world of information for folks getting started with event sourcing.

Let’s start a shell of a new web service project and add a Nuget reference to Marten through:

dotnet new webapi
dotnet add package Marten

Open up the Program.cs file in your new application and find this line of code at the top where it’s just starting to configure your application:

using Marten;
// Many other using statements

var builder = WebApplication.CreateBuilder(args);

Right underneath that (it doesn’t actually matter most times what order this all happens inside the Program code, but I’m giving Marten the seat at the head of the table so to speak), add this code:

// "AddTool()" is now the common .NET idiom
// for integrating tools into .NET applications
builder.Services.AddMarten(opts =>
{
    // You always have to tell Marten what the connection string to the underlying
    // PostgreSQL database is, but this is the only mandatory piece of 
    // configuration
    var connectionString = builder.Configuration.GetConnectionString("marten");
    opts.Connection(connectionString);
    
    // We have to tell Marten about the projection we built in the previous post
    // so that Marten will "know" how to project events to the IncidentDetails
    // projected view
    opts.Projections.Add<IncidentDetailsProjection>(ProjectionLifecycle.Inline);
})
    // This is a mild optimization
    .UseLightweightSessions();

That little bit of code is adding the necessary Marten services to your application’s underlying IoC container with the correct scoping. The main services you’ll care about are:

  • IDocumentStore (Singleton): the root configuration of the Marten database
  • IQuerySession (Scoped): a read-only subset of the IDocumentSession
  • IDocumentSession (Scoped): Marten’s unit of work service that also exposes capabilities for querying and the event store

You can read more about the bootstrapping options in Marten in the documentation. If you’re wondering what “Lightweight Session” means to Marten, you can learn more about the different flavors of sessions in the documentation, but treat that as an advanced subject that’s not terribly relevant to this post.

And also, the usage of AddMarten() should feel familiar to .NET developers now as it follows the common idioms for integrating external tools into .NET applications through the generic host infrastructure that came with .NET Core. As a long term .NET developer, I cannot exaggerate how valuable I think this standardization of application bootstrapping has been for the OSS community in .NET.

Using Marten in an MVC Controller

For right now, I want to assume that many of you are already familiar with ASP.Net MVC Core, so let’s start by showing the usage of Marten within a simple controller to build the first couple endpoints to log a new incident and fetch the current state of an incident in our new incident tracking, help desk service:

public class IncidentController : ControllerBase
{
    private readonly IDocumentSession _session;

    public IncidentController(IDocumentSession session)
    {
        _session = session;
    }

    [HttpPost("/api/incidents")]
    public async Task<IResult> Log(
        [FromBody] LogIncident command
        )
    {
        var userId = currentUserId();
        var logged = new IncidentLogged(command.CustomerId, command.Contact, command.Description, userId);

        var incidentId = _session.Events.StartStream(logged).Id;
        await _session.SaveChangesAsync(HttpContext.RequestAborted);

        return Results.Created("/api/incidents/" + incidentId, incidentId);
    }

    [HttpGet("/api/incidents/{incidentId}")]
    public async Task<IResult> Get(Guid incidentId)
    {
        // In this case, the IncidentDetails are projected
        // "inline", meaning we can load the pre-built projected
        // view
        var details = await _session.LoadAsync<IncidentDetails>(incidentId);

        return details != null
            ? Results.Json(details)
            : Results.NotFound();
    }

    private Guid currentUserId()
    {
        // let's say that we do something here that "finds" the
        // user id as a Guid from the ClaimsPrincipal
        var userIdClaim = User.FindFirst("user-id");
        if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var id))
        {
            return id;
        }

        throw new UnauthorizedAccessException("No user");
    }
}

It’s important to note at this point (this might change in Marten 7) that the IDocumentSession should be disposed when you’re done using it to tell Marten to close down any open database connections. In the usage above, the scoped IoC container mechanics in ASP.Net Core are handling all the necessary object disposal for you.
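
When you’re outside of a scoped container, say in a console utility, you own that disposal yourself. Here’s a minimal sketch (the method and its arguments are just for illustration):

public static async Task PrintStatus(IDocumentStore store, Guid incidentId)
{
    // "await using" guarantees the session is disposed and its
    // database connection released, even if an exception is thrown
    await using var session = store.LightweightSession();

    var details = await session.LoadAsync<IncidentDetails>(incidentId);
    Console.WriteLine(details?.Status);
}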

Summary and What’s Next

Today we strictly just looked at how to integrate Marten services into a .NET application using the AddMarten() mechanism. Using a simple MVC Core Controller, we saw how Marten services are available and managed in the application’s IoC container and how to perform basic event sourcing actions in the context of little web service endpoints.

In later posts in this series we’ll actually replace this IncidentController with Wolverine endpoints and some “special sauce” with Marten to be much more efficient.

In the next post, I think I want to talk through the CQRS architectural style with Marten. Bear with me, but I’ll still be using explicit code with MVC Core controllers that folks are probably already familiar with to talk over the requirements and Marten capabilities in isolation. Don’t worry though, I will eventually introduce Wolverine into the mix to show how the Wolverine + Marten integration can make your code very clean and testable.

Building a Critter Stack Application: Marten Projections

In the previous post I showed how to use the Marten library as the storage mechanism for events and event streams within an event sourcing persistence strategy. If you’re following along, you’ve basically learned how to stuff little bits of JSON into a database as the authoritative source of truth for your system. You might be asking yourself “what the @#$%@# am I supposed to do with this stuff now?” In today’s post I’m going to show you how Marten can help you derive the current state of the system from the raw event data through its usage of projections.

For more information about the conceptual role of projections in an event sourcing system, see my colleague Oskar Dudycz‘s post Guide to Projections and Read Models in Event-Driven Architecture.

Back to our help desk service, last time we created event streams representing each incident with events like:

public record IncidentLogged(
    Guid CustomerId,
    Contact Contact,
    string Description,
    Guid LoggedBy
);
 
public class IncidentCategorised
{
    public IncidentCategory Category { get; set; }
    public Guid UserId { get; set; }
}
 
public record IncidentPrioritised(IncidentPriority Priority, Guid UserId);
 
public record AgentAssignedToIncident(Guid AgentId);
 
public record AgentRespondedToIncident(        
    Guid AgentId,
    string Content,
    bool VisibleToCustomer);
 
public record CustomerRespondedToIncident(
    Guid UserId,
    string Content
);
 
public record IncidentResolved(
    ResolutionType Resolution,
    Guid ResolvedBy,
    DateTimeOffset ResolvedAt
);

Those events are directly stored in our database as our single source of truth, but we will absolutely need to derive the current state of an incident to support:

  • User interface screens
  • Reports
  • Decision making within the help desk workflow (what event sourcing folks call the “write model”)

For now, let’s say that we’d really like to have this view of a single incident:

public class IncidentDetails
{
    public Guid Id { get; set; }
    public Guid CustomerId{ get; set; }
    public IncidentStatus Status{ get; set; }
    public IncidentNote[] Notes { get; set; } = Array.Empty<IncidentNote>();
    public IncidentCategory? Category { get; set; }
    public IncidentPriority? Priority { get; set; }
    public Guid? AgentId { get; set; }
    public int Version { get; set; }
}

Let’s teach Marten how to combine the raw events describing an incident into our new IncidentDetails view. The easiest possible way to do that is to drop some new methods onto our IncidentDetails class to “teach” Marten how to modify the projected view:

public class IncidentDetails
{
    public IncidentDetails()
    {
    }

    public IncidentDetails(IEvent<IncidentLogged> logged)
    {
        Id = logged.StreamId;
        CustomerId = logged.Data.CustomerId;
        Status = IncidentStatus.Pending;
    }

    public Guid Id { get; set; }
    public Guid CustomerId{ get; set; }
    public IncidentStatus Status{ get; set; }
    public IncidentNote[] Notes { get; set; } = Array.Empty<IncidentNote>();
    public IncidentCategory? Category { get; set; }
    public IncidentPriority? Priority { get; set; }
    public Guid? AgentId { get; set; }

    // Marten itself will set this to its tracked
    // revision number for the incident
    public int Version { get; set; }

    public void Apply(IncidentCategorised categorised) => Category = categorised.Category;
    public void Apply(IncidentPrioritised prioritised) => Priority = prioritised.Priority;
    public void Apply(AgentAssignedToIncident prioritised) => AgentId = prioritised.AgentId;
    public void Apply(IncidentResolved resolved) => Status = IncidentStatus.Resolved;
    public void Apply(ResolutionAcknowledgedByCustomer acknowledged) => Status = IncidentStatus.ResolutionAcknowledgedByCustomer;
    public void Apply(IncidentClosed closed) => Status = IncidentStatus.Closed;
}

In action, the simplest way to execute the projection is to do a “live aggregation” as shown below:

static async Task PrintIncident(IDocumentStore store, Guid incidentId)
{
    await using var session = store.LightweightSession();
    
    // Tell Marten to load all events -- in order -- for the designated
    // incident event stream, then project that data into an IncidentDetails
    // view
    var incident = await session.Events.AggregateStreamAsync<IncidentDetails>(incidentId);

    // Actually print something so the method lives up to its name
    Console.WriteLine($"Incident {incident?.Id} has status {incident?.Status}");
}

You can see a more complicated version of this projection in action by running the EventSourcingDemo project from the command line. Just see the repository README for instructions on setting up the database.

Marten is using a set of naming conventions to “know” how to pass event data to the IncidentDetails objects. As you can probably guess, Marten is calling the Apply() overloads to mutate the IncidentDetails object for each event based on the event type. Those conventions are documented here — and yes, there are plenty of other options for using more explicit code instead of the conventional approach if you don’t care for that.

This time, with immutability!

In the example above, I purposely chose the simplest possible approach, and that led me to using a mutable structure for IncidentDetails that kept all the details of how to project the events in the IncidentDetails class itself. As an alternative, let’s make IncidentDetails immutable as a C# record instead like so:

public record IncidentDetails(
    Guid Id,
    Guid CustomerId,
    IncidentStatus Status,
    IncidentNote[] Notes,
    IncidentCategory? Category = null,
    IncidentPriority? Priority = null,
    Guid? AgentId = null,
    int Version = 1
);

And as another alternative, let’s say you’d rather have the Marten projection logic external to the nice, clean IncidentDetails code above. That’s still possible by creating a separate class. The most common projection type is to project the events of a single stream, and for that you can subclass the Marten SingleStreamProjection base class to create your projection logic as shown below:

public class IncidentDetailsProjection: SingleStreamProjection<IncidentDetails>
{
    public static IncidentDetails Create(IEvent<IncidentLogged> logged) =>
        new(logged.StreamId, logged.Data.CustomerId, IncidentStatus.Pending, Array.Empty<IncidentNote>());

    public IncidentDetails Apply(IncidentCategorised categorised, IncidentDetails current) =>
        current with { Category = categorised.Category };

    public IncidentDetails Apply(IncidentPrioritised prioritised, IncidentDetails current) =>
        current with { Priority = prioritised.Priority };

    public IncidentDetails Apply(AgentAssignedToIncident prioritised, IncidentDetails current) =>
        current with { AgentId = prioritised.AgentId };

    public IncidentDetails Apply(IncidentResolved resolved, IncidentDetails current) =>
        current with { Status = IncidentStatus.Resolved };

    public IncidentDetails Apply(ResolutionAcknowledgedByCustomer acknowledged, IncidentDetails current) =>
        current with { Status = IncidentStatus.ResolutionAcknowledgedByCustomer };

    public IncidentDetails Apply(IncidentClosed closed, IncidentDetails current) =>
        current with { Status = IncidentStatus.Closed };
}

The exact same set of naming conventions still apply here, with Apply() methods creating a new revision of the IncidentDetails for each event, and the Create() method helping Marten to start an IncidentDetails object for the first event in the stream.

This usage does require you to register the custom projection class upfront in the Marten configuration like this:

var connectionString = "Host=localhost;Port=5433;Database=postgres;Username=postgres;password=postgres";
await using var store = DocumentStore.For(opts =>
{
    opts.Connection(connectionString);
    
    // Telling Marten about the projection logic for the IncidentDetails
    // view of the events
    opts.Projections.Add<IncidentDetailsProjection>(ProjectionLifecycle.Live);
});

Don’t worry too much about that “Live” option, we’ll dive deeper into projection lifecycles as we progress in this series.

Summary and What’s Next

Projections are a Marten feature that enables you to create usable views out of the raw event data. We used the simplest projection recipes in this post to create an IncidentDetails view out of the raw incident events that we will use later on to build our web service.

In this sample, I was showing Marten’s ability to evaluate projected views on the fly by loading the events into memory and combining them into the final projection result on demand. Marten also has the ability to persist these projected data views ahead of time for faster querying (“Inline” or “Async” projections). If you’re familiar with the concept of materialized views in databases that support that, projections running inline or in a background process are a close analogue.
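
And to make that payoff concrete, once a projection runs with a persisted lifecycle, the projected documents can be queried with Marten’s LINQ support just like any other document. A quick sketch, assuming the IncidentDetailsProjection was registered as Inline or Async:

// Because Inline/Async projections persist IncidentDetails as plain
// documents, we can query them like any other Marten document
var pendingIncidents = await session
    .Query<IncidentDetails>()
    .Where(x => x.Status == IncidentStatus.Pending)
    .ToListAsync();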

In the next post, I think I just want to talk about how to integrate Marten into an ASP.Net Core application and utilize Marten in a simple MVC Core controller — but don’t worry, before we’re done, we’re going to replace the MVC Core code with much slimmer code using Wolverine, but one new concept or tool at a time!