Projections, Consistency Models, and Zero Downtime Deployments with the Critter Stack

This content will later be published as a tutorial on one of our documentation websites. This was originally “just” an article on doing blue/green deployments when using projections with Marten, hence the two martens up above :)

Event Sourcing may not seem that complicated to implement, and you might be tempted to forego any kind of off the shelf tooling and just roll your own. Appending events to storage by itself isn’t all that difficult, but you’ll almost always need projections of some sort to derive the system state in a usable way, and that’s a whole can of complexity worms: you need to worry about consistency models, concurrency, performance, and snapshotting, and you’ll inevitably need to change a projection in a deployment down the road.

Fortunately, the full combination of Marten and Wolverine (the “Critter Stack”) for Event Sourcing architectures gives you powerful options to cover a variety of projection scenarios and needs. Marten by itself provides multiple ways to achieve strongly consistent projected data when you have to have that. When you prefer or truly need eventual consistency instead for certain projections, Wolverine helps Marten scale up to larger data loads by distributing the background work that Marten does for asynchronous projection building. Moreover, when you put the two tools together, the Critter Stack can support zero downtime deployments that involve projections rebuilds without sacrificing strong consistency for certain types of projections.

Consistency Models in Marten

One of the decision points in building projections is determining for each individual projection view whether you need strong consistency where the projected data is guaranteed to match the current state of the persisted events, or if it would be preferable to rely on eventual consistency where the projected data might be behind the current events, but will “eventually” be caught up. Eventual consistency might be attractive because there are definite performance advantages to moving some projection building to an asynchronous, background process (Marten’s async daemon feature). Besides the performance benefits, eventual consistency might be necessary to accommodate cases where highly concurrent system inputs would make it very difficult to update projection data within command handling without either risking data loss or applying events out of sequential order.

Marten supports three projection lifecycles that we’ll explore throughout this paper:

  1. “Live” projections are calculated in memory by fetching the raw events and building up an aggregated view. Live projections are strongly consistent.
  2. “Inline” projections are persisted in the Marten database, and the projected data is updated as part of the same database transaction whenever any events are appended. Inline projections are also strongly consistent.
  3. “Async” projections are continuously built and updated in the database by a background process in Marten called the “Async Daemon” as new events come in. On its face this is obviously eventual consistency, but there’s a technical wrinkle where Marten can “fast forward” asynchronous projections to still be strongly consistent on demand.
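To make those lifecycles concrete, here’s a sketch of how each one might be registered with Marten at bootstrapping time. The Incident and IncidentProjection types are stand-ins, and in practice you’d pick exactly one lifecycle per projection:

```csharp
builder.Services.AddMarten(opts =>
{
    // 1. Live: aggregated on demand in memory, nothing persisted
    opts.Projections.LiveStreamAggregation<Incident>();

    // 2. Inline: a snapshot document is persisted in the same
    //    transaction that appends the events
    opts.Projections.Snapshot<Incident>(SnapshotLifecycle.Inline);

    // 3. Async: built in the background by the async daemon
    opts.Projections.Add<IncidentProjection>(ProjectionLifecycle.Async);
});
```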

For Inline or Async projections, the projected data is being persisted to Marten using its document database capabilities and that data is available to be loaded through all of Marten’s querying capabilities, including its LINQ support. Writing “snapshots” of the projected data to the database also has an obvious performance advantage when it comes to reading projection state, especially if your event streams become too long to do Live aggregations on demand.
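As a quick illustration, once the snapshots are persisted (Inline or Async lifecycles), they can be queried like any other Marten document. The Incident properties shown here are hypothetical:

```csharp
// The projected snapshots are stored as plain Marten documents,
// so the full LINQ support applies to them
var openIncidents = await session.Query<Incident>()
    .Where(x => x.Status == IncidentStatus.Open)
    .OrderByDescending(x => x.Opened)
    .ToListAsync();
```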

Now let’s talk about some common projection scenarios and how you should choose projection lifecycles for these scenarios:

A “write model” projection for a single event stream that represents a logical business entity or workflow like an “Invoice” or an “Order” with all the necessary information you would need in command handlers to “decide” how to process incoming commands. You will almost certainly need this data to be strongly consistent with the events in your command processing. I think it’s a perfectly good default to start with a Live lifecycle, and maybe even move to Inline if you want snapshotting in the case of longer event streams, but there’s a way in Marten to actually use Async as well with its FetchForWriting() API as shown below in this sample MVC controller that acts as a command handler (the “C” in CQRS):

    [HttpPost("/api/incidents/categorise")]
    public async Task<IActionResult> Post(
        CategoriseIncident command,
        IDocumentSession session,
        IValidator<CategoriseIncident> validator)
    {
        // Some validation first
        var result = await validator.ValidateAsync(command);
        if (!result.IsValid)
        {
            return Problem(statusCode: 400, detail: result.Errors.Select(x => x.ErrorMessage).Join(", "));
        }

        var userId = currentUserId();

        // This will give us access to the projected current Incident state for this event stream
        // regardless of whatever the projection lifecycle is!
        var stream = await session.Events.FetchForWriting<Incident>(command.Id, command.Version, HttpContext.RequestAborted);
        if (stream.Aggregate == null) return NotFound();
        
        if (stream.Aggregate.Category != command.Category)
        {
            stream.AppendOne(new IncidentCategorised
            {
                Category = command.Category,
                UserId = userId
            });
        }

        await session.SaveChangesAsync();

        return Ok();
    }

The FetchForWriting() API is the recommended way to write command handlers that need to use a “write model” to potentially append new events. FetchForWriting lets you easily opt into optimistic concurrency protection, which you probably want in order to guard against concurrent access to the same event stream. Just as importantly, FetchForWriting completely encapsulates whatever projection lifecycle we’re using for the Incident write model above. If Incident is registered as:

  • Live, then this API does a live aggregation in memory
  • Inline, then this API just loads the persisted snapshot out of the database similar to IQuerySession.LoadAsync<Incident>(id)
  • Async, then this API does a “catch up” model for you by fetching — in one database round trip mind you! — the last persisted snapshot of the Incident and any captured events to that event stream after the last persisted snapshot, and incrementally applies the extra events to effectively “advance” the Incident to reflect all the current events captured in the system.

The takeaway here is that you can have the strongly consistent model you need for command handlers with concurrent access protections and be able to use any projection lifecycle as you see fit. You can even change lifecycles later without having to make code changes!

In the next section I’ll discuss how that “catch up” ability will allow you to make zero downtime deployments with projection changes.

I didn’t want to use any “magic” in the code sample above to discuss the FetchForWriting API in Marten, but do note that Wolverine’s “aggregate handler workflow” approach to streamlined command handlers utilizes Marten’s FetchForWriting API under the covers. Likewise, Wolverine has some other syntactic sugar for more easily using Marten’s FetchLatest API.

A “read model” projection for a single stream that again represents the state of a logical business entity or workflow, but this time optimized for whatever data a user interface or query endpoint of your system needs. In some circumstances you might be able to get away with eventually consistent data for your “read model” projections, but for the sake of this article let’s say you do want strongly consistent information. There’s also a slightly lighter API in Marten called FetchLatest for fetching a read only view of a projection (in case you’re wondering, this only works with a single stream projection):

public static async Task read_latest(
    // Watch this, only available on the full IDocumentSession
    IDocumentSession session,
    Guid invoiceId)
{
    var invoice = await session
        .Events.FetchLatest<Projections.Invoice>(invoiceId);
}

Our third common projection role is simply having a projected view for reporting. This kind of projection may incorporate information from outside the event data, combine information from multiple event streams into a single document or record, or even cross over between logical types of event streams. At this point it’s not really possible to do Live aggregations like this, and an Inline projection lifecycle would be problematic with any level of concurrent requests impacting the same “multi-stream” projection state. You’ll pretty well have to use the Async lifecycle and accept some level of eventual consistency.

It’s beyond the scope of this paper, but there are ways to “wait” for an asynchronous projection to catch up or to take “side effect” actions whenever an asynchronous projection is being updated in a background process.
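For a taste of the “wait” option, Marten ships a helper (handy in tests and readiness checks) that blocks until the async daemon has caught all asynchronous projections up to the latest appended event. The 5.Seconds() extension comes from JasperFx.Core; check the Marten documentation for the exact signature in your version:

```csharp
// Wait up to 5 seconds for all asynchronous projections
// to catch up to the current event high water mark
await store.WaitForNonStaleProjectionDataAsync(5.Seconds());
```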

I should note that “read model” and “write model” are just roles within your system, and it’s going to be common to get by with a single model that happily plays both roles in simpler systems, but don’t hesitate to use separate projection representations of the same events if the consumers of your system’s data just have very different needs.

Persisting the snapshots comes with a potentially significant challenge when there is inevitably some reason why the projection data has to be rebuilt as part of a deployment. Maybe it’s because of a bug, new business requirements, a change in how your system calculates a metric from the event data, or even just adding an entirely new projection view of the same old event data — but the point is, that kind of change is pretty likely and it’s more reliable to plan for change rather than depend on being perfect upfront in all of your event modeling.

Fortunately, Marten, with some serious help from Wolverine, has some answers for that!

There’s also an option to write projected data to “flat” PostgreSQL tables as you see fit.

Zero Downtime with Blue / Green Deployments

As I alluded to just above, one of the biggest challenges with systems using event sourcing is what happens when you need to deploy changes that involve projection changes that will require rebuilding persisted data in the database. As a community we’ve invested a lot of time into making the projection rebuild process smoother and faster, but there’s admittedly more work yet to come.

Instead of requiring some system downtime to do projection rebuilds before a new deployment, the Critter Stack can now do a true “blue / green” deployment where both the old and new versions of the system, and even versioned projections, can run in parallel.

Let’s rewind a little bit and talk about how to make this happen, because it is a little bit of a multi-step process.

First off, try to only use FetchForWriting() or FetchLatest() when you need strongly consistent access to any kind of single stream projection (definitely “write model” projections and probably “read model” projections as well).

Next, if you need to make some kind of breaking changes to a projection of any kind, use the ProjectionVersion property and increment it to the next version like so:

// This class contains the directions for Marten about how to create the
// Incident view from the raw event data
public class IncidentProjection: SingleStreamProjection<Incident>
{
    public IncidentProjection()
    {
        // THIS is the magic sauce for side by side execution
        // in blue/green deployments
        ProjectionVersion = 2;
    }

    public static Incident Create(IEvent<IncidentLogged> logged) =>
        new(logged.StreamId, logged.Data.CustomerId, IncidentStatus.Pending, Array.Empty<IncidentNote>());

    public Incident Apply(IncidentCategorised categorised, Incident current) =>
        current with { Category = categorised.Category };

    // More event type handling...
}

By incrementing the projection version, we’re effectively making this a completely new projection in the application that will use completely different database tables for the Incident projection version 1 and version 2. This allows the “blue” nodes running the starting version of our application to keep chugging along using the old version of Incident while “green” nodes running the new version of our application can be running completely in parallel, but depending on the new version 2 of the Incident projection.

You will also need to make every newly revised projection run under the Async lifecycle. As we discussed earlier, the FetchForWriting API is able to “fast forward” a single Incident write model projection as needed for command processing, so our “green” nodes will be able to handle commands against Incident event streams with the correct system state. Admittedly, the system might be running a little slower until the asynchronous Incident V2 projection gets caught up, but “slower” is arguably much better than “down”.

With the case of multi-stream projections (our reports), there is no equivalent to FetchLatest, so we’re stuck with eventual consistency. What you can at least do is deploy some “green” nodes with the new version of the system and the revisioned projections and let it start building the new projections from scratch as it starts — but not allow those nodes to handle outside requests until the new versions of the projection are “close” to being caught up to the current event store.
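One way to build that gate is a readiness check that compares the daemon’s projection progress against the event store’s high water mark. This is only a sketch assuming Marten’s Advanced APIs for projection progress and event statistics, and the lag threshold is an arbitrary choice:

```csharp
// Rough readiness check for a "green" node: only report ready once
// the rebuilt projections are close to the current event sequence
public static async Task<bool> IsCaughtUp(IDocumentStore store, CancellationToken token)
{
    var statistics = await store.Advanced.FetchEventStoreStatistics(token);
    var progress = await store.Advanced.AllProjectionProgress(token);

    // "Close" is a judgment call; here we tolerate a lag of 100 events
    return progress.All(shard => statistics.EventSequenceNumber - shard.Sequence < 100);
}
```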

Now, the next question is: how does Marten know to run only the “green” versions of the projections on “green” nodes, and how does it make sure that every projection + version combination is running somewhere?

While the Wolverine integration with Marten brings plenty of nice-to-have features for the coding model, this next step is absolutely mandatory for the blue/green approach. In our application, we need to use Wolverine to distribute the background projection processes across our entire application cluster:

// This would be in your application bootstrapping
opts.Services.AddMarten(m =>
    {
        // Other Marten configuration

        m.Projections.Add<IncidentProjection>(ProjectionLifecycle.Async);

    })
    .IntegrateWithWolverine(m =>
    {
        // This makes Wolverine distribute the registered projections
        // and event subscriptions evenly across a running application
        // cluster
        m.UseWolverineManagedEventSubscriptionDistribution = true;
    });

The option above enables Wolverine to distribute projections to running application nodes based on each node’s declared capabilities. Wolverine also tries to distribute the background projections evenly so they’re spread out over the running service nodes of our application for better scalability, instead of only running “hot/cold” like earlier versions of Marten’s async daemon did.

As “blue” nodes are pulled offline, it’s safe to drop the Marten table storage for the projection versions that are no longer used. There’s nothing built into the Critter Stack to automate that yet, but you can easily do it against PostgreSQL with plain SQL.

Summary

This is a powerful set of capabilities that can be valuable in real-life, long-lived systems that utilize Event Sourcing and CQRS with the Critter Stack, but I think we as a community have failed until now to put all of this content together in one place and unlock its usage by more people.

I am not aware of any other Event Sourcing tool in .NET or any other technical ecosystem for that matter that can match Marten & Wolverine’s ability to support this kind of potentially zero downtime deployment model. I’ve also never seen another Event Sourcing tool that has something like Marten’s FetchForWriting and FetchLatest APIs. I definitely haven’t seen any other CQRS tooling enable your application code to be as streamlined as the Critter Stack’s approach to CQRS and Event Sourcing.

I hope the key takeaway here is that Marten is a mature tool that’s been beaten on by real people building and maintaining real systems, and that it already solves challenging technical issues in Event Sourcing. Lastly, Marten is the most commonly used Event Sourcing tool for .NET as is, and I’m very confident in saying it has by far the most complete and robust feature set while also having a very streamlined getting started experience.

So this was meant to be a quick win blog post that I was going to bang out at the kitchen table after dinner last night, but instead took most of the next day. The Critter Stack core team is working on a new set of tutorials for both Marten and Wolverine, and this will hopefully take its place with that new content soon.

Pretty Substantial Wolverine 3.11 Release

The Critter Stack community just made a pretty big Wolverine 3.11 release earlier today with 5 brand new contributors making their first pull requests! The highlights are:

  • Efficiency and throughput improvements for publishing messages through the Kafka transport
  • Hopefully more resiliency in the Kafka transport
  • A fix for object disposal mechanics that probably got messed up in the 3.0 release (oops on my part)
  • Improvements to the Azure Service Bus transport’s ability to handle larger message batches
  • New options for the Pulsar transport
  • Expanded ability for interop with non-Wolverine services with the Google Pubsub transport
  • Some fixes for Wolverine.HTTP

Wolverine 4.0 is also under way, but there will be at least some Wolverine.HTTP improvements in the 3.* branch before we get to 4.0.

Big thanks to the whole Critter Stack community for continuing to support Wolverine, including the folks who took the time to create actionable bug reports that led to several of the fixes and the folks who made fixes to the documentation website as well!

New Critter Stack Features

JasperFx Software offers custom consulting engagements or ongoing support contracts for any part of the Critter Stack. Some of the features in this post were either directly part of client engagements or inspired by our work with JasperFx clients.

This week brought out some new functionality and inevitably some new bug fixes in Marten 7.38 and Wolverine 3.10. I’m actually hopeful this is about the last Marten 7.* release, and Marten 8.0 is heavily underway. Likewise, Wolverine 3.* is probably about played out, and Wolverine 4.0 will come out at the same time. For now though, here’s some highlights of new functionality.

Delete All Marten Data for a Single Tenant

A JasperFx client has a need to occasionally remove all data for a single named tenant across their entire system. Some of their Marten documents and the events themselves are multi-tenanted, while others are global documents. In their particular case, they’re using Marten’s support for managed table partitions by tenant, but other folks might not. To make the process of cleaning out all data for a single tenant as easy as possible regardless of your particular Marten storage configuration, Marten 7.38 added this API:

public static async Task delete_all_tenant_data(IDocumentStore store, CancellationToken token)
{
    // Removes all document and event data stored for the tenant named "AAA"
    await store.Advanced.DeleteAllTenantDataAsync("AAA", token);
}

Rabbit MQ Quorum Queues or Streams with Wolverine

At the request of another JasperFx Software customer, Wolverine has the ability to declare Rabbit MQ quorum queues or streams like so:

var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    opts
        .UseRabbitMq(builder.Configuration.GetConnectionString("rabbit"))
        
        // You can configure the queue type for declaration with this
        // usage as well
        .DeclareQueue("stream", q => q.QueueType = QueueType.stream)

        // Use quorum queues by default as a policy
        .UseQuorumQueues()

        // Or instead use streams
        .UseStreamsAsQueues();

    opts.ListenToRabbitQueue("quorum1")
        // Override the queue type in declarations for a
        // single queue, and the explicit configuration will win
        // out over any policy or convention
        .QueueType(QueueType.quorum);
});

Note that nothing in Wolverine changed other than giving you the ability to make Wolverine declare Rabbit MQ queues as quorum queues or as streams.

Easy Access to Marten Event Sourced Aggregation Data in Wolverine

While the Wolverine + Marten “aggregate handler workflow” is a popular feature for command handlers that may need to append events, sometimes you just want a read only version of an event sourced aggregate. Marten has its FetchLatest API that lets you retrieve the current state of an aggregated projection consistent with the current event store data, regardless of the projection’s lifecycle (live, inline, or async). Wolverine now has a quick shortcut for accessing that data as a value “pushed” into your HTTP endpoints by decorating a parameter of your handler method with the new [ReadAggregate] attribute like so:

[WolverineGet("/orders/latest/{id}")]
public static Order GetLatest(Guid id, [ReadAggregate] Order order) => order;

or injected into a message handler similarly like this:

public record FindAggregate(Guid Id);

public static class FindLettersHandler
{
    // This is admittedly just some weak sauce testing support code
    public static LetterAggregateEnvelope Handle(
        FindAggregate command, 
        [ReadAggregate] LetterAggregate aggregate)
    
        => new LetterAggregateEnvelope(aggregate);
}

This feature was inspired by a session with a JasperFx Software client where their HTTP endpoints frequently needed to access projected aggregate data for multiple event streams, but only append events to one stream. This functionality was probably already overdue anyway as a way to quickly get projection data any time you just need to read that data as part of a command or query handler.

Critter Stack Roadmap Update for February

The last time I wrote about the Critter Stack / JasperFx roadmap, I was admittedly feeling a little conservative about big new releases and really just focused on stabilization. In the past week though, the rest of the Critter Stack Core Team decided it was time to get going on the next round of releases for what will be Marten 8.0 and Wolverine 4.0, so let’s get into the details.

Definitely in Scope:

  • Upgrade Marten (and Weasel/Wolverine) to Npgsql 9.0
  • Drop .NET 6/7 support in Marten and .NET 7 support in Wolverine. Both will have targets for .NET 8/9
  • Consolidation of supporting libraries. What is today JasperFx.Core, JasperFx.CodeGeneration, and Oakton are getting combined into a new library called JasperFx. That’s partially to simplify setup by reducing the number of dotnet add ... calls you need to do, but also to potentially streamline configuration that’s today duplicated between Marten & Wolverine.
  • Drop the synchronous APIs that are already marked as [Obsolete] in Marten’s API surface
  • “Stream Compacting” in Marten/Wolverine/CritterWatch. This feature is being done in partnership with a JasperFx client

In addition to that work, JasperFx Software is working hard on the forthcoming “Critter Watch” tooling that will be a management and monitoring console application for Wolverine and Marten, so there’s also a bit of work to help support Critter Watch through improvements to instrumentation and additional APIs that will land in Wolverine or Marten proper.

I’ll write much more about Critter Watch soon. Right now the MVP looks to be:

  1. A dead letter message explorer and management tool for Wolverine
  2. A view of your Critter Watch application configuration, which will be able to span multiple applications to better understand how messages flow throughout your greater ecosystem of services
  3. Viewing and managing asynchronous projections in Marten, which should include performance information, a dashboard explaining what projections or subscriptions are running, and the ability to trigger projection rebuilds, rewind subscriptions, and to pause/restart projections at runtime
  4. Displaying performance metrics about your Wolverine / Marten application by integration with your Otel tooling (we’re initially thinking about PromQL integration here).

Maybe in Scope???

It may be that we go for a quick and relatively low impact Marten 8 / Wolverine 4 release, but here are the things we are considering for this round of releases and would love any feedback or requests you might have:

  • Overhaul the Marten projection support, with a very particular emphasis on simplifying the multi-stream projections especially. The core team & I did quite a bit of work on that in the 4th quarter last year in the first attempt at Marten 8, and that work might feed into this effort as well. Part of that goal is to make it as easy as possible to use purely explicit code for projections as a ready alternative to the conventional Apply/Create method conventions. There’s an existing conversation in this issue.
  • Multi-tenancy support for EF Core with Wolverine commensurate with the existing Marten + Wolverine + multi-tenancy support. I really want to be expanding the Wolverine user base this year, and better EF Core support feels like a way to help achieve that.
  • Revisit the async daemon and add support for dependencies between asynchronous projections and/or the ability to “lock” the execution of 2 or more projections together. That’s 100% about scalability and throughput for folks who have particularly nasty complicated multi-stream projections. This would also hopefully be in partnership with a JasperFx client.
  • Revisiting the event serialization in Marten and its ability to support “downcasters” or “upcasters” for event versioning. There is an opportunity to ratchet up performance by moving to higher performance serializers like MessagePack or MemoryPack for the event serialization. You’d have to make that an opt in model, probably support side by side JSON & whatever other serialization, and make sure folks know that means losing the LINQ querying support for Marten events if you opt for the better performance.
  • Potentially risky time sink: pull quite a bit of the event store support code in Marten today into a new shared library (like the IEvent model and maybe quite a bit of the projection subsystem) where that code could be shared between Marten and the long planned Sql Server-backed event store. And maybe even a CosmosDb integration.
  • Some improvements to Wolverine specifically for modular monolith usage discussed in more depth in the next section.

Wolverine 4 and Modular Monoliths

This is all related to this issue in the Wolverine backlog about mixing and matching databases in the same application. So, the modular monolith thing in Wolverine? It’s admittedly taken some serious work in the past 3-4 months to make Wolverine work the way the creative folks pushing the modular monolith concept have needed.

I think we’re in good shape with Wolverine message handler discovery and routing for modular monoliths, but there are some challenges around database integration, the transactional inbox/outbox support, and transactional middleware within a single application that’s potentially talking to multiple databases from a single process. Things get more complicated still when you throw in the possibility of using multi-tenancy through separate databases.

Wolverine already does fine with an architecture where you might have separate logical “modules” in your system that generally work against the same database, but use separate database schemas for isolation.

Where Wolverine doesn’t yet go (and I’m also not aware of any other .NET tooling that actually solves this) is the case where separate modules may be talking to completely separate physical databases.

The work I’m doing right now with “Critter Watch” touches on Wolverine’s message storage, so it’s somewhat convenient to try to improve Wolverine’s ability to allow you to mix and match different databases and even different database engines from one Wolverine application as part of this release.

Retry on Errors in Wolverine

Coaching my daughter’s 1st/2nd grade basketball team is a trip. I don’t know that the girls are necessarily learning much, but one thing I’d love for them to understand is to “follow your shot” and try for a rebound and a second shot if the ball doesn’t go in the first time. That’s the tortured metaphor/excuse for the marten playing basketball for this post :-)

I’m currently helping a JasperFx Software client retrofit some concurrency protection into their existing system that uses Marten for event sourcing, by utilizing Marten’s FetchForWriting API deep in the guts of a custom repository to prevent their system from being put into an inconsistent state.

Great, right? Except that there’s now a very real possibility that their application will throw Marten’s ConcurrencyException when an operation fails in Marten’s optimistic concurrency checks.

Our next trick is building in some selective retries for the commands that could probably succeed if they just started over from the new system state after first triggering the concurrency check — and that’s an absolutely perfect use case for the built in Wolverine error handling policies!

This particular system was built around MediatR, which doesn’t have any built in error handling policies, so we’ll probably end up rigging up some kind of pipeline behavior or even a flat out decorator around MediatR. I did call out the error handling in Wolverine as an advantage in the Wolverine for MediatR Users guide.

In the ubiquitous “Incident Service” example we use in documentation here and there, we have a message handler for trying to automatically assign a priority to an in flight customer reported “Incident” like this:

public static class TryAssignPriorityHandler
{
    // Wolverine will call this method before the "real" Handler method,
    // and it can "magically" connect that the Customer object should be delivered
    // to the Handle() method at runtime
    public static Task<Customer?> LoadAsync(Incident details, IDocumentSession session)
    {
        return session.LoadAsync<Customer>(details.CustomerId);
    }

    // There's some database lookup at runtime, but I've isolated that above, so the
    // behavioral logic that "decides" what to do is a pure function below. 
    [AggregateHandler]
    public static (Events, OutgoingMessages) Handle(
        TryAssignPriority command, 
        Incident details,
        Customer customer)
    {
        var events = new Events();
        var messages = new OutgoingMessages();

        if (details.Category.HasValue && customer.Priorities.TryGetValue(details.Category.Value, out var priority))
        {
            if (details.Priority != priority)
            {
                events.Add(new IncidentPrioritised(priority, command.UserId));

                if (priority == IncidentPriority.Critical)
                {
                    messages.Add(new RingAllTheAlarms(command.IncidentId));
                }
            }
        }

        return (events, messages);
    }
}

The handler above depends on the current state of the Incident in the system, and it’s somewhat possible that two or more people or transactions are happily trying to modify the same Incident at the same time. The Wolverine aggregate handler workflow triggered by the [AggregateHandler] usage up above happily builds in optimistic concurrency protection such that an attempt to save the pending transaction will throw an exception if something else has modified that Incident between the command starting and the call to persist all changes.

Now, depending on the command, you may want to either:

  1. Immediately discard the command message because it’s now obsolete
  2. Just have the command message retried from scratch, either immediately, with a little delay, or even scheduled for a much later time

Wolverine will happily do that for you. While you can set global error handling policies, you can also fine tune the error handling for specific message handlers, exception types, and even exception details as shown below:

public static class TryAssignPriorityHandler
{
    public static void Configure(HandlerChain chain)
    {
        // It's a fall through, so you would only do *one*
        // of these options!

        // It can never succeed, so just discard it instead of wasting
        // time on retries or dead letter queues
        chain.OnException<ConcurrencyException>().Discard();

        // Do some selective retries with a progressive wait
        // in between tries, and if that fails, move it to the dead
        // letter storage
        chain.OnException<ConcurrencyException>()
            .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds())
            .Then
            .MoveToErrorQueue();
        
        // Or throw it away after a few tries...
        chain.OnException<ConcurrencyException>()
            .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds())
            .Then
            .Discard();
    }
    
    // rest of the handler code...
}

If you’re processing messages through Wolverine’s asynchronous messaging — and this includes the local, in memory queues too — you have the full set of error policies available. If instead you’re consuming Wolverine as a “mediator” tool, where you may be delegating to Wolverine like so:

public static async Task delegate_to_wolverine(IMessageBus bus, TryAssignPriority command)
{
    await bus.InvokeAsync(command);
}

Wolverine can still use any “Retry” or “Discard” error handling policies, and if Wolverine does a retry, it effectively starts from a completely clean slate so you don’t have to worry about any dirty state from scoped services used by the initial failed attempt to process the message.

Summary

Wolverine puts a ton of emphasis on allowing our users to build low ceremony code that’s highly testable, but we also aren’t compromising on resiliency or observability. While being a “mediator” wasn’t really our original hope and dream for Wolverine, it fills that role quite credibly and even brings some of the error handling resiliency that you may be used to in asynchronous messaging frameworks but that isn’t always a feature of smaller “mediator” tools.

Wolverine for MediatR Users

I happened to see this post from Milan Jovanović today about a little backlash to the MediatR library. For my part, I think MediatR is just a victim of its own success and any backlash is mostly due to folks misusing it very badly in unnecessarily complicated ways (that’s my experience). That aside, yes, I absolutely feel that Wolverine is a much stronger toolset that covers a much broader set of use cases while doing a lot more than MediatR to potentially simplify your application code and do more to promote testability, so here goes this post.

This is taken from the Wolverine for MediatR users guide in the Wolverine documentation.

MediatR is an extraordinarily successful OSS project in the .NET ecosystem, but it’s a very limited tool and the Wolverine team frequently fields questions from folks converting to Wolverine from MediatR. Offhand, the common reasons to do so are:

  1. Wolverine has built in support for the transactional outbox, even for its in memory, local queues
  2. Many people are using MediatR and a separate asynchronous messaging framework like MassTransit or NServiceBus while Wolverine handles the same use cases as MediatR and asynchronous messaging as well with one single set of rules for message handlers
  3. Wolverine’s programming model can easily result in significantly less application code than the same functionality would with MediatR

It’s important to note that Wolverine allows for a completely different coding model than MediatR or other “IHandler of T” application frameworks in .NET. While you can use Wolverine as a near exact drop in replacement for MediatR, that’s not taking full advantage of Wolverine’s capabilities.

Handlers

MediatR is an example of what I call an “IHandler of T” framework, just meaning that the primary way to plug into the framework is by implementing an interface signature from the framework like this simple example in MediatR:

public class Ping : IRequest<Pong>
{
    public string Message { get; set; }
}

public class PingHandler : IRequestHandler<Ping, Pong> 
{
    private readonly TextWriter _writer;

    public PingHandler(TextWriter writer)
    {
        _writer = writer;
    }

    public async Task<Pong> Handle(Ping request, CancellationToken cancellationToken)
    {
        await _writer.WriteLineAsync($"--- Handled Ping: {request.Message}");
        return new Pong { Message = request.Message + " Pong" };
    }
}

Now, if you assume that TextWriter is a registered service in your application’s IoC container, Wolverine could easily run the exact class above as a Wolverine handler. While most Hollywood Principle application frameworks usually require you to implement some kind of adapter interface, Wolverine instead wraps around your code, with this being a perfectly acceptable handler implementation to Wolverine:

// No marker interface necessary, and records work well for this kind of little data structure
public record Ping(string Message);
public record Pong(string Message);

// It is legal to implement more than one message handler in the same class
public static class PingHandler
{
    public static Pong Handle(Ping command, TextWriter writer)
    {
        writer.WriteLine($"--- Handled Ping: {command.Message}");
        return new Pong(command.Message + " Pong");
    }
}

So you might notice a couple of things that are different right away:

  • While Wolverine is perfectly capable of using constructor injection for your handlers and class instances, you can eschew all that ceremony and use static methods for slightly fewer object allocations
  • Like MVC Core and Minimal API, Wolverine supports “method injection,” so you can pass IoC registered services directly as arguments to the handler methods for a little less ceremony
  • There are no required interfaces on either the message type or the handler type
  • Wolverine discovers message handlers through naming conventions (or you can also use marker interfaces or attributes if you have to)
  • You can use synchronous methods for your handlers when that’s valuable so you don’t have to scatter return Task.CompletedTask; all over your code
  • Moreover, Wolverine’s best practice as much as possible is to use pure functions for the message handlers for the absolute best testability

There are more differences though. At a minimum, you probably want to look at Wolverine’s compound handler capability as a way to build more complex handlers.
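As a quick sketch of that compound handler idea (the Order types here are hypothetical, not from the sample application): Wolverine calls the LoadAsync() method first and passes its return value into Handle(), which keeps the decision logic itself a pure function:

```csharp
// Hypothetical types for illustration only
public record Order(Guid Id, bool IsApproved);
public record ApproveOrder(Guid OrderId);
public record OrderApproved(Guid OrderId);

public static class ApproveOrderHandler
{
    // Wolverine runs this first; its return value is delivered
    // as the Order argument to Handle() below
    public static Task<Order?> LoadAsync(ApproveOrder command, IDocumentSession session)
        => session.LoadAsync<Order>(command.OrderId);

    // The decision stays a synchronous, easily tested pure function.
    // Returning null means "nothing to do"
    public static OrderApproved? Handle(ApproveOrder command, Order order)
        => order.IsApproved ? null : new OrderApproved(order.Id);
}
```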

Wolverine was built with the express goal of allowing you to write very low ceremony code. To that end we try to minimize the usage of adapter interfaces, mandatory base classes, or attributes in your code.

Built in Error Handling

Wolverine’s IMessageBus.InvokeAsync() is the direct equivalent to MediatR’s IMediator.Send(), but the Wolverine usage also builds in support for some of Wolverine’s error handling policies such that you can build in selective retries.
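Using the Ping and Pong record types from the handler sample above, that request/response usage is just this (a sketch, assuming Wolverine is already bootstrapped in the application host):

```csharp
public static async Task<Pong> send_ping(IMessageBus bus)
{
    // The direct analogue to IMediator.Send(): runs the matching
    // handler inline and returns its response, while still applying
    // any configured retry or discard policies
    return await bus.InvokeAsync<Pong>(new Ping("Hello"));
}
```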

MediatR’s INotificationHandler

Point blank, you should not be using MediatR’s INotificationHandler for any kind of background work that needs a true delivery guarantee (i.e., the notification will get processed even if the process fails unexpectedly). This has consistently been one of the very first things I tell JasperFx customers when I start working with any codebase that uses MediatR.

MediatR’s INotificationHandler concept is strictly fire and forget, which is just not suitable if you need delivery guarantees for that work. Wolverine, on the other hand, supports both a “fire and forget” mode (Buffered in Wolverine parlance) and a durable, transactional inbox/outbox approach with its in memory, local queues such that work will not be lost in the case of errors. Moreover, using the Wolverine local queues allows you to take advantage of Wolverine’s error handling capabilities for a much more resilient system than you’ll achieve with MediatR.

The equivalent of INotificationHandler in Wolverine is just a message handler. You can publish messages at any time through the IMessageBus.PublishAsync() API, but if you just need to publish additional messages (either commands or events; to Wolverine it’s all just a message), you can utilize Wolverine’s cascading message feature as a way of building more testable handler methods.
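A minimal sketch of a cascading message (the types here are hypothetical): any message returned from a handler is published by Wolverine after the handler succeeds, so a unit test only has to assert on the return value rather than mocking a bus:

```csharp
// Hypothetical event and follow-up command types
public record IncidentLogged(Guid IncidentId);
public record NotifySupportTeam(Guid IncidentId);

public static class IncidentLoggedHandler
{
    // The returned message is "cascaded": Wolverine publishes it
    // only after this handler (and any transaction) completes
    public static NotifySupportTeam Handle(IncidentLogged message)
        => new NotifySupportTeam(message.IncidentId);
}
```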

MediatR IPipelineBehavior to Wolverine Middleware

MediatR uses its IPipelineBehavior model as a “Russian Doll” approach for handling cross cutting concerns across handlers. Wolverine has its own middleware model for cross cutting concerns that is far more capable and potentially much more efficient at runtime than the nested doll approach that MediatR (and MassTransit for that matter) takes with its pipeline behaviors.

The Fluent Validation example is just about the most complicated middleware solution in Wolverine, but you can expect that most custom middleware that you’d write in your own application would be much simpler.

Let’s just jump into an example. With MediatR, you might try to use a pipeline behavior to apply Fluent Validation to any handlers where there are Fluent Validation validators for the message type like this sample:

    public class ValidationBehaviour<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse> where TRequest : IRequest<TResponse>
    {
        private readonly IEnumerable<IValidator<TRequest>> _validators;
        public ValidationBehaviour(IEnumerable<IValidator<TRequest>> validators)
        {
            _validators = validators;
        }
        public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next)
        {
            if (_validators.Any())
            {
                var context = new ValidationContext<TRequest>(request);
                var validationResults = await Task.WhenAll(_validators.Select(v => v.ValidateAsync(context, cancellationToken)));
                var failures = validationResults.SelectMany(r => r.Errors).Where(f => f != null).ToList();
                if (failures.Count != 0)
                    throw new ValidationException(failures);
            }
            return await next();
        }
    }

I’m cheating a little bit here, because Wolverine has both an add on for incorporating Fluent Validation middleware for message handlers and a separate one for HTTP usage that relies on the ProblemDetails specification for relaying validation errors. Let’s still dive into how that works just to see how Wolverine really differs — and why we think those differences matter for performance and for keeping exception stack traces cleaner (don’t laugh, we really did design Wolverine quite purposely to avoid the really nasty kind of exception stack traces you get from many other middleware or “behavior” using frameworks).

Let’s say that you have a Wolverine.HTTP endpoint like so:

public record CreateCustomer
(
    string FirstName,
    string LastName,
    string PostalCode
)
{
    public class CreateCustomerValidator : AbstractValidator<CreateCustomer>
    {
        public CreateCustomerValidator()
        {
            RuleFor(x => x.FirstName).NotNull();
            RuleFor(x => x.LastName).NotNull();
            RuleFor(x => x.PostalCode).NotNull();
        }
    }
}

public static class CreateCustomerEndpoint
{
    [WolverinePost("/validate/customer")]
    public static string Post(CreateCustomer customer)
    {
        return "Got a new customer";
    }
}

In the application bootstrapping, I’ve added this option:

app.MapWolverineEndpoints(opts =>
{
    // more configuration for HTTP...

    // Opting into the Fluent Validation middleware from
    // Wolverine.Http.FluentValidation
    opts.UseFluentValidationProblemDetailMiddleware();
});

Just like with MediatR, you would need to register the Fluent Validation validator types in your IoC container as part of application bootstrapping. Now, here’s how Wolverine’s model is very different from MediatR’s pipeline behaviors. While MediatR is applying that ValidationBehaviour to each and every message handler in your application whether or not that message type actually has any registered validators, Wolverine is able to peek into the IoC configuration and “know” whether there are registered validators for any given message type. If there are any registered validators, Wolverine will utilize them in the code it generates to execute the HTTP endpoint method shown above for creating a customer. If there is only one validator, and that validator is registered as a Singleton scope in the IoC container, Wolverine generates this code:

    public class POST_validate_customer : Wolverine.Http.HttpHandler
    {
        private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
        private readonly Wolverine.Http.FluentValidation.IProblemDetailSource<WolverineWebApi.Validation.CreateCustomer> _problemDetailSource;
        private readonly FluentValidation.IValidator<WolverineWebApi.Validation.CreateCustomer> _validator;

        public POST_validate_customer(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Http.FluentValidation.IProblemDetailSource<WolverineWebApi.Validation.CreateCustomer> problemDetailSource, FluentValidation.IValidator<WolverineWebApi.Validation.CreateCustomer> validator) : base(wolverineHttpOptions)
        {
            _wolverineHttpOptions = wolverineHttpOptions;
            _problemDetailSource = problemDetailSource;
            _validator = validator;
        }



        public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
        {
            // Reading the request body via JSON deserialization
            var (customer, jsonContinue) = await ReadJsonAsync<WolverineWebApi.Validation.CreateCustomer>(httpContext);
            if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;
            
            // Execute FluentValidation validators
            var result1 = await Wolverine.Http.FluentValidation.Internals.FluentValidationHttpExecutor.ExecuteOne<WolverineWebApi.Validation.CreateCustomer>(_validator, _problemDetailSource, customer).ConfigureAwait(false);

            // Evaluate whether or not the execution should be stopped based on the IResult value
            if (result1 != null && !(result1 is Wolverine.Http.WolverineContinue))
            {
                await result1.ExecuteAsync(httpContext).ConfigureAwait(false);
                return;
            }


            
            // The actual HTTP request handler execution
            var result_of_Post = WolverineWebApi.Validation.ValidatedEndpoint.Post(customer);

            await WriteString(httpContext, result_of_Post);
        }

    }

The point here is that Wolverine is trying to generate the most efficient code possible based on what it can glean from the IoC container registrations and the signatures of the HTTP endpoint or message handler methods. The MediatR model effectively has to use wrappers and conditional logic at runtime.

Do note that Wolverine has built in middleware for logging, validation, and transactional middleware out of the box. Most of the custom middleware that folks are building for Wolverine are much simpler than the validation middleware I talked about in this guide.
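As one example of the built in transactional middleware, it can be applied by policy at bootstrapping time. A sketch of that opt in (see the Wolverine documentation for the full range of options):

```csharp
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Wrap transactional middleware around any handler that uses
        // a Marten session, so SaveChangesAsync() is called for you
        // after the handler succeeds
        opts.Policies.AutoApplyTransactions();
    })
    .StartAsync();
```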

Vertical Slice Architecture

MediatR is almost synonymous with the “Vertical Slice Architecture” (VSA) approach in .NET circles, but Wolverine arguably enables a much lower ceremony version of VSA. The typical approach you’ll see is folks delegating to MediatR commands or queries from an MVC Core Controller like this (adapted from a blog post):

public class AddToCartRequest : IRequest<Result>
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

public class AddToCartHandler : IRequestHandler<AddToCartRequest, Result>
{
    private readonly ICartService _cartService;

    public AddToCartHandler(ICartService cartService)
    {
        _cartService = cartService;
    }

    public async Task<Result> Handle(AddToCartRequest request, CancellationToken cancellationToken)
    {
        // Logic to add the product to the cart using the cart service
        bool addToCartResult = await _cartService.AddToCart(request.ProductId, request.Quantity);

        bool isAddToCartSuccessful = addToCartResult; // Check if adding the product to the cart was successful.
        return Result.SuccessIf(isAddToCartSuccessful, "Failed to add the product to the cart."); // Return failure if adding to cart fails.
    }
}
public class CartController : ControllerBase
{
    private readonly IMediator _mediator;

    public CartController(IMediator mediator)
    {
        _mediator = mediator;
    }

    [HttpPost]
    public async Task<IActionResult> AddToCart([FromBody] AddToCartRequest request)
    {
        var result = await _mediator.Send(request);

        if (result.IsSuccess)
        {
            return Ok("Product added to the cart successfully.");
        }
        else
        {
            return BadRequest(result.ErrorMessage);
        }
    }
}

While the introduction of MediatR probably is a valid way to sidestep the common code bloat from MVC Core Controllers, with Wolverine we’d recommend just using the Wolverine.HTTP mechanism for writing HTTP endpoints in a much lower ceremony way and ditching the “mediator” step altogether. Moreover, we’d even go so far as to drop repository and domain service layers and just put the functionality right into an HTTP endpoint method if that code isn’t going to be reused anywhere else in your application.

See Automatically Loading Entities to Method Parameters for some context around that [Entity] attribute usage

So something like this:

public static class AddToCartRequestEndpoint
{
    // Remember, we can do validation in middleware, or
    // even do a custom Validate() : ProblemDetails method
    // to act as a filter so the main method is the happy path
    
    [WolverinePost("/api/cart/add")]
    public static IStorageAction<Cart> Post(
        AddToCartRequest request, 
        
        // [Entity] tells Wolverine to load the Cart from storage
        [Entity] Cart cart)
    {
        return cart.TryAddRequest(request) ? Storage.Update(cart) : Storage.Nothing(cart);
    }
}

We of course believe that Wolverine is more optimized for Vertical Slice Architecture than MediatR or any other “mediator” tool by how Wolverine can reduce the number of moving parts, layers, and code ceremony.

IoC Usage

Just know that Wolverine has a very different relationship with your application’s IoC container than MediatR. Wolverine’s philosophy all along has been to keep the usage of IoC service location at runtime to a bare minimum. Instead, Wolverine wants to mostly use the IoC tool as a service registration model at bootstrapping time.

Summary

Wolverine has some overlap with MediatR, but it’s a quite different animal altogether with a very different approach and far more functionality, like the integrated transactional inbox/outbox support that’s important for building resilient server side systems. The Wolverine.HTTP mechanism cuts down the number of code artifacts compared to MediatR + MVC Core or Minimal API. Moreover, the way that you write Wolverine handlers, its integration with persistence tooling, and its middleware strategies can do much more to simplify your application code compared to just about anything else in the .NET ecosystem.

And lastly, let me just admit that I would be thrilled beyond belief if Wolverine had 1/100 the usage that MediatR already has by the end of this year. When you see a lot of posts about “why X is better than Y!” (why Golang is better than JavaScript!) it’s a clear sign that the “Y” in question is already a hugely successful project and the “X” isn’t there yet.

Introducing the JasperFx Software YouTube Channel

JasperFx Software is in business to help our clients make the most of the “Critter Stack” tools, Event Sourcing, CQRS, Event Driven Architecture, Test Automation, and server side .NET development in general. We’d be happy to talk with your company and see how we could help you be more successful!

Jeffry Gonzalez and I have kicked off what we plan to be a steady stream of content on the “Critter Stack” (Marten, Wolverine, and related tools) in the JasperFx Software YouTube channel.

In the first video, we started diving in on a new sample “Incident Service” that’s admittedly heavily in flight that shows how to use Marten with both Event Sourcing and as a Document Database over PostgreSQL and its integration with Wolverine as a higher level HTTP web service and asynchronous messaging platform.

We covered a lot, but here’s some of the highlights:

  • Hopefully showing off how easy it is to get started with Marten and Wolverine both, especially with Marten’s ability to lay down its own database schema as needed in its default mode. Later videos will show off how Wolverine does the same for any database schemas it needs and even message broker setup.
  • Utilizing Wolverine.HTTP for web services, how it enables a very low ceremony approach to “Vertical Slice Architecture,” and how it promotes testability without all the hassle of a complex Clean Architecture project structure or reams of abstractions scattered about in your code. It also leads to simpler code than the more common “MVC Core/Minimal API + MediatR” approach to Vertical Slice Architecture.
  • How Wolverine’s emphasis on pure function handlers leads to business or workflow logic being easy to test
  • Integration testing through the entire stack with Alba specifications inside of xUnit.Net test harnesses.
  • The Critter Stack’s support for command line diagnostics and development time tools, including a way to “unwind the magic” with Wolverine so it can show you exactly how it’s calling your code

Here’s the first video:

In the second video, we got into:

  • Wolverine’s “aggregate handler workflow” style of CQRS command handlers and how you can do that with easily testable pure functions
  • A little bit about Marten projection lifecycles and how that impacts performance or consistency
  • Using Marten’s ability to stream JSON data directly to HTTP for the most efficient possible “read side” query endpoints
  • Wolverine’s message scheduling capability
  • Marten’s utilization of PostgreSQL partitioning for maximizing scalability

I can’t say for sure where we’ll go next, but there will be a part 3 to this series in the next couple weeks and hopefully a series of shorter video content soon too! We’re certainly happy to take requests!

Wringing More Scalability out of Event Sourcing with the Critter Stack

JasperFx Software works with our customers to help wring the absolute best results out of their usage of the “Critter Stack.” We built several improvements in collaboration with our customers last year to both Marten and Wolverine specifically to improve scalability of large systems using Event Sourcing. If you’re concerned about whether or not your approach to Event Sourcing will actually scale, definitely look at the Critter Stack, and give JasperFx a shout for help making it all work.

Alright, you’re using Event Sourcing with the whole Critter Stack, and you want to get the best scalability possible in the face of an expected onslaught of incoming events. There are some “opt in” features, in Marten especially, that you can take advantage of to make your system go a little bit faster and handle bigger databases.

Using the near ubiquitous “Incident Service” example originally built by Oskar Dudycz, the “Critter Stack” community is building out a new version in the Wolverine codebase that, when (and if) finished, will hopefully show off an end to end example of an event sourced workflow.

In this application we’ll need to track common events for the workflow of a customer reported Incident like when it’s logged, categorised, collects notes, and hopefully gets closed. Coming into this, we think it’s going to get very heavy usage so we expect to have tons of events streaming into the database. We’ve also been told by our business partners that we only need to retain closed incidents in the active views of the user interface for a certain amount of time — but we never want to lose data permanently.

All that being said, let’s look at a few options we can enable in Marten right off the bat:

builder.Services.AddMarten(opts =>
{
    var connectionString = builder.Configuration.GetConnectionString("Marten");
    opts.Connection(connectionString);
    opts.DatabaseSchemaName = "incidents";
    
    // We're going to refer to this one soon
    opts.Projections.Snapshot<Incident>(SnapshotLifecycle.Inline);

    // Use PostgreSQL partitioning for hot/cold event storage
    opts.Events.UseArchivedStreamPartitioning = true;
    
    // Recent optimization that will specifically make command processing
    // with the Wolverine "aggregate handler workflow" a bit more efficient
    opts.Projections.UseIdentityMapForAggregates = true;

    // This is big, use this by default with all new development
    // Long story
    opts.Events.AppendMode = EventAppendMode.Quick;
})
    
// Another performance optimization if you're starting from
// scratch
.UseLightweightSessions()
    
// Run projections in the background
.AddAsyncDaemon(DaemonMode.HotCold)

// This adds configuration with Wolverine's transactional outbox and
// Marten middleware support to Wolverine
.IntegrateWithWolverine();

There are four options here I want to bring to your attention:

  1. UseLightweightSessions() directs Marten to use IDocumentSession sessions by default (what’s injected by your DI container) to avoid any performance overhead from identity map tracking in the session. Don’t use this of course if you really do want or need the identity map tracking.
  2. opts.Events.UseArchivedStreamPartitioning = true sets us up for Marten’s “hot/cold” event storage scheme using PostgreSQL native partitioning. More on this in the section on stream archiving below. Read more about this feature in the Marten documentation.
  3. Setting UseIdentityMapForAggregates = true opts into some recent performance optimizations for updating Inline aggregates through Marten’s FetchForWriting API. More detail on this here. Long story short, this makes Marten and Wolverine do less work and make fewer database round trips to support the aggregate handler workflow I’m going to demonstrate below.
  4. Events.AppendMode = EventAppendMode.Quick makes the event appending operations upon saving a Marten session a lot faster, like 50% faster in our testing. It also makes Marten’s “async daemon” feature work smoothly. The downside is that you lose access to some event metadata during Inline projections — which most people won’t care about, but again, we try not to break existing users.
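To make the third point a little more concrete, here is a rough sketch of what the aggregate handler workflow does under the covers with FetchForWriting (simplified; the real generated code also handles versioning and concurrency exceptions, and the categorise method name here is purely illustrative):

```csharp
public static async Task categorise(
    IDocumentSession session,
    Guid incidentId,
    CategoriseIncident command)
{
    // Loads the current Incident state, using the stored Inline
    // projection document plus any newer events, with optimistic
    // concurrency checks built in
    var stream = await session.Events.FetchForWriting<Incident>(incidentId);
    var incident = stream.Aggregate;

    // Append the new event and commit. With UseIdentityMapForAggregates,
    // Marten reuses this same Incident object when it updates the
    // Inline projection, avoiding a second database fetch
    stream.AppendOne(new IncidentCategorised(incident.Id, command.Category, command.CategorisedBy));
    await session.SaveChangesAsync();
}
```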

The “Aggregate Handler Workflow”

I have typically described this as Wolverine’s version of the Decider pattern, but I’m now saying that it is a significantly different approach that I believe will lead to better results in larger systems than the “Decider,” because it manages complexity better and handles several technical details that the “Decider” pattern does not. Plus, with the Wolverine “Aggregate Handler Workflow” you won’t end up with the humongous switch statements that a Decider function can easily become with any level of domain complexity.

Using Wolverine’s aggregate handler workflow, a command handler that may result in a new event being appended to Marten will look like this one for categorizing an incident:

public static class CategoriseIncidentEndpoint
{
    // This is Wolverine's form of "Railway Programming"
    // Wolverine will execute this before the main endpoint,
    // and stop all processing if the ProblemDetails is *not*
    // "NoProblems"
    public static ProblemDetails Validate(Incident incident)
    {
        return incident.Status == IncidentStatus.Closed 
            ? new ProblemDetails { Detail = "Incident is already closed" } 
            
            // All good, keep going!
            : WolverineContinue.NoProblems;
    }
    
    // This tells Wolverine that the first "return value" is NOT the response
    // body
    [EmptyResponse]
    [WolverinePost("/api/incidents/{incidentId:guid}/category")]
    public static IncidentCategorised Post(
        // the actual command
        CategoriseIncident command, 
        
        // Wolverine is generating code to look up the Incident aggregate
        // data for the event stream with this id
        [Aggregate("incidentId")] Incident incident)
    {
        // This is a simple case where we're just appending a single event to
        // the stream.
        return new IncidentCategorised(incident.Id, command.Category, command.CategorisedBy);
    }
}

The UseIdentityMapForAggregates = true flag optimizes the code above by allowing Marten to use the exact same Incident aggregate object that was originally passed into the Post() method above as the starting basis for updating the Incident data stored in the database. The application of the Inline projection to update the Incident will start with our originally fetched value, apply any new events on top of that, and update the Incident in the same transaction as the events being captured. Without that flag, Marten would have to fetch the Incident starting data from the database all over again when it applies the projection updates while committing the Marten unit of work containing the events.

There’s plenty of rocket science and sophisticated technique that can go into improving performance, but one simple thing that almost always pays off is not repeatedly fetching the exact same data from the database when you don’t have to — and that’s the point of the UseIdentityMapForAggregates optimization.

Hot/Cold Storage

Here’s an exciting, relatively new feature in Marten that was planned for years before JasperFx was able to build it for a client late last year. The UseArchivedStreamPartitioning flag sets up your Marten database for “hot/cold” storage:

Again, it might require some brain surgery to really improve performance sometimes, but an absolute no-brainer that’s frequently helpful is to just keep your transactional database tables as small and sprightly as possible over time by moving out obsolete or archived data — and that’s exactly what we’re going to do here.

When an Incident event stream is closed, we want to keep that Incident data shown in the user interface for 3 days, then we’d like all the data for that Incident to get archived. Here’s the sample command handler for the CloseIncident command:

public record CloseIncident(
    Guid ClosedBy,
    int Version
);

public static class CloseIncidentEndpoint
{
    [WolverinePost("/api/incidents/close/{id}")]
    public static (UpdatedAggregate, Events, OutgoingMessages) Handle(
        CloseIncident command, 
        [Aggregate]
        Incident incident)
    {
        /* More logic for later
        if (incident.Status is not IncidentStatus.ResolutionAcknowledgedByCustomer)
            throw new InvalidOperationException("Only an incident with an acknowledged resolution can be closed");

        if (incident.HasOutstandingResponseToCustomer)
            throw new InvalidOperationException("Cannot close an incident that has outstanding responses to the customer");
        */
        
        
        if (incident.Status == IncidentStatus.Closed)
        {
            return (new UpdatedAggregate(), [], []);
        }

        return (

            // Returning the latest view of
            // the Incident as the actual response body
            new UpdatedAggregate(),

            // New event to be appended to the Incident stream
            [new IncidentClosed(command.ClosedBy)],

            // Getting fancy here, telling Wolverine to schedule a 
            // command message for three days from now
            [new ArchiveIncident(incident.Id).DelayedFor(3.Days())]);
    }
}

The ArchiveIncident message is being published by this handler using Wolverine’s scheduled message capability so that it will be executed exactly 3 days from the current time (you could get fancier and target an exact time, say end of business on that day, if you wanted).

Note that even when doing the message scheduling, we can still use Wolverine’s cascading message feature. The point of doing this is to keep our handler a pure function that doesn’t have to invoke services, create side effects, or do anything that would force us into asynchronous methods and all of the inherent complexity and noise they inevitably cause.
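Because the handler is a pure function, it can be unit tested by calling it directly with no Marten or Wolverine infrastructure in play. Here's a hypothetical xUnit + Shouldly sketch; the Incident construction and the IncidentStatus values used are assumptions about the sample domain:

```csharp
using Shouldly;
using Xunit;

public class CloseIncidentEndpointTests
{
    [Fact]
    public void closing_an_open_incident_appends_event_and_schedules_archival()
    {
        // Assumed shape of the sample Incident aggregate
        var incident = new Incident { Id = Guid.NewGuid(), Status = IncidentStatus.Resolved };

        var (_, events, messages) = CloseIncidentEndpoint.Handle(
            new CloseIncident(Guid.NewGuid(), Version: 1), incident);

        // One new event to be appended to the stream
        events.ShouldHaveSingleItem().ShouldBeOfType<IncidentClosed>();

        // One scheduled ArchiveIncident message
        messages.ShouldHaveSingleItem();
    }

    [Fact]
    public void closing_an_already_closed_incident_is_a_no_op()
    {
        var incident = new Incident { Id = Guid.NewGuid(), Status = IncidentStatus.Closed };

        var (_, events, messages) = CloseIncidentEndpoint.Handle(
            new CloseIncident(Guid.NewGuid(), Version: 1), incident);

        events.ShouldBeEmpty();
        messages.ShouldBeEmpty();
    }
}
```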

The ArchiveIncident command handler might just be this:

public record ArchiveIncident(Guid IncidentId);

public static class ArchiveIncidentHandler
{
    // Just going to code this one pretty crudely.
    // I'm assuming that we have "auto-transactions"
    // turned on in Wolverine so we don't have to muck
    // with the asynchronous IDocumentSession.SaveChangesAsync()
    public static void Handle(ArchiveIncident command, IDocumentSession session)
    {
        session.Events.Append(command.IncidentId, new Archived("It's done baby!"));
        session.Delete<Incident>(command.IncidentId);
    }
}

When that command executes in three days’ time, it will delete the projected Incident document from the database and mark the event stream as archived, which will cause PostgreSQL to move that data into the “cold” archived storage.

To close the loop, all normal database operations in Marten specifically filter out archived data with a SQL filter so that they will always be querying directly against the much smaller “active” partition table.
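If you ever do need to reach into the cold side, Marten’s raw event querying lets you opt in explicitly. A sketch, assuming Marten’s MaybeArchived() and IsArchived event querying helpers:

```csharp
// Default: only active (non-archived) events are returned
var active = await session.Events.QueryAllRawEvents()
    .Where(x => x.StreamId == incidentId)
    .ToListAsync();

// Opt into both active and archived events
var everything = await session.Events.QueryAllRawEvents()
    .Where(x => x.StreamId == incidentId && x.MaybeArchived())
    .ToListAsync();

// Or query only the archived events
var archived = await session.Events.QueryAllRawEvents()
    .Where(x => x.StreamId == incidentId && x.IsArchived)
    .ToListAsync();
```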

To sum this up: if you use the event archival partitioning and are able to be aggressive about archiving event streams, you can hugely improve the performance of your event sourced application even after you’ve captured a huge number of events, because the actual tables that Marten reads from and writes to will stay relatively stable in size.

As the late, great Stuart Scott would have told us, that’s cooler than the other side of the pillow!

Why aren’t these all defaults?!?

It’s an imperfect world. Every one of the three flags I showed here either subtly changes underlying behavior or forces additive changes to your application database. The UseIdentityMapForAggregates flag has to be an “opt in” because using it will absolutely give unexpected results for Marten users who mutate the projected aggregate inside of their command handlers (basically anyone doing any type of AggregateRoot base class approach).

Likewise, Marten was originally built using a session with the somewhat more expensive identity map mechanics built in to mimic the commercial tool we were originally trying to replace. I’ve always regretted this decision, but once this has escaped into real systems, changing the underlying behavior absolutely breaks some existing code.

Lastly, introducing the hot/cold partitioning of the event & stream tables to an existing database will cause an expensive database migration, and we certainly don’t want to be inflicting that on unsuspecting users doing an upgrade.

It’s a lot of overhead and compromise, but we’ve chosen to maintain backward compatibility for existing users over enabling out of the box performance improvements.

But wait, there’s more!

Marten has been able to grow quite a bit in capability since I started JasperFx Software as a company to support it. Doing that has allowed us to partner with shops pushing the limits on Marten and Wolverine, and the feedback, collaboration, and yes, compensation have allowed us to push the Critter Stack’s capabilities a lot in the last 18 months.

Wolverine now has the ability to better spread the work of running projections and event subscriptions from Marten over an application cluster.

Sometime in the current quarter, we’re also going to be building and releasing a new “Stream Compacting” feature as another way to deal with archiving data from very long event streams. And yes, a lot of the Event Sourcing community will lecture you about how you should “keep your streams short,” and while there may be some truth to that, that advice is partially a reaction to less capable event sourcing tooling. We strive to make Marten & Wolverine more robust so you don’t have to be omniscient and perfect in your upfront modeling.

Why the Critter Stack is Good

JasperFx Software already has a strong track record in our short life of helping our customers be more successful using Event Sourcing, Event Driven Architecture, and Test Automation. Much of the content from these new guides came directly out of our client work. We’re certainly ready to partner with your shop as well!

I’ve had a chance the past two weeks to really buckle down and write more tutorials and guides for Wolverine by itself and the full “Critter Stack” combination with Marten. I’ll admit to being a little disappointed by the download numbers on Wolverine right now, but all that really means is that there’s a lot of untapped potential for growth!

If you do any work on the server side with .NET, or are looking for a technical platform to use for event sourcing, event driven architecture, web services, or asynchronous messaging, Wolverine is going to help you build systems that are resilient, easy to change, and highly testable without having to incur the code complexity common to Clean/Onion/Hexagonal Architecture approaches.

Please don’t make a direct comparison of Wolverine to MediatR as a straightforward “Mediator” tool, or to MassTransit or NServiceBus as an Asynchronous Messaging framework, or to MVC Core as a straight up HTTP service framework. Wolverine does far more than any of those other tools to help you write your actual application code.

On to the new guides for Wolverine:

  • Converting from MediatR – We’re getting more and more questions from users who are coming from MediatR to Wolverine to take advantage of Wolverine capabilities like a transactional outbox that MediatR lacks. Going much further though, this guide tries to explain how to first shift to Wolverine, some important features that Wolverine provides that MediatR does not, and how to lean into Wolverine to make your code a lot simpler and easier to test.
  • Vertical Slice Architecture – Wolverine has quite a bit of “special sauce” that makes it a unique fit for “Vertical Slice Architecture” (VSA). We believe that Wolverine does more to make a VSA coding style effective than any other server side tooling in the .NET ecosystem. If you haven’t looked at Wolverine recently, you’ll want to check this out because Wolverine just got even more ways to simplify code and improve testability in vertical slices without having to resort to the kind of artifact bloat that’s nearly inevitable with prescriptive Clean/Onion Architecture approaches.
  • Modular Monolith Architecture – I’ll freely admit that Wolverine was originally optimized for micro-services, and we’ve had to scramble a bit in the recent 3.6.0 release and today’s 3.7.0 release to improve Wolverine’s support for how folks are wanting to do asynchronous workflows between modules in a modular monolith approach. In this guide we’ll talk about how best to use Wolverine for modular monolith architectures, dealing with eventual consistency, database tooling usage, and test automation.
  • CQRS and Event Sourcing with Marten – Marten is already the most robust and most commonly used toolset for Event Sourcing in the .NET ecosystem. Combined with Wolverine to form the full “Critter Stack,” we think it is one of the most productive toolsets for building resilient and scalable systems using CQRS with Event Sourcing and this guide will show you how the Critter Stack gets that done. There’s also a big section on building integration testing harnesses for the Critter Stack with some of their test support. There are some YouTube videos coming soon that cover this same ground and using some of the same samples.
  • Railway Programming – Wolverine has some lightweight facilities for “Railway Programming” inside of message handlers or HTTP endpoints that can help code complex workflows with simpler individual steps — and do that without incurring loads of generics and custom “result” types. And for a bonus, this guide even shows you how Wolverine’s Railway Programming usage helps you generate OpenAPI metadata from type signatures without having to clutter up your code with noisy attributes to keep the ReST police off your back.

I personally need a break from writing documentation, but we’ll pop up soon with additional guides for:

  • Moving from NServiceBus or MassTransit to Wolverine
  • Interoperability with Wolverine

And on strictly the Marten side of things:

  • Complex workflows with Event Sourcing
  • Multi-Stream Projections

Wolverine 3.6: Modular Monolith and Vertical Slice Architecture Goodies

Wolverine 3.6 just went out tonight as a big release with bug fixes and quite a few significant features to improve Wolverine’s usability for modular monolith architectures and to further improve Wolverine’s already outstanding usability for vertical slice architecture.

Highlights:

  • New Persistence Helpers feature to make handler or HTTP endpoint code even cleaner
  • The new “Separated” option to make it easier to use multiple handlers for the same message type, which has been a source of friction for Wolverine users taking modular monolith approaches to event driven architecture
  • A huge update to the Message Routing documentation to reflect some new features and existing diagnostics

And of course, the full list of closed issues addressed by this release.

As a little sneak peek from the documentation, what if you could write HTTP endpoints as just a simple little pure function like this:

// Use "Id" as the default member
[WolverinePost("/api/todo/update")]
public static Update<Todo2> Handle(
    // The first argument is always the incoming message
    RenameTodo command, 
    
    // By using this attribute, we're telling Wolverine
    // to load the Todo entity from the configured
    // persistence of the app using a member on the
    // incoming message type
    [Entity] Todo2 todo)
{
    // Do your actual business logic
    todo.Name = command.Name;
    
    // Tell Wolverine that you want this entity
    // updated in persistence
    return Storage.Update(todo);
}

In the code above, the little method tries to load an entity from the application’s persistence tooling (EF Core, Marten, and RavenDb are supported so far) because of the [Entity] attribute, and the return value of Update<Todo2> will result in the Todo2 entity being updated by the same persistence tooling. That’s arguably an easy method to read and reason about, it was definitely easy to write, it’s easy to unit test, and didn’t require umpteen separate “Clean/Onion Architecture” projects and layers to get to testable code that isn’t directly coupled to infrastructure.