Critter Stack Roadmap Update for February

The last time I wrote about the Critter Stack / JasperFx roadmap, I was admittedly feeling a little conservative about big new releases and really just focused on stabilization. In the past week though, the rest of the Critter Stack Core Team decided it was time to get going on the next round of releases for what will be Marten 8.0 and Wolverine 4.0, so let’s get into the details.

Definitely in Scope:

  • Upgrade Marten (and Weasel/Wolverine) to Npgsql 9.0
  • Drop .NET 6/7 support in Marten and .NET 7 support in Wolverine. Both will have targets for .NET 8/9
  • Consolidation of supporting libraries. What is today JasperFx.Core, JasperFx.CodeGeneration, and Oakton are getting combined into a new library called JasperFx. That’s partially to simplify setup by reducing the number of dotnet add ... calls you need to do, but also to potentially streamline configuration that’s today duplicated between Marten & Wolverine.
  • Drop the synchronous APIs that are already marked as [Obsolete] in Marten’s API surface
  • “Stream Compacting” in Marten/Wolverine/CritterWatch. This feature is being done in partnership with a JasperFx client

In addition to that work, JasperFx Software is working hard on the forthcoming “Critter Watch” tooling that will be a management and monitoring console application for Wolverine and Marten, so there’s also a bit of work to help support Critter Watch through improvements to instrumentation and additional APIs that will land in Wolverine or Marten proper.

I’ll write much more about Critter Watch soon. Right now the MVP looks to be:

  1. A dead letter message explorer and management tool for Wolverine
  2. A view of your Critter Stack application configuration, which will be able to span multiple applications to better understand how messages flow throughout your greater ecosystem of services
  3. Viewing and managing asynchronous projections in Marten, which should include performance information, a dashboard explaining what projections or subscriptions are running, and the ability to trigger projection rebuilds, rewind subscriptions, and to pause/restart projections at runtime
  4. Displaying performance metrics about your Wolverine / Marten application by integrating with your OTel tooling (we’re initially thinking about PromQL integration here).

Maybe in Scope???

It may be that we go for a quick and relatively low impact Marten 8 / Wolverine 4 release, but here are the things we are considering for this round of releases and would love any feedback or requests you might have:

  • Overhaul the Marten projection support, with a particular emphasis on simplifying multi-stream projections. The core team & I did quite a bit of work on that in the 4th quarter of last year in the first attempt at Marten 8, and that work might feed into this effort as well. Part of the goal is to make it as easy as possible to use purely explicit code for projections as a ready alternative to the conventional Apply/Create method conventions. There’s an existing conversation in this issue.
  • Multi-tenancy support for EF Core with Wolverine commensurate with the existing Marten + Wolverine + multi-tenancy support. I really want to be expanding the Wolverine user base this year, and better EF Core support feels like a way to help achieve that.
  • Revisit the async daemon and add support for dependencies between asynchronous projections and/or the ability to “lock” the execution of 2 or more projections together. That’s 100% about scalability and throughput for folks who have particularly nasty complicated multi-stream projections. This would also hopefully be in partnership with a JasperFx client.
  • Revisiting the event serialization in Marten and its ability to support “downcasters” or “upcasters” for event versioning. There is an opportunity to ratchet up performance by moving to higher performance serializers like MessagePack or MemoryPack for the event serialization. You’d have to make that an opt-in model, probably support side-by-side JSON & whatever other serialization, and make sure folks know that opting for the better performance means losing the LINQ querying support for Marten events.
  • Potentially risky time sink: pull quite a bit of the event store support code in Marten today into a new shared library (like the IEvent model and maybe quite a bit of the projection subsystem) where that code could be shared between Marten and the long planned Sql Server-backed event store. And maybe even a CosmosDb integration.
  • Some improvements to Wolverine specifically for modular monolith usage discussed in more depth in the next section.

Wolverine 4 and Modular Monoliths

This is all related to this issue in the Wolverine backlog about mixing and matching databases in the same application. So, the modular monolith thing in Wolverine? It’s admittedly taken some serious work in the past 3-4 months to make Wolverine work the way the creative folks pushing the modular monolith concept have needed.

I think we’re in good shape with Wolverine message handler discovery and routing for modular monoliths, but there are some challenges around database integration, the transactional inbox/outbox support, and transactional middleware within a single application that’s potentially talking to multiple databases from a single process — and things get more complicated still when you throw in the possibility of using multi-tenancy through separated databases.

Wolverine already does fine with an architecture like the one below where you might have separate logical “modules” in your system that generally work against the same database, but use separate database schemas for isolation:

Where Wolverine doesn’t yet go (and I’m also not aware of any other .NET tooling that actually solves this) is the case where separate modules may be talking to completely separate physical databases as shown below:

The work I’m doing right now with “Critter Watch” touches on Wolverine’s message storage, so it’s somewhat convenient to try to improve Wolverine’s ability to allow you to mix and match different databases and even different database engines from one Wolverine application as part of this release.

Retry on Errors in Wolverine

Coaching my daughter’s 1st/2nd grade basketball team is a trip. I don’t know that the girls are necessarily learning much, but one thing I’d love for them to understand is to “follow your shot” and try for a rebound and a second shot if the ball doesn’t go in the first time. That’s the tortured metaphor/excuse for the marten playing basketball image for this post :-)

I’m currently helping a JasperFx Software client retrofit some concurrency protection into their existing system that uses Marten for event sourcing, utilizing Marten’s FetchForWriting API deep in the guts of a custom repository to prevent their system from being put into an inconsistent state.

Great, right! Except that there’s now a very real possibility that their application will throw Marten’s ConcurrencyException when an operation fails Marten’s optimistic concurrency checks.

Our next trick is building in some selective retries for the commands that could probably succeed if they just started over from the new system state after first triggering the concurrency check — and that’s an absolutely perfect use case for the built in Wolverine error handling policies!

This particular system was built around MediatR, which doesn’t have any built in error handling policies, so we’ll probably end up rigging up some kind of pipeline behavior or even a flat out decorator around MediatR. I did call out the error handling in Wolverine as an advantage in the Wolverine for MediatR Users guide.
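For illustration, here’s a minimal sketch of what that kind of MediatR pipeline behavior could look like: retry a handful of times when Marten’s ConcurrencyException surfaces. The behavior name, retry count, and backoff are hypothetical placeholders, not the client’s actual code:

public class RetryOnConcurrencyBehavior<TRequest, TResponse>
    : IPipelineBehavior<TRequest, TResponse> where TRequest : notnull
{
    public async Task<TResponse> Handle(
        TRequest request,
        RequestHandlerDelegate<TResponse> next,
        CancellationToken cancellationToken)
    {
        // Hypothetical policy: try up to 3 times, starting over from
        // the latest system state after each optimistic concurrency failure
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return await next();
            }
            catch (ConcurrencyException) when (attempt < 3)
            {
                // Simple progressive cooldown before the next attempt
                await Task.Delay(50 * attempt, cancellationToken);
            }
        }
    }
}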

In the ubiquitous “Incident Service” example we use in documentation here and there, we have a message handler for trying to automatically assign a priority to an in flight customer reported “Incident” like this:

public static class TryAssignPriorityHandler
{
    // Wolverine will call this method before the "real" Handler method,
    // and it can "magically" connect that the Customer object should be delivered
    // to the Handle() method at runtime
    public static Task<Customer?> LoadAsync(Incident details, IDocumentSession session)
    {
        return session.LoadAsync<Customer>(details.CustomerId);
    }

    // There's some database lookup at runtime, but I've isolated that above, so the
    // behavioral logic that "decides" what to do is a pure function below. 
    [AggregateHandler]
    public static (Events, OutgoingMessages) Handle(
        TryAssignPriority command, 
        Incident details,
        Customer customer)
    {
        var events = new Events();
        var messages = new OutgoingMessages();

        if (details.Category.HasValue && customer.Priorities.TryGetValue(details.Category.Value, out var priority))
        {
            if (details.Priority != priority)
            {
                events.Add(new IncidentPrioritised(priority, command.UserId));

                if (priority == IncidentPriority.Critical)
                {
                    messages.Add(new RingAllTheAlarms(command.IncidentId));
                }
            }
        }

        return (events, messages);
    }
}

The handler above depends on the current state of the Incident in the system, and it’s somewhat possible that two or more people or transactions are happily trying to modify the same Incident at the same time. The Wolverine aggregate handler workflow triggered by the [AggregateHandler] usage up above happily builds in optimistic concurrency protection such that an attempt to save the pending transaction will throw an exception if something else has modified that Incident between the command starting and the call to persist all changes.

Now, depending on the command, you may want to either:

  1. Immediately discard the command message because it’s now obsolete
  2. Just have the command message retried from scratch, either immediately, with a little delay, or even scheduled for a much later time

Wolverine will happily do that for you. While you can set global error handling rules, you can also fine tune the error handling for specific message handlers, exception types, and even exception details as shown below:

public static class TryAssignPriorityHandler
{
    public static void Configure(HandlerChain chain)
    {
        // It's a fall through, so you would only do *one*
        // of these options!

        // It can never succeed, so just discard it instead of wasting
        // time on retries or dead letter queues
        chain.OnException<ConcurrencyException>().Discard();

        // Do some selective retries with a progressive wait
        // in between tries, and if that fails, move it to the dead
        // letter storage
        chain.OnException<ConcurrencyException>()
            .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds())
            .Then
            .MoveToErrorQueue();
        
        // Or throw it away after a few tries...
        chain.OnException<ConcurrencyException>()
            .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds())
            .Then
            .Discard();
    }
    
    // rest of the handler code...
}

If you’re processing messages through the asynchronous messaging in Wolverine — and this includes local, in memory queues too — you have the full set of error policies. If you’re consuming Wolverine as a “Mediator” tool, delegating to Wolverine like so:

public static async Task delegate_to_wolverine(IMessageBus bus, TryAssignPriority command)
{
    await bus.InvokeAsync(command);
}

Wolverine can still use any “Retry” or “Discard” error handling policies, and if Wolverine does a retry, it effectively starts from a completely clean slate so you don’t have to worry about any dirty state from scoped services used by the initial failed attempt to process the message.
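For completeness, the same rules can be registered globally instead of per handler chain. Here’s a minimal sketch at bootstrapping time, assuming the fluent OnException API shown above is also exposed through opts.Policies (double check against the Wolverine error handling docs):

var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    // Applies to every message handler unless a specific handler
    // chain overrides it with its own Configure() rules
    opts.Policies.OnException<ConcurrencyException>()
        .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds())
        .Then
        .MoveToErrorQueue();
});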

Summary

Wolverine puts a ton of emphasis on allowing our users to build low ceremony code that’s highly testable, but we also aren’t compromising on resiliency or observability. While serving as a “mediator” wasn’t really part of our original hopes and dreams for Wolverine, it does that job quite credibly and even brings some of the error handling resiliency that you may be used to in asynchronous messaging frameworks but isn’t always a feature of smaller “mediator” tools.

Introducing the JasperFx Software YouTube Channel

JasperFx Software is in business to help our clients make the most of the “Critter Stack” tools, Event Sourcing, CQRS, Event Driven Architecture, Test Automation, and server side .NET development in general. We’d be happy to talk with your company and see how we could help you be more successful!

Jeffry Gonzalez and I have kicked off what we plan to be a steady stream of content on the “Critter Stack” (Marten, Wolverine, and related tools) in the JasperFx Software YouTube channel.

In the first video, we started diving into a new sample “Incident Service” that’s admittedly still heavily in flight. It shows how to use Marten both for Event Sourcing and as a Document Database over PostgreSQL, along with its integration with Wolverine as a higher level HTTP web service and asynchronous messaging platform.

We covered a lot, but here’s some of the highlights:

  • Hopefully showing off how easy it is to get started with Marten and Wolverine both, especially with Marten’s ability to lay down its own database schema as needed in its default mode. Later videos will show off how Wolverine does the same for any database schemas it needs and even message broker setup.
  • Utilizing Wolverine.HTTP for web services and how it can be used for a very low code ceremony approach for “Vertical Slice Architecture” and how it promotes testability in code without all the hassle of a complex Clean Architecture project structure or reams of abstractions scattered about in your code. It also leads to simpler code than the more common “MVC Core/Minimal API + MediatR” approach to Vertical Slice Architecture.
  • How Wolverine’s emphasis on pure function handlers leads to business or workflow logic being easy to test
  • Integration testing through the entire stack with Alba specifications inside of xUnit.Net test harnesses.
  • The Critter Stack’s support for command line diagnostics and development time tools, including a way to “unwind the magic” with Wolverine so it can show you exactly how it’s calling your code

Here’s the first video:

In the second video, we got into:

  • Wolverine’s “aggregate handler workflow” style of CQRS command handlers and how you can do that with easily testable pure functions
  • A little bit about Marten projection lifecycles and how that impacts performance or consistency
  • Using Marten’s ability to stream JSON data directly to HTTP for the most efficient possible “read side” query endpoints
  • Wolverine’s message scheduling capability
  • Marten’s utilization of PostgreSQL partitioning for maximizing scalability

I can’t say for sure where we’ll go next, but there will be a part 3 to this series in the next couple weeks and hopefully a series of shorter video content soon too! We’re certainly happy to take requests!

Wringing More Scalability out of Event Sourcing with the Critter Stack

JasperFx Software works with our customers to help wring the absolute best results out of their usage of the “Critter Stack.” We built several improvements in collaboration with our customers last year to both Marten and Wolverine specifically to improve the scalability of large systems using Event Sourcing. If you’re concerned about whether or not your approach to Event Sourcing will actually scale, definitely look at the Critter Stack, and give JasperFx a shout for help making it all work.

Alright, you’re using Event Sourcing with the whole Critter Stack, and you want to get the best scalability possible in the face of an expected onslaught of incoming events. There’s some “opt in” features in Marten especially that you can take advantage of to get your system going a little bit faster and handle bigger databases.

Using the near ubiquitous “Incident Service” example originally built by Oskar Dudycz, the “Critter Stack” community is building out a new version in the Wolverine codebase that, when (and if) finished, will hopefully show off an end to end example of using an event sourced workflow.

In this application we’ll need to track common events for the workflow of a customer reported Incident like when it’s logged, categorised, collects notes, and hopefully gets closed. Coming into this, we think it’s going to get very heavy usage so we expect to have tons of events streaming into the database. We’ve also been told by our business partners that we only need to retain closed incidents in the active views of the user interface for a certain amount of time — but we never want to lose data permanently.
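To make that concrete, the workflow implies a family of events roughly like the sketch below. These record shapes are illustrative stand-ins (a few of them appear in handler code elsewhere in this post); the real sample may define them differently:

public record IncidentLogged(Guid CustomerId, string Description, Guid LoggedBy);
public record IncidentCategorised(Guid IncidentId, IncidentCategory Category, Guid CategorisedBy);
public record IncidentPrioritised(IncidentPriority Priority, Guid UserId);
public record IncidentClosed(Guid ClosedBy);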

All that being said, let’s look at a few options we can enable in Marten right off the bat:

builder.Services.AddMarten(opts =>
{
    var connectionString = builder.Configuration.GetConnectionString("Marten");
    opts.Connection(connectionString);
    opts.DatabaseSchemaName = "incidents";
    
    // We're going to refer to this one soon
    opts.Projections.Snapshot<Incident>(SnapshotLifecycle.Inline);

    // Use PostgreSQL partitioning for hot/cold event storage
    opts.Events.UseArchivedStreamPartitioning = true;
    
    // Recent optimization that will specifically make command processing
    // with the Wolverine "aggregate handler workflow" a bit more efficient
    opts.Projections.UseIdentityMapForAggregates = true;

    // This is big, use this by default with all new development
    // Long story
    opts.Events.AppendMode = EventAppendMode.Quick;
})
    
// Another performance optimization if you're starting from
// scratch
.UseLightweightSessions()
    
// Run projections in the background
.AddAsyncDaemon(DaemonMode.HotCold)

// This adds configuration with Wolverine's transactional outbox and
// Marten middleware support to Wolverine
.IntegrateWithWolverine();

There are four options here I want to bring to your attention:

  1. UseLightweightSessions() directs Marten to use lightweight IDocumentSession sessions by default (what’s injected by your DI container) to avoid any performance overhead from identity map tracking in the session. Don’t use this of course if you really do want or need the identity map tracking.
  2. opts.Events.UseArchivedStreamPartitioning = true sets us up for Marten’s “hot/cold” event storage scheme using PostgreSQL native partitioning. More on this in the section on stream archiving below. Read more about this feature in the Marten documentation.
  3. Setting UseIdentityMapForAggregates = true opts into some recent performance optimizations for updating Inline aggregates through Marten’s FetchForWriting API. More detail on this here. Long story short, this makes Marten and Wolverine do less work and make fewer database round trips to support the aggregate handler workflow I’m going to demonstrate below.
  4. Events.AppendMode = EventAppendMode.Quick makes the event appending operations upon saving a Marten session a lot faster, like 50% faster in our testing. It also makes Marten’s “async daemon” feature work more smoothly. The downside is that you lose access to some event metadata during Inline projections — which most people won’t care about, but again, we try not to break existing users.

The “Aggregate Handler Workflow”

I have typically described this as Wolverine’s version of the Decider Pattern, but no more: I’m now saying that this is a significantly different approach that I believe will lead to better results in larger systems than the “Decider” in that it manages complexity better and handles several technical details that the “Decider” pattern does not. Plus, with the Wolverine “Aggregate Handler Workflow” you won’t end up with the humongous switch statements that a Decider function can easily become with any level of domain complexity.

Using Wolverine’s aggregate handler workflow, a command handler that may result in a new event being appended to Marten will look like this one for categorizing an incident:

public static class CategoriseIncidentEndpoint
{
    // This is Wolverine's form of "Railway Programming"
    // Wolverine will execute this before the main endpoint,
    // and stop all processing if the ProblemDetails is *not*
    // "NoProblems"
    public static ProblemDetails Validate(Incident incident)
    {
        return incident.Status == IncidentStatus.Closed 
            ? new ProblemDetails { Detail = "Incident is already closed" } 
            
            // All good, keep going!
            : WolverineContinue.NoProblems;
    }
    
    // This tells Wolverine that the first "return value" is NOT the response
    // body
    [EmptyResponse]
    [WolverinePost("/api/incidents/{incidentId:guid}/category")]
    public static IncidentCategorised Post(
        // the actual command
        CategoriseIncident command, 
        
        // Wolverine is generating code to look up the Incident aggregate
        // data for the event stream with this id
        [Aggregate("incidentId")] Incident incident)
    {
        // This is a simple case where we're just appending a single event to
        // the stream.
        return new IncidentCategorised(incident.Id, command.Category, command.CategorisedBy);
    }
}

The UseIdentityMapForAggregates = true flag optimizes the code above by allowing Marten to use the exact same Incident aggregate object that was originally passed into the Post() method above as the starting basis for updating the Incident data stored in the database. The application of the Inline projection to update the Incident will start with our originally fetched value, apply any new events on top of that, and update the Incident in the same transaction as the events being captured. Without that flag, Marten would have to fetch the Incident starting data from the database all over again when it applies the projection updates while committing the Marten unit of work containing the events.

There’s plenty of rocket science and sophisticated techniques to improving performance, but one simple thing that almost always works out is not repetitively fetching the exact same data from the database if you don’t have to — and that’s the point of the UseIdentityMapForAggregates optimization.

Hot/Cold Storage

Here’s an exciting, relatively new feature in Marten that was planned for years before JasperFx was able to build it for a client late last year. The UseArchivedStreamPartitioning flag sets up your Marten database for “hot/cold storage”:

Again, it might require some brain surgery to really improve performance sometimes, but an absolute no-brainer that’s frequently helpful is to just keep your transactional database tables as small and sprightly as possible over time by moving out obsolete or archived data — and that’s exactly what we’re going to do here.

When an Incident event stream is closed, we want to keep that Incident data shown in the user interface for 3 days, then we’d like all the data for that Incident to get archived. Here’s the sample command handler for the CloseIncident command:

public record CloseIncident(
    Guid ClosedBy,
    int Version
);

public static class CloseIncidentEndpoint
{
    [WolverinePost("/api/incidents/close/{id}")]
    public static (UpdatedAggregate, Events, OutgoingMessages) Handle(
        CloseIncident command, 
        [Aggregate]
        Incident incident)
    {
        /* More logic for later
        if (current.Status is not IncidentStatus.ResolutionAcknowledgedByCustomer)
               throw new InvalidOperationException("Only incident with acknowledged resolution can be closed");

           if (current.HasOutstandingResponseToCustomer)
               throw new InvalidOperationException("Cannot close incident that has outstanding responses to customer");

         */
        
        
        if (incident.Status == IncidentStatus.Closed)
        {
            return (new UpdatedAggregate(), [], []);
        }

        return (

            // Returning the latest view of
            // the Incident as the actual response body
            new UpdatedAggregate(),

            // New event to be appended to the Incident stream
            [new IncidentClosed(command.ClosedBy)],

            // Getting fancy here, telling Wolverine to schedule a 
            // command message for three days from now
            [new ArchiveIncident(incident.Id).DelayedFor(3.Days())]);
    }
}

The ArchiveIncident message is being published by this handler using Wolverine’s scheduled message capability so that it will be executed exactly 3 days from the current time (you could get fancier and set an exact time like end of business on that day if you wanted; there’s a sketch of that below).

Note that even when doing the message scheduling, we can still use Wolverine’s cascading message feature. The point of doing this is to keep our handler a pure function that doesn’t have to invoke services, create side effects, or do anything that would force us into asynchronous methods and all of the inherent complexity and noise that inevitably causes.
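If you did want that fancier “end of business day” variant, a sketch might look like this, assuming Wolverine’s ScheduledAt() companion to DelayedFor() for cascading messages:

public static object ScheduleArchival(Incident incident)
{
    // Hypothetical: aim for 5 PM local time, three days from now
    var targetDay = DateTimeOffset.Now.AddDays(3);
    var endOfBusiness = new DateTimeOffset(
        targetDay.Year, targetDay.Month, targetDay.Day,
        17, 0, 0, targetDay.Offset);

    // Returned as a cascading message, just like the DelayedFor() usage above
    return new ArchiveIncident(incident.Id).ScheduledAt(endOfBusiness);
}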

The ArchiveIncident command handler might just be this:

public record ArchiveIncident(Guid IncidentId);

public static class ArchiveIncidentHandler
{
    // Just going to code this one pretty crudely
    // I'm assuming that we have "auto-transactions"
    // turned on in Wolverine so we don't have to muck
    // with the asynchronous IDocumentSession.SaveChangesAsync()
    public static void Handle(ArchiveIncident command, IDocumentSession session)
    {
        session.Events.Append(command.IncidentId, new Archived("It's done baby!"));
        session.Delete<Incident>(command.IncidentId);
    }
}

When that command executes in three days’ time, it will delete the projected Incident document from the database and mark the event stream as archived, which will cause PostgreSQL to move that data into the “cold” archived storage.

To close the loop, all normal database operations in Marten specifically filter out archived data with a SQL filter so that they will always be querying directly against the much smaller “active” partition table.
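If you ever do need to reach into the cold storage, Marten lets you opt back in explicitly when querying the raw events. A quick sketch using Marten’s MaybeArchived() and IsArchived filters on the event querying API:

public static async Task query_archived_events(IQuerySession session, Guid incidentId)
{
    // Query across both the active and the archived partitions
    var all = await session.Events
        .QueryAllRawEvents()
        .Where(x => x.MaybeArchived() && x.StreamId == incidentId)
        .ToListAsync();

    // Or fetch only the archived events for this stream
    var archived = await session.Events
        .QueryAllRawEvents()
        .Where(x => x.IsArchived && x.StreamId == incidentId)
        .ToListAsync();
}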

To sum this up, if you use the event archival partitioning and are able to be aggressive about archiving event streams, you can hugely improve the performance of your event sourced application even after you’ve captured a huge number of events, because the actual table that Marten is reading and writing from will stay relatively stable in size.

As the late, great Stuart Scott would have told us, that’s cooler than the other side of the pillow!

Why aren’t these all defaults?!?

It’s an imperfect world. Every one of the flags I showed here either subtly changes underlying behavior or forces additive changes to your application database. The UseIdentityMapForAggregates flag has to be an “opt in” because using that will absolutely give unexpected results for Marten users who mutate the projected aggregate inside of their command handlers (basically anyone doing any type of AggregateRoot base class approach).

Likewise, Marten was originally built using a session with the somewhat more expensive identity map mechanics built in to mimic the commercial tool we were originally trying to replace. I’ve always regretted this decision, but once this has escaped into real systems, changing the underlying behavior absolutely breaks some existing code.

Lastly, introducing the hot/cold partitioning of the event & stream tables to an existing database will cause an expensive database migration, and we certainly don’t want to be inflicting that on unsuspecting users doing an upgrade.

It’s a lot of overhead and compromise, but we’ve chosen to maintain backward compatibility for existing users over enabling out of the box performance improvements.

But wait, there’s more!

Marten has been able to grow quite a bit in capability since I started JasperFx Software as a company to support it. Doing that has allowed us to partner with shops pushing the limits on Marten and Wolverine, and the feedback, collaboration, and yes, compensation has allowed us to push the Critter Stack’s capabilities a lot in the last 18 months.

Wolverine now has the ability to better spread the work of running projections and event subscriptions from Marten over an application cluster.

Sometime in the current quarter, we’re also going to be building and releasing a new “Stream Compacting” feature as another way to deal with archiving data from very long event streams. And yes, a lot of the Event Sourcing community will lecture you about how you should “keep your streams short,” and while there may be some truth to that, that advice partially stems from working around less capable event sourcing technology. We strive to make Marten & Wolverine more robust so you don’t have to be omniscient and perfect in your upfront modeling.

Why the Critter Stack is Good

JasperFx Software already has a strong track record in our short life of helping our customers be more successful using Event Sourcing, Event Driven Architecture, and Test Automation. Much of the content from these new guides came directly out of our client work. We’re certainly ready to partner with your shop as well!

I’ve had a chance the past two weeks to really buckle down and write more tutorials and guides for Wolverine by itself and the full “Critter Stack” combination with Marten. I’ll admit to being a little disappointed by the download numbers on Wolverine right now, but all that really means is that there’s a lot of untapped potential for growth!

If you do any work on the server side with .NET, or are looking for a technical platform to use for event sourcing, event driven architecture, web services, or asynchronous messaging, Wolverine is going to help you build systems that are resilient, easy to change, and highly testable without having to incur the code complexity common to Clean/Onion/Hexagonal Architecture approaches.

Please don’t make a direct comparison of Wolverine to MediatR as a straightforward “Mediator” tool, or to MassTransit or NServiceBus as an Asynchronous Messaging framework, or to MVC Core as a straight up HTTP service framework. Wolverine does far more than any of those other tools to help you write your actual application code.

On to the new guides for Wolverine:

  • Converting from MediatR – We’re getting more and more questions from users who are coming from MediatR to Wolverine to take advantage of Wolverine capabilities like a transactional outbox that MediatR lacks. Going much further though, this guide tries to explain how to first shift to Wolverine, some important features that Wolverine provides that MediatR does not, and how to lean into Wolverine to make your code a lot simpler and easier to test.
  • Vertical Slice Architecture – Wolverine has quite a bit of “special sauce” that makes it a unique fit for “Vertical Slice Architecture” (VSA). We believe that Wolverine does more to make a VSA coding style effective than any other server side tooling in the .NET ecosystem. If you haven’t looked at Wolverine recently, you’ll want to check this out because Wolverine just got even more ways to simplify code and improve testability in vertical slices without having to resort to the kind of artifact bloat that’s nearly inevitable with prescriptive Clean/Onion Architecture approaches.
  • Modular Monolith Architecture – I’ll freely admit that Wolverine was originally optimized for micro-services, and we’ve had to scramble a bit in the recent 3.6.0 release and today’s 3.7.0 release to improve Wolverine’s support for how folks are wanting to do asynchronous workflows between modules in a modular monolith approach. In this guide we’ll talk about how best to use Wolverine for modular monolith architectures, dealing with eventual consistency, database tooling usage, and test automation.
  • CQRS and Event Sourcing with Marten – Marten is already the most robust and most commonly used toolset for Event Sourcing in the .NET ecosystem. Combined with Wolverine to form the full “Critter Stack,” we think it is one of the most productive toolsets for building resilient and scalable systems using CQRS with Event Sourcing and this guide will show you how the Critter Stack gets that done. There’s also a big section on building integration testing harnesses for the Critter Stack with some of their test support. There are some YouTube videos coming soon that cover this same ground and using some of the same samples.
  • Railway Programming – Wolverine has some lightweight facilities for “Railway Programming” inside of message handlers or HTTP endpoints that can help code complex workflows with simpler individual steps — and do that without incurring loads of generics and custom “result” types. And for a bonus, this guide even shows you how Wolverine’s Railway Programming usage helps you generate OpenAPI metadata from type signatures without having to clutter up your code with noisy attributes to keep the ReST police off your back.

I personally need a break from writing documentation, but we’ll pop up soon with additional guides for:

  • Moving from NServiceBus or MassTransit to Wolverine
  • Interoperability with Wolverine

And on strictly the Marten side of things:

  • Complex workflows with Event Sourcing
  • Multi-Stream Projections

Critter Stack Roadmap for 2025

A belated Happy New Year’s to everybody!

The “Critter Stack” had a huge 2024, and I listed off some of the highlights of the improvements we made in Critter Stack Year in Review for 2024. For 2025, we’ve reordered our priorities from what I wrote last summer. I think we might genuinely focus more on sample applications, tutorials, and videos early this year than we do on coding new features.

There’s also a separate post on JasperFx Software in 2025. Please do remember that JasperFx Software is available for either ongoing support contracts for Marten and/or Wolverine and consulting engagements to help you wring the most possible value out of the tools — or to just help you with any old server side .NET architecture you have.

Marten

At this point, I believe that Marten is far and away the most robust and most productive tooling for Event Sourcing in the .NET ecosystem. Moreover, if you believe NuGet download numbers, it’s also the most heavily used Event Sourcing tooling in .NET. I think most of the potential growth for Marten this year will simply be a result of developers hopefully being more open to using Event Sourcing as that technique becomes better known. I don’t have hard numbers to back this up, but my feeling is that Marten’s main competitor is shops choosing to roll their own Event Sourcing frameworks in house rather than any other specific tool.

  • I think we’re putting off the planned Marten 8.0 release for now. Instead, we’ll mostly be focused on dealing with whatever issues come up from our users and JasperFx clients with Marten 7 for the time being.
  • Babu is working on adding a formal “Crypto Shredding” feature to Marten 7
  • More sample applications and matching tutorials for Marten
  • Possibly adding a “Marten Events to EF Core” projection model?
  • Formal support for PostgreSQL PostGIS spatial data? I don’t know what that means yet though
  • When we’re able to reconsider Marten 8 this year, that will include:
    • A reorganization of the JasperFx building blocks to remove duplication between Marten, Wolverine, and other tools
    • Streamlining the Projection API
    • Yet more scalability and performance improvements to the async daemon. There’s some potential features that we’re discussing with JasperFx clients that might drive this work

After the insane pace of Marten changes we made last year, I see Marten development and the torrid pace of releases (hopefully) slowing quite a bit in 2025.

Wolverine

Wolverine doesn’t yet have anywhere near the usage of Marten and exists in a much more crowded tooling space to boot. I’m hopeful that we can greatly increase Wolverine usage in 2025 by further differentiating it from its competitor tools by focusing on how Wolverine allows teams to write backend systems with much lower ceremony code without sacrificing testability, robustness, or maintainability.

We’re shelving any thoughts about a Wolverine 4.0 release early this year, but that’s opened the flood gates for planned enhancements to Wolverine 3.*:

  • Wolverine 3.6 is heavily in flight for release this month, and will be a pretty large release bringing some needed improvements for Wolverine within “Modular Monolith” usage, yet more special sauce for lower ceremony “Vertical Slice Architecture” usage, enhancements to the “aggregate handler workflow” integration with Marten, and improved EF Core integration
  • Multi-Tenancy support for EF Core in line with what Wolverine can already do with its Marten integration
  • CosmosDb integration for Transactional Inbox/Outbox support, saga storage, transactional middleware
  • More options for runtime message routing
  • Authoring more sample applications to show off how Wolverine allows for a different coding model than other messaging or mediator or HTTP endpoint tools

I think there’s a lot of untapped potential for Wolverine, and I’ll personally be focused on growing its usage in the community this year. I’m hoping the better EF Core integration, having more database options, and maybe even yet more messaging options can help us grow.

I honestly don’t know what is going to happen with Wolverine & Aspire. Aspire doesn’t really play nicely with frameworks like Wolverine right now, and I think it would take custom Wolverine/Aspire adapter libraries to get a truly good experience. My strong preference right now is to just use Docker Compose for local development, but it’s Microsoft’s world and folks like me building OSS tools just have to live in it.

Ermine & Other New Critters

Sigh, “Ermine” is the code name for a long planned port of Marten’s event sourcing functionality to Sql Server. I would still love to see this happen in 2025, but it’s going to be pushed off for a little bit. With plenty of input from other Marten contributors, I’ve done some preliminary work trying to centralize plenty of Marten’s event sourcing internals to a potentially shared assembly.

We’ve also at least considered extending Marten’s style of event sourcing to other databases, with CosmosDb, RavenDb, DynamoDb, SQLite, and Oracle (people still use it apparently) being kicked around as options.

“Critter Watch”

This is really a JasperFx Software initiative to create a commercial tool that will be a dedicated management portal and performance monitoring tool (meant to be used in conjunction with Grafana/Prometheus/et al) for the “Critter Stack”. I’ll share concrete details of this when there are some, but Babu & I plan to be working in earnest on “Critter Watch” in the 1st quarter.

Note about Blogging

I’m planning to blog much less in the coming year and focus more on either writing more robust tutorials or samples within technical documentation sites and finally joining the modern world and moving to YouTube or Twitch video content creation.

Marten V7.35 Drops for a Little Post Christmas Cheer

And of course, JasperFx Software is available for any kind of consulting engagement around the Critter Stack tools, event sourcing, event driven architecture, test automation, or just any kind of server side .NET architecture.

Absurdly enough, the Marten community made one major release (7.0 was a big change) and 35 different releases of new functionality. Some significant, some just including a new tactical convenience method or two. I think Marten ends the 2024 calendar year with the 7.35.0 release today.

The big highlight is some work for a JasperFx Software client who needs to run some multi-stream projections asynchronously (as one probably should), but needs their user interface, in some scenarios, to show the very latest information. That’s now possible with the QueryForNonStaleData<T>() API shown below:

var builder = Host.CreateApplicationBuilder();
builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));
    opts.Projections.Add<TripProjection>(ProjectionLifecycle.Async);
}).AddAsyncDaemon(DaemonMode.HotCold);

using var host = builder.Build();
await host.StartAsync();

// DocumentStore() is an extension method in Marten just
// as a convenience method for test automation
await using var session = host.DocumentStore().LightweightSession();

// This query operation will first "wait" for the asynchronous projection building the
// Trip aggregate document to catch up to at least the highest event sequence number assigned
// at the time this method is called
var latest = await session.QueryForNonStaleData<Trip>(5.Seconds())
    .OrderByDescending(x => x.Started)
    .Take(10)
    .ToListAsync();

Of course, there is a non-zero risk of that operation timing out, so it’s not a silver bullet and you’ll need to be aware of that in your usage, but hey, it’s a way around needing to adopt eventual consistency while also providing a good user experience in your client by not appearing to have lost data.
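One defensive pattern here is to fall back to a regular (possibly stale) query if the wait times out. A minimal sketch, assuming the timeout surfaces as a TimeoutException (check what your Marten version actually throws):

public static async Task<IReadOnlyList<Trip>> latest_trips(IDocumentSession session)
{
    try
    {
        // Wait (up to 5 seconds) for the async projection to catch up
        return await session.QueryForNonStaleData<Trip>(5.Seconds())
            .OrderByDescending(x => x.Started)
            .Take(10)
            .ToListAsync();
    }
    catch (TimeoutException)
    {
        // The projection is lagging badly, so serve the possibly
        // stale view rather than failing the request outright
        return await session.Query<Trip>()
            .OrderByDescending(x => x.Started)
            .Take(10)
            .ToListAsync();
    }
}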

See the documentation on this feature for more information.

The highlight for me personally is that as of this second, the open issue count for Marten on GitHub is sitting at 37 (bugs, enhancement requests, 8.0 planning, documentation TODOs), which is the lowest that number has been in 7 or 8 years. Feels good.

Critter Stack Year in Review for 2024

Just for fun, here’s what I wrote as the My Technical Plans and Aspirations for 2024 detailing what I had hoped to accomplish this year.

While there’s still just a handful of technical deliverables I’m trying to get out in this calendar year, I’m admittedly running on mental fumes rolling into the holiday season. Thinking back about how much was delivered for the “Critter Stack” (Marten, Weasel, and Wolverine) this year is making me feel a lot better about giving myself some mental recharge time during the holidays. Happily for me, most of the advances in the Critter Stack this year were either from the community (i.e., not me) or done in collaboration and with the sponsorship of JasperFx Software customers for their systems.

The biggest highlights and major releases were Marten 7.0 and Wolverine 3.0.

Performance and Scalability

  • Marten 7.0 brought a new “partial update” model based on native PostgreSQL functions that no longer required the PLv8 add on. Hat tip to Babu Annamalai for that feature!
  • The very basic database execution pipeline underneath Marten was largely rewritten to be far more parsimonious with how it uses database connections and to take advantage of more efficient Npgsql usage. That included using the very latest improvements to Npgsql for batching queries and moving to positional parameters instead of named parameters. Small ball optimizations for sure, but being more parsimonious with connections has been advantageous
  • Marten’s “quick append” model sacrifices a little bit of metadata tracking for a whole lot of throughput improvements (we’ve measured a 50% improvement) when appending events. This mode will be a default in Marten 8. This also helps stabilize “event skipping” in the async daemon under heavy loads. I think this was a big win that we need to broadcast more
  • Random optimizations in the “inline projection” model in Marten to reduce database round trips
  • Using PostgreSQL Read Replicas in Marten. Hat tip to JT.
  • First class support for PostgreSQL table partitioning in Marten. Long planned and requested, finally got here. Still admittedly shaking out some database migration issues with this though.
  • Performance optimizations for CQRS command handlers where you want to fetch the final state of a projected aggregate that has been “advanced” as part of the command handler. Mostly in Marten, but there’s a helper in Wolverine too.

Resiliency

Multi Tenancy

Multi-tenancy has been maybe the biggest single source of client requests for JasperFx Software this year. You can hear about some of that in a recent video conversation I got to do with Derek Comartin.

Complex Workflows

I’m probably way too sloppy or at least not being precise about the differences between stateful sagas and process managers and tend to call any stateful, long lived workflow a “saga”. I’m not losing any sleep over that.

“Day 2” Improvements

By “Day 2” I just mean features for production support like instrumentation, database migrations, or event versioning.

Options for Querying

  • Marten 7.0 brought a near rewrite of Marten’s LINQ subsystem that closed a lot of gaps in functionality that we previously had. It also spawned plenty of regression bugs that we’ve had to address in the meantime, but the frequency of LINQ related issues has dramatically fallen
  • Marten got another, more flexible option for the specification pattern. I.e., we don’t need no stinkin’ repositories here!
  • There were quite a few improvements to Marten’s ability to allow you to use explicit SQL as a replacement or supplement to LINQ from the community

Messaging Improvements

This is mostly Wolverine related.

  • A new PostgreSQL backed messaging transport
  • Strictly ordered queuing options in Wolverine
  • “Sticky” message listeners so that only one node in a cluster listens to a certain messaging endpoint. This is super helpful for processes that are stateful. This also helps for multi-tenancy.
  • Wolverine got a GCP Pubsub transport
  • And we finally released the Pulsar transport
  • Way more options for Rabbit MQ conventional message routing
  • Rabbit MQ header exchange support

Test Automation Support

Hey, the “Critter Stack” community takes testability, test automation, and TDD very seriously. To that end, we’ve invested a lot into test automation helpers this year.

Strong Typed Identifiers

Despite all my griping along the way and frankly threatening bodily harm to the authors of some of the most popular libraries for strong typed identifiers, Marten has gotten a lot of first class support for strong typed identifiers in both the document database and event store features. There will surely be more to come because it’s a permutation hell problem where people stumble into yet more scenarios with these damn things.

But whatever, we finally have it. And quite a bit of the most time consuming parts of that work has been de facto paid for by JasperFx clients, which takes a lot of the salt out of the wound for me!

Modular Monolith Usage

This is going to be a major area of improvement for Wolverine here at the tail end of the year because suddenly everybody and their little brother wants to use this architectural pattern in ways that aren’t yet great with Wolverine.

Other Cool New Features

There were actually quite a few more refinements made to both tools, but I’ve exhausted the time I allotted myself to write this, so let’s wrap up.

Summary

Last January I wrote that an aspiration for 2024 was to:

Continue to push Marten & Wolverine to be the best possible technical platform for building event driven architectures

At this point I believe that the “Critter Stack” is already the best set of technical tooling in the .NET ecosystem for building a system using an Event Driven Architecture, especially if Event Sourcing is a significant part of your persistence strategy. There are other messaging frameworks that have more messaging options, but Wolverine already does vastly more to help you productively write code that’s testable, resilient, easier to reason about, and well instrumented than older messaging tools in the .NET space. Likewise, Wolverine.HTTP is the lowest ceremony coding model for ASP.Net Core web service development, and the only one that has a first class transactional outbox integration. In terms of just Event Sourcing, I do not believe that Marten has any technical peer in the .NET ecosystem.

But of course there are plenty of things we can do better, and we’re not standing still in 2025 by any means. After some rest, I’ll pop back in January with some aspirations and theoretical roadmap for the “Critter Stack” in 2025. Details then, but expect that to include more database options and yes, long simmering plans for commercialization. And the overarching technical goal in 2025 for the “Critter Stack” is to be the best technical platform on the planet for Event Driven Architecture development.

Marten Improvements in 7.34

Through a combination of Marten community members and in collaboration with some JasperFx Software clients, we’re able to push some new fixes and functionality in Marten 7.34 just today.

For the F# Person in your Life

You can now use F# Option types in LINQ Where() clauses in Marten. Check out the pull request for that to see samples. The LINQ provider code is just a difficult problem domain, and I can’t tell you how grateful I am to have gotten the community pull request for this.

Fetch the Latest Aggregate

Marten has had the FetchForWriting() API for a while now as our recommended way to build CQRS command handlers with Marten event sourcing, as I wrote about recently in CQRS Command Handlers with Marten. Great, but…

  1. What if you just want a read only view of the current data for an aggregate projection over a single event stream and wouldn’t mind a lighter weight API than FetchForWriting()?
  2. What if in your command handler you used FetchForWriting(), but now you want to return the now updated version of your aggregate projection for the caller of the command? And by the way, you want this to be as performant as possible no matter how the projection is configured.

Now you’re in luck, because Marten 7.34 adds the new FetchLatest() API for both of the bullets above.

Let’s pretend we’re building an invoicing system with Marten event sourcing and have this “self-aggregating” version of an Invoice:

public record InvoiceCreated(string Description, decimal Amount);

public record InvoiceApproved;
public record InvoiceCancelled;
public record InvoicePaid;
public record InvoiceRejected;

public class Invoice
{
    public Invoice()
    {
    }

    public static Invoice Create(IEvent<InvoiceCreated> created)
    {
        return new Invoice
        {
            Amount = created.Data.Amount,
            Description = created.Data.Description,

            // Capture the timestamp from the event
            // metadata captured by Marten
            Created = created.Timestamp,
            Status = InvoiceStatus.Created
        };
    }

    public int Version { get; set; }

    public decimal Amount { get; set; }
    public string Description { get; set; }
    public Guid Id { get; set; }
    public DateTimeOffset Created { get; set; }
    public InvoiceStatus Status { get; set; }

    public void Apply(InvoiceCancelled _) => Status = InvoiceStatus.Cancelled;
    public void Apply(InvoiceRejected _) => Status = InvoiceStatus.Rejected;
    public void Apply(InvoicePaid _) => Status = InvoiceStatus.Paid;
    public void Apply(InvoiceApproved _) => Status = InvoiceStatus.Approved;
}

And for now, we’re going to let our command handlers just use a Live aggregation of the Invoice from the raw events on demand:

var builder = Host.CreateApplicationBuilder();
builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));

    // Just telling Marten upfront that we will use
    // live aggregation for the Invoice aggregate
    // This would be the default anyway if you didn't explicitly
    // register Invoice this way, but doing so lets
    // Marten "know" about Invoice for code generation
    opts.Projections.LiveStreamAggregation<Projections.Invoice>();
});

Now we can get at the latest, greatest, current view of the Invoice that is consistent with the captured events for that invoice stream at this very moment with this usage:

public static async Task read_latest(
    // Watch this, only available on the full IDocumentSession
    IDocumentSession session,
    Guid invoiceId)
{
    var invoice = await session
        .Events.FetchLatest<Projections.Invoice>(invoiceId);
}

The usage of the API above would be completely unchanged if you were to switch the projection lifecycle of the Invoice to be either Inline (where the view is updated in the database at the same time new events are captured) or Async. That usage gives you a little bit of what we called “reversibility” in the XP days, which just means that you’re easily able to change your mind later about exactly what projection lifecycle you want to use for Invoice views.
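As a quick illustration of that reversibility, flipping the Invoice view from Live to Inline is just a registration change using the same Snapshot API shown elsewhere in this post, and the FetchLatest() call above stays exactly as it is:

builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));

    // Swap the earlier LiveStreamAggregation() registration for an
    // Inline snapshot; no change is needed at any FetchLatest() call site
    opts.Projections.Snapshot<Projections.Invoice>(SnapshotLifecycle.Inline);
});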

The main reason that FetchLatest() was envisioned, however, was to pair up with FetchForWriting() in command handlers. It’s turned out to be a common use case that folks want their command handlers to:

  1. Use the current state of the projected aggregate for the event stream to…
  2. “Decide” what new events should be appended to this stream based on the incoming command and existing state of the aggregate
  3. Save the changes
  4. Return a now updated version of the projected aggregate for the event stream with the newly captured events reflected in the projected aggregate.

There is going to be a slicker integration of this with Wolverine’s aggregate handler workflow with Marten by early next week, but for now, let’s pretend we’re working with Marten from within, say, an ASP.Net Minimal API and want to just work that way. Let’s say that we have a little helper method for a mini-“Decider” pattern implementation for our Invoice event streams like this one:

public static class MutationExtensions
{
    public static async Task<Projections.Invoice> MutateInvoice(this IDocumentSession session, Guid id, Func<Projections.Invoice, IEnumerable<object>> decider,
        CancellationToken token = default)
    {
        var stream = await session.Events.FetchForWriting<Projections.Invoice>(id, token);

        // Decide what new events should be appended based on the current
        // state of the aggregate and application logic
        var events = decider(stream.Aggregate);
        stream.AppendMany(events);

        // Persist any new events
        await session.SaveChangesAsync(token);

        return await session.Events.FetchLatest<Projections.Invoice>(id, token);
    }
}

Which could be used something like:

public static Task Approve(IDocumentSession session, Guid invoiceId)
{
    // I'd maybe suggest taking the lambda being passed in
    // here out somewhere where it's easy to test
    // Wolverine does that for you, so maybe just use that!
    return session.MutateInvoice(invoiceId, invoice =>
    {
        if (invoice.Status != InvoiceStatus.Approved)
        {
            return [new InvoiceApproved()];
        }

        return [];
    });
}

New Marten System Level “Archived” Event

Much more on this soon with a more end to end example with Wolverine explaining how this adds value for performance and testability.

Marten now has a built in event named Archived that can be appended to any event stream:

namespace Marten.Events;

/// <summary>
/// The presence of this event marks a stream as "archived" when it is processed
/// by a single stream projection of any sort
/// </summary>
public record Archived(string Reason);

When this event is appended to an event stream and that event is processed through any type of single stream projection for that event stream (snapshot or what we used to call a “self-aggregate”, SingleStreamProjection, or CustomProjection with the AggregateByStream option), Marten will automatically mark that entire event stream as archived as part of processing the projection. This applies for both Inline and Async execution of projections.

Let’s try to make this concrete by building a simple order processing system that might include this aggregate:

public class Item
{
    public string Name { get; set; }
    public bool Ready { get; set; }
}

public class Order
{
    // This would be the stream id
    public Guid Id { get; set; }

    // This is important, by Marten convention this would
    // be the version of the event stream
    public int Version { get; set; }

    public Order(OrderCreated created)
    {
        foreach (var item in created.Items)
        {
            Items[item.Name] = item;
        }
    }

    public void Apply(IEvent<OrderShipped> shipped) => Shipped = shipped.Timestamp;
    public void Apply(ItemReady ready) => Items[ready.Name].Ready = true;

    public DateTimeOffset? Shipped { get; private set; }

    public Dictionary<string, Item> Items { get; set; } = new();

    public bool IsReadyToShip()
    {
        return Shipped == null && Items.Values.All(x => x.Ready);
    }
}
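
For completeness, the event and command types referenced in these samples aren’t shown here, so the following is a hypothetical sketch of their shapes inferred purely from how they’re used above and below:

// Hypothetical event and command definitions inferred from usage
public record OrderCreated(Item[] Items);
public record OrderShipped;
public record ItemReady(string Name);

public record ShipOrder(Guid OrderId);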

Next, let’s say we’re snapshotting the Order aggregate so that it’s updated every time new events are captured, like so:

var builder = Host.CreateApplicationBuilder();
builder.Services.AddMarten(opts =>
{
    opts.Connection("some connection string");

    // The Order aggregate is updated Inline inside the
    // same transaction as the events being appended
    opts.Projections.Snapshot<Order>(SnapshotLifecycle.Inline);

    // Opt into an optimization for the inline aggregates
    // used with FetchForWriting()
    opts.Projections.UseIdentityMapForAggregates = true;
})

// This is also a performance optimization in Marten to disable the
// identity map tracking overall in Marten sessions if you don't
// need that tracking at runtime
.UseLightweightSessions();

Now, let’s say that to keep our application performing as well as possible, we’d like to be aggressive about archiving shipped orders so that the “hot” event storage table stays small. One way to do that is to append the Archived event as part of processing a command to ship an order, like so:

public static async Task HandleAsync(ShipOrder command, IDocumentSession session)
{
    var stream = await session.Events.FetchForWriting<Order>(command.OrderId);
    var order = stream.Aggregate;

    if (!order.Shipped.HasValue)
    {
        // Mark it as shipped
        stream.AppendOne(new OrderShipped());

        // But also, the order is done, so let's mark it as archived too!
        stream.AppendOne(new Archived("Shipped"));

        await session.SaveChangesAsync();
    }
}

If an Order hasn’t already shipped, one of the outcomes of that command handler executing is that the entire event stream for the Order will be marked as archived.
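
One practical note to go along with this: archived events are filtered out of Marten’s event queries by default, which is exactly where the performance win comes from. My understanding of the current API is that you can still opt a query into seeing archived data with the MaybeArchived() extension, roughly like this sketch (session and orderId are assumed context):

// Event LINQ queries exclude archived events by default.
// MaybeArchived() opts this query into seeing active and archived events
var history = await session.Events
    .QueryAllRawEvents()
    .Where(x => x.MaybeArchived())
    .Where(x => x.StreamId == orderId)
    .ToListAsync();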

Marten Event Sourcing Gets Some New Tools

JasperFx Software has recently gotten the chance to build out several strategic improvements to both Marten and Wolverine through collaborations with clients who have had some specific needs. This has been highly advantageous because it’s helped push some significant, long planned technical improvements while getting all-important feedback as clients integrate the new features. Today I’d like to throw out a couple of valuable features and capabilities that Marten has gained as part of recent client work.

“Side Effects” in Projections

In a recent post called Multi Step Workflows with the Critter Stack I talked about using Wolverine sagas (really process managers, if you have to be precise about the pattern name, since I’m sloppy about interchanging “saga” and “process manager”) for long running workflows. In that post I described how an incoming file would be:

  1. Broken up into batches of rows
  2. Each batch would be validated as a separately handled message for some parallelization and more granular retries
  3. Once validation results were recorded for every record batch, the file processing would either stop, with a callback message summarizing the failures sent to the upstream sender, or continue to the next stage.

As it turns out, event sourcing with a projected aggregate document for the state of the file import is another good way to implement this workflow, especially with the new “side effects” model recently introduced in Marten at the behest of a JasperFx client.

In this usage, let’s say that we have this aggregated state for a file being imported:

public class FileImportState
{
    // Identity for this saga within our system
    public Guid Id { get; set; }
    public string FileName { get; set; }
    public string PartnerTrackingNumber { get; set; }
    public DateTimeOffset Created { get; set; } = DateTimeOffset.UtcNow;
    public List<RecordBatchTracker> RecordBatches { get; set; } = new();

    public FileImportStage Stage { get; set; } = FileImportStage.Validating;
}
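
The supporting types aren’t shown in this example, so here’s a hypothetical sketch of what they might look like, inferred from how the projection below uses them:

// Hypothetical supporting types inferred from usage in the projection
public enum FileImportStage { Validating, Importing, Rejected }
public enum RecordStatus { Pending, Valid, Invalid }

public class RecordBatchTracker
{
    public Guid BatchId { get; set; }
    public RecordStatus ValidationStatus { get; set; } = RecordStatus.Pending;
    public List<string> ValidationMessages { get; set; } = new();
}

// Events appended as side effects of the projection
public record ValidationFailed;
public record ValidationSucceeded;

// Outgoing message summarizing the validation failures
public class FileRejectionSummary
{
    public string FileName { get; set; }
    public string[] Messages { get; set; }
    public string TrackingNumber { get; set; }
}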

The FileImportState would be updated by appending events like BatchValidated, with Marten “projecting” those events into the rolled up state of the entire file. Marten’s async daemon runs projections in a background process, working on a certain range of events (think events 10,000 to 11,000) at a time. As the daemon processes events into the projection for FileImportState, it groups the events in that range into event “slices” by file id.

For managing the workflow, we can now append new events as a “side effect” of processing an event slice in the daemon as the aggregation data is updated in the background. Let’s say that we have a single stream projection for our FileImportState aggregation like this:

public class FileImportProjection : SingleStreamProjection<FileImportState>
{
    // Other Apply / Create methods to update the state of the 
    // FileImportState aggregate document

    public override ValueTask RaiseSideEffects(IDocumentOperations operations, IEventSlice<FileImportState> slice)
    {
        var state = slice.Aggregate;
        if (state.Stage == FileImportStage.Validating &&
            state.RecordBatches.All(x => x.ValidationStatus != RecordStatus.Pending))
        {
            // At this point, the file is completely validated, and we can decide what should happen next with the
            // file
            
            // Are there any validation message failures?
            var rejected = state.RecordBatches.SelectMany(x => x.ValidationMessages).ToArray();
            if (rejected.Any())
            {
                // Append a validation failed event to the stream
                slice.AppendEvent(new ValidationFailed());
                
                // Also, send an outgoing command message that summarizes
                // the validation failures
                var message = new FileRejectionSummary()
                {
                    FileName = state.FileName,
                    Messages = rejected,
                    TrackingNumber = state.PartnerTrackingNumber
                };
                
                // This will "publish" a message once the daemon
                // has successfully committed all changes for the 
                // current batch of events
                // Unsurprisingly, there's a Wolverine integration 
                // for this
                slice.PublishMessage(message);
            }
            else
            {
                slice.AppendEvent(new ValidationSucceeded());
            }
        }

        return ValueTask.CompletedTask;
    }
}

And as the comment above suggests, there is also a Wolverine integration that lets these outgoing messages flow through Wolverine’s normal message publishing as part of asynchronous projection processing.
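
As a rough sketch of that wiring, assuming the WolverineFx.Marten package and that the exact chaining may differ a bit by version, the integration hangs off the same AddMarten() registration:

using Wolverine;
using Wolverine.Marten;

var builder = Host.CreateApplicationBuilder();

builder.Services.AddMarten(opts =>
    {
        opts.Connection("some connection string");

        // Run the file import projection through the async daemon
        opts.Projections.Add(new FileImportProjection(), ProjectionLifecycle.Async);
    })
    // Messages "published" from projection side effects are routed
    // through Wolverine after the daemon commits each batch
    .IntegrateWithWolverine()

    // Host the async daemon in this process
    .AddAsyncDaemon(DaemonMode.HotCold);

builder.UseWolverine();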

This feature has long, long been planned, and I was glad to finally get the chance to build it out this fall for a client. I’m happy to say that it’s now in production for them after the obligatory shakedown cruise and some bug fixes.

Optimized Projection Rebuilds

Another JasperFx client retrofitted Marten into an in-flight system using event sourcing over a very large data set, but didn’t take advantage of many Marten capabilities, including the ability to effectively pre-build or “snapshot” projected data to optimize reads of system state.

With a little bit of work in their system, we knew we would be able to introduce projection snapshotting through Marten’s blue/green deployment model for projections, where Marten would immediately start trying to pre-build the new projection (or new version of an existing projection) from scratch. Great! Except we also knew that was potentially going to be a major performance problem until the projection caught up to the current “high water mark” of the event store.

To ease the cost of introducing a new, persisted projection on top of ~100 million events, we built out Marten’s new optimized projection rebuild feature. To demonstrate what I mean, let’s first opt into using this feature (it had to be opt in because it forces users to make additive changes to existing database tables):

builder.Services.AddMarten(opts =>
{
    opts.Connection("some connection string");

    // Opts into a mode where Marten is able to rebuild single
    // stream projections faster by building one stream at a time
    // Does require new table migrations for Marten 7 users though
    opts.Events.UseOptimizedProjectionRebuilds = true; 
});

Now, when our users redeploy their system with the new snapshotted projection running through Marten’s async workflow for the first time, Marten will see that the projection has never been processed before and will try to use an “optimized rebuild mode.” Since we’ve turned on optimized projection rebuilds, for a single stream projection Marten runs the projection in “rebuild” mode by:

  1. First building a new table that tracks each event stream related to the aggregate type in question, ordered in reverse by when each stream was last changed. The whole point of that ordering is to make the optimized rebuild deal with the most recently changed event streams first, so that the system can perform well even while the rebuild process is running
  2. Rebuilding the aggregates event stream by event stream as a way of minimizing the number of database reads and writes it takes to rebuild the single stream projection. Compare that to the previous, naive “left fold” approach that just works from event sequence = 1 to the high water mark and constantly writes and reads back the same projection documents as they are encountered throughout the event store
  3. When the optimized rebuild is complete, switching the projection to running in its normal, continuous mode from the point at which the rebuild started

That’s a lot of words and maybe some complicated explanation, but the point is that Marten makes it possible to introduce new projections to a large, in flight system without incurring system downtime or even inconsistent data showing up to users.
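
And if you ever need to force the issue yourself, a rebuild can also be kicked off explicitly through the existing projection daemon API, something like this sketch (store here is an assumed IDocumentStore):

// Spin up a projection daemon for the default database
using var daemon = await store.BuildProjectionDaemonAsync();

// Rebuild the Order snapshot from the event store; with
// UseOptimizedProjectionRebuilds enabled, single stream projections
// are rebuilt stream by stream as described above
await daemon.RebuildProjectionAsync<Order>(CancellationToken.None);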

Other Recent Improvements for Clients

Some other recent work that JasperFx has done for our clients includes:

Summary

I was part of a discussion slash argument a couple of weeks ago about whether it’s necessary to use an off-the-shelf event sourcing library or framework like Marten, or whether you’re just fine rolling your own. While I’d gladly admit that you can easily build a bare-bones storage subsystem for events yourself, it’s not even remotely feasible to quickly roll your own tooling that matches advanced Marten features like the ones I presented here.