Introducing the JasperFx Software YouTube Channel

JasperFx Software is in business to help our clients make the most of the “Critter Stack” tools, Event Sourcing, CQRS, Event Driven Architecture, Test Automation, and server side .NET development in general. We’d be happy to talk with your company and see how we could help you be more successful!

Jeffry Gonzalez and I have kicked off what we plan to be a steady stream of content on the “Critter Stack” (Marten, Wolverine, and related tools) in the JasperFx Software YouTube channel.

In the first video, we started diving into a new sample “Incident Service” (admittedly still heavily in flight) that shows how to use Marten both for Event Sourcing and as a Document Database over PostgreSQL, plus its integration with Wolverine as a higher level HTTP web service and asynchronous messaging platform.

We covered a lot, but here are some of the highlights:

  • Hopefully showing off how easy it is to get started with Marten and Wolverine both, especially with Marten’s ability to lay down its own database schema as needed in its default mode. Later videos will show off how Wolverine does the same for any database schemas it needs and even message broker setup.
  • Utilizing Wolverine.HTTP for web services and how it can be used for a very low code ceremony approach for “Vertical Slice Architecture” and how it promotes testability in code without all the hassle of a complex Clean Architecture project structure or reams of abstractions scattered about in your code. It also leads to simpler code than the more common “MVC Core/Minimal API + MediatR” approach to Vertical Slice Architecture.
  • How Wolverine’s emphasis on pure function handlers leads to business or workflow logic being easy to test
  • Integration testing through the entire stack with Alba specifications inside of xUnit.Net test harnesses (see the sketch just after this list).
  • The Critter Stack’s support for command line diagnostics and development time tools, including a way to “unwind the magic” with Wolverine so it can show you exactly how it’s calling your code
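
As a small taste of the Alba-style integration testing mentioned above, here’s a minimal sketch of what one of those specifications can look like. The LogIncident command, the route, and the expected status code are my assumptions for illustration, not the exact code from the video:

public record LogIncident(Guid CustomerId, string Description);

public class logging_incidents
{
    [Fact]
    public async Task log_a_new_incident_happy_path()
    {
        // Bootstrap the real application in memory with Alba; a real test suite
        // would typically share this host across tests for speed
        await using var host = await AlbaHost.For<Program>();

        // Run an HTTP scenario end to end through the whole application
        await host.Scenario(x =>
        {
            x.Post.Json(new LogIncident(Guid.NewGuid(), "The camera is down")).ToUrl("/api/incidents");
            x.StatusCodeShouldBe(201);
        });
    }
}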

Here’s the first video:

In the second video, we got into:

  • Wolverine’s “aggregate handler workflow” style of CQRS command handlers and how you can do that with easily testable pure functions
  • A little bit about Marten projection lifecycles and how that impacts performance or consistency
  • Using Marten’s ability to stream JSON data directly to HTTP for the most efficient possible “read side” query endpoints
  • Wolverine’s message scheduling capability
  • Marten’s utilization of PostgreSQL partitioning for maximizing scalability

I can’t say for sure where we’ll go next, but there will be a part 3 to this series in the next couple weeks and hopefully a series of shorter video content soon too! We’re certainly happy to take requests!

Wringing More Scalability out of Event Sourcing with the Critter Stack

JasperFx Software works with our customers to help wring the absolute best results out of their usage of the “Critter Stack.” We built several improvements in collaboration with our customers last year to both Marten and Wolverine specifically to improve the scalability of large systems using Event Sourcing. If you’re concerned about whether or not your approach to Event Sourcing will actually scale, definitely look at the Critter Stack, and give JasperFx a shout for help making it all work.

Alright, you’re using Event Sourcing with the whole Critter Stack, and you want to get the best scalability possible in the face of an expected onslaught of incoming events. There are some “opt in” features, in Marten especially, that you can take advantage of to get your system going a little bit faster and to handle bigger databases.

Using the near ubiquitous “Incident Service” example originally built by Oskar Dudycz, the “Critter Stack” community is building out a new version in the Wolverine codebase that, when (and if) finished, will hopefully show off an end to end example of an event sourced workflow.

In this application we’ll need to track common events for the workflow of a customer reported Incident, like when it’s logged, categorised, has notes added, and hopefully gets closed. Coming into this, we think it’s going to get very heavy usage, so we expect to have tons of events streaming into the database. We’ve also been told by our business partners that we only need to retain closed incidents in the active views of the user interface for a certain amount of time — but we never want to lose data permanently.
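
To make that a little more concrete, here’s a rough sketch of the kinds of events we might capture for an Incident stream. The exact names and members here are my assumptions loosely based on the sample, not its final shape:

public enum IncidentCategory { Software, Hardware, Network, Database }

// Events appended to a single Incident stream over its lifetime
public record IncidentLogged(Guid CustomerId, string Description, Guid LoggedBy);
public record IncidentCategorised(Guid IncidentId, IncidentCategory Category, Guid CategorisedBy);
public record IncidentNoteAdded(Guid IncidentId, string Note, Guid AddedBy);
public record IncidentClosed(Guid ClosedBy);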

All that being said, let’s look at a few options we can enable in Marten right off the bat:

builder.Services.AddMarten(opts =>
{
    var connectionString = builder.Configuration.GetConnectionString("Marten");
    opts.Connection(connectionString);
    opts.DatabaseSchemaName = "incidents";
    
    // We're going to refer to this one soon
    opts.Projections.Snapshot<Incident>(SnapshotLifecycle.Inline);

    // Use PostgreSQL partitioning for hot/cold event storage
    opts.Events.UseArchivedStreamPartitioning = true;
    
    // Recent optimization that will specifically make command processing
    // with the Wolverine "aggregate handler workflow" a bit more efficient
    opts.Projections.UseIdentityMapForAggregates = true;

    // This is big, use this by default with all new development
    // Long story
    opts.Events.AppendMode = EventAppendMode.Quick;
})
    
// Another performance optimization if you're starting from
// scratch
.UseLightweightSessions()
    
// Run projections in the background
.AddAsyncDaemon(DaemonMode.HotCold)

// This adds configuration with Wolverine's transactional outbox and
// Marten middleware support to Wolverine
.IntegrateWithWolverine();

There are four options here I want to bring to your attention:

  1. UseLightweightSessions() directs Marten to use lightweight IDocumentSession sessions by default (that’s what gets injected by your DI container), avoiding the performance overhead of identity map tracking in the session. Don’t use this, of course, if you really do want or need the identity map tracking.
  2. opts.Events.UseArchivedStreamPartitioning = true sets us up for Marten’s “hot/cold” event storage scheme using PostgreSQL native partitioning. More on this in the section on stream archiving below. Read more about this feature in the Marten documentation.
  3. Setting UseIdentityMapForAggregates = true opts into some recent performance optimizations for updating Inline aggregates through Marten’s FetchForWriting API. More detail on this here. Long story short, this makes Marten and Wolverine do less work and make fewer database round trips to support the aggregate handler workflow I’m going to demonstrate below.
  4. Events.AppendMode = EventAppendMode.Quick makes the event appending operations upon saving a Marten session a lot faster, like 50% faster in our testing. It also makes Marten’s “async daemon” feature work smoothly. The downside is that you lose access to some event metadata during Inline projections — which most people won’t care about, but again, we try not to break existing users.

The “Aggregate Handler Workflow”

I have typically described this as Wolverine’s version of the Decider pattern, but I’m now saying that this is a significantly different approach, one that I believe will lead to better results in larger systems than the “Decider” because it manages complexity better and handles several technical details that the “Decider” pattern does not. Plus, with Wolverine’s “Aggregate Handler Workflow” you won’t end up with the humongous switch statements that a Decider function can easily become with any level of domain complexity.

Using Wolverine’s aggregate handler workflow, a command handler that may result in a new event being appended to Marten will look like this one for categorizing an incident:

public static class CategoriseIncidentEndpoint
{
    // This is Wolverine's form of "Railway Programming"
    // Wolverine will execute this before the main endpoint,
    // and stop all processing if the ProblemDetails is *not*
    // "NoProblems"
    public static ProblemDetails Validate(Incident incident)
    {
        return incident.Status == IncidentStatus.Closed 
            ? new ProblemDetails { Detail = "Incident is already closed" } 
            
            // All good, keep going!
            : WolverineContinue.NoProblems;
    }
    
    // This tells Wolverine that the first "return value" is NOT the response
    // body
    [EmptyResponse]
    [WolverinePost("/api/incidents/{incidentId:guid}/category")]
    public static IncidentCategorised Post(
        // the actual command
        CategoriseIncident command, 
        
        // Wolverine is generating code to look up the Incident aggregate
        // data for the event stream with this id
        [Aggregate("incidentId")] Incident incident)
    {
        // This is a simple case where we're just appending a single event to
        // the stream.
        return new IncidentCategorised(incident.Id, command.Category, command.CategorisedBy);
    }
}

The UseIdentityMapForAggregates = true flag optimizes the code above by allowing Marten to use the exact same Incident aggregate object that was originally passed into the Post() method above as the starting basis for updating the Incident data stored in the database. The application of the Inline projection to update the Incident will start with our originally fetched value, apply any new events on top of that, and update the Incident in the same transaction as the events being captured. Without that flag, Marten would have to fetch the Incident starting data from the database all over again when it applies the projection updates while committing the Marten unit of work containing the events.

There’s plenty of rocket science and sophisticated techniques for improving performance, but one simple thing that almost always works out is not repetitively fetching the exact same data from the database if you don’t have to — and that’s the point of the UseIdentityMapForAggregates optimization.
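
To make that concrete, here’s a hand-written sketch of roughly what the code Wolverine generates around the Post() method above is doing with Marten’s FetchForWriting() API:

// Roughly what Wolverine generates around CategoriseIncidentEndpoint.Post()
var stream = await session.Events.FetchForWriting<Incident>(incidentId);

// This is the aggregate handed to the Post() method up above
var incident = stream.Aggregate;

var @event = CategoriseIncidentEndpoint.Post(command, incident);
stream.AppendOne(@event);

// With UseIdentityMapForAggregates = true, the Inline projection reuses the
// same Incident object we already have in memory rather than re-fetching it
// from the database before applying the new events
await session.SaveChangesAsync();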

Hot/Cold Storage

Here’s an exciting, relatively new feature in Marten that was planned for years before JasperFx was able to build it for a client late last year. The UseArchivedStreamPartitioning flag sets up your Marten database for “hot/cold” storage.

Again, it might require some brain surgery to really improve performance sometimes, but an absolute no-brainer that’s frequently helpful is to just keep your transactional database tables as small and sprightly as possible over time by moving out obsolete or archived data — and that’s exactly what we’re going to do here.

When an Incident event stream is closed, we want to keep that Incident data shown in the user interface for 3 days, then we’d like all the data for that Incident to get archived. Here’s the sample command handler for the CloseIncident command:

public record CloseIncident(
    Guid ClosedBy,
    int Version
);

public static class CloseIncidentEndpoint
{
    [WolverinePost("/api/incidents/close/{id}")]
    public static (UpdatedAggregate, Events, OutgoingMessages) Handle(
        CloseIncident command, 
        [Aggregate]
        Incident incident)
    {
        /* More logic for later
        if (current.Status is not IncidentStatus.ResolutionAcknowledgedByCustomer)
               throw new InvalidOperationException("Only incident with acknowledged resolution can be closed");

           if (current.HasOutstandingResponseToCustomer)
               throw new InvalidOperationException("Cannot close incident that has outstanding responses to customer");

         */
        
        
        if (incident.Status == IncidentStatus.Closed)
        {
            return (new UpdatedAggregate(), [], []);
        }

        return (

            // Returning the latest view of
            // the Incident as the actual response body
            new UpdatedAggregate(),

            // New event to be appended to the Incident stream
            [new IncidentClosed(command.ClosedBy)],

            // Getting fancy here, telling Wolverine to schedule a 
            // command message for three days from now
            [new ArchiveIncident(incident.Id).DelayedFor(3.Days())]);
    }
}

The ArchiveIncident message is being published by this handler using Wolverine’s scheduled message capability so that it will be executed exactly 3 days from the current time (you could get fancier and set an exact time like end of business on that day if you wanted).

Note that even when doing the message scheduling, we can still use Wolverine’s cascading message feature. The point of doing this is to keep our handler a pure function that doesn’t have to invoke services, create side effects, or do anything that would force us into asynchronous methods and all of the inherent complexity and noise that inevitably causes.
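
And because the handler is a pure function, a unit test needs nothing but inputs and outputs. Here’s a minimal sketch assuming xUnit (and assuming Incident has a settable Status and an Open value for test setup, which isn’t shown in the sample above):

[Fact]
public void closing_an_open_incident_also_schedules_the_archiving()
{
    var incident = new Incident { Id = Guid.NewGuid(), Status = IncidentStatus.Open };

    var (_, events, messages) = CloseIncidentEndpoint.Handle(
        new CloseIncident(ClosedBy: Guid.NewGuid(), Version: 1),
        incident);

    // The new IncidentClosed event should be appended to the stream
    Assert.IsType<IncidentClosed>(Assert.Single(events));

    // And exactly one cascading message (the scheduled ArchiveIncident) goes out
    Assert.Single(messages);
}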

The ArchiveIncident command handler might just be this:

public record ArchiveIncident(Guid IncidentId);

public static class ArchiveIncidentHandler
{
    // Just going to code this one pretty crudely.
    // I'm assuming that we have "auto-transactions"
    // turned on in Wolverine so we don't have to mess
    // with the asynchronous IDocumentSession.SaveChangesAsync()
    public static void Handle(ArchiveIncident command, IDocumentSession session)
    {
        session.Events.Append(command.IncidentId, new Archived("It's done, baby!"));
        session.Delete<Incident>(command.IncidentId);
    }
}

When that command executes in three days time, it will delete the projected Incident document from the database and mark the event stream as archived, which will cause PostgreSQL to move that data into the “cold” archived storage.

To close the loop, all normal database operations in Marten specifically filter out archived data with a SQL filter so that they will always be querying directly against the much smaller “active” partition table.
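
And if you ever do need to reach back into the cold storage, say for an audit screen, you can explicitly opt into including archived events in a query. A quick sketch (the incidentId variable is assumed):

// Query across both the "hot" and "cold" partitions for one Incident stream
var history = await session.Events
    .QueryAllRawEvents()
    .Where(x => x.StreamId == incidentId && x.MaybeArchived())
    .OrderBy(x => x.Version)
    .ToListAsync();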

To sum this up, if you use the event archival partitioning and are able to be aggressive about archiving event streams, you can hugely improve the performance of your event sourced application even after you’ve captured a huge number of events, because the actual table that Marten is reading from and writing to will stay relatively stable in size.

As the late, great Stuart Scott would have told us, that’s cooler than the other side of the pillow!

Why aren’t these all defaults?!?

It’s an imperfect world. Every one of the flags I showed here either subtly changes underlying behavior or forces additive changes to your application database. The UseIdentityMapForAggregates flag has to be “opt in” because using it will absolutely give unexpected results for Marten users who mutate the projected aggregate inside of their command handlers (basically anyone doing any type of AggregateRoot base class approach).

Likewise, Marten was originally built using a session with the somewhat more expensive identity map mechanics built in to mimic the commercial tool we were originally trying to replace. I’ve always regretted this decision, but once this has escaped into real systems, changing the underlying behavior absolutely breaks some existing code.

Lastly, introducing the hot/cold partitioning of the event & stream tables to an existing database will cause an expensive database migration, and we certainly don’t want to be inflicting that on unsuspecting users doing an upgrade.

It’s a lot of overhead and compromise, but we’ve chosen to maintain backward compatibility for existing users over enabling out of the box performance improvements.

But wait, there’s more!

Marten has been able to grow quite a bit in capability after I started JasperFx Software as a company to support it. Doing that has allowed us to partner with shops pushing the limits on Marten and Wolverine, and the feedback, collaboration, and yes, compensation has allowed us to push the Critter Stack’s capabilities a lot in the last 18 months.

Wolverine now has the ability to better spread the work of running projections and event subscriptions from Marten over an application cluster.

Sometime in the current quarter, we’re also going to be building and releasing a new “Stream Compacting” feature as another way to deal with archiving data from very long event streams. And yes, a lot of the Event Sourcing community will lecture you about how you should “keep your streams short,” and while there may be some truth to that, that advice is partially a result of using less capable technical event sourcing solutions. We strive to make Marten & Wolverine more robust so you don’t have to be omniscient and perfect in your upfront modeling.

Why the Critter Stack is Good

JasperFx Software already has a strong track record in our short life of helping our customers be more successful using Event Sourcing, Event Driven Architecture, and Test Automation. Much of the content from these new guides came directly out of our client work. We’re certainly ready to partner with your shop as well!

I’ve had a chance the past two weeks to really buckle down and write more tutorials and guides for Wolverine by itself and the full “Critter Stack” combination with Marten. I’ll admit to being a little disappointed by the download numbers on Wolverine right now, but all that really means is that there’s a lot of untapped potential for growth!

If you do any work on the server side with .NET, or are looking for a technical platform to use for event sourcing, event driven architecture, web services, or asynchronous messaging, Wolverine is going to help you build systems that are resilient, easy to change, and highly testable without having to incur the code complexity common to Clean/Onion/Hexagonal Architecture approaches.

Please don’t make a direct comparison of Wolverine to MediatR as a straightforward “Mediator” tool, or to MassTransit or NServiceBus as an Asynchronous Messaging framework, or to MVC Core as a straight up HTTP service framework. Wolverine does far more than any of those other tools to help you write your actual application code.

On to the new guides for Wolverine:

  • Converting from MediatR – We’re getting more and more questions from users who are coming from MediatR to Wolverine to take advantage of Wolverine capabilities like a transactional outbox that MediatR lacks. Going much further though, this guide tries to explain how to first shift to Wolverine, some important features that Wolverine provides that MediatR does not, and how to lean into Wolverine to make your code a lot simpler and easier to test.
  • Vertical Slice Architecture – Wolverine has quite a bit of “special sauce” that makes it a unique fit for “Vertical Slice Architecture” (VSA). We believe that Wolverine does more to make a VSA coding style effective than any other server side tooling in the .NET ecosystem. If you haven’t looked at Wolverine recently, you’ll want to check this out because Wolverine just got even more ways to simplify code and improve testability in vertical slices without having to resort to the kind of artifact bloat that’s nearly inevitable with prescriptive Clean/Onion Architecture approaches.
  • Modular Monolith Architecture – I’ll freely admit that Wolverine was originally optimized for micro-services, and we’ve had to scramble a bit in the recent 3.6.0 release and today’s 3.7.0 release to improve Wolverine’s support for how folks are wanting to do asynchronous workflows between modules in a modular monolith approach. In this guide we’ll talk about how best to use Wolverine for modular monolith architectures, dealing with eventual consistency, database tooling usage, and test automation.
  • CQRS and Event Sourcing with Marten – Marten is already the most robust and most commonly used toolset for Event Sourcing in the .NET ecosystem. Combined with Wolverine to form the full “Critter Stack,” we think it is one of the most productive toolsets for building resilient and scalable systems using CQRS with Event Sourcing and this guide will show you how the Critter Stack gets that done. There’s also a big section on building integration testing harnesses for the Critter Stack with some of their test support. There are some YouTube videos coming soon that cover this same ground and using some of the same samples.
  • Railway Programming – Wolverine has some lightweight facilities for “Railway Programming” inside of message handlers or HTTP endpoints that can help code complex workflows with simpler individual steps — and do that without incurring loads of generics and custom “result” types. And for a bonus, this guide even shows you how Wolverine’s Railway Programming usage helps you generate OpenAPI metadata from type signatures without having to clutter up your code with noisy attributes to keep the ReST police off your back.

I personally need a break from writing documentation, but we’ll pop up soon with additional guides for:

  • Moving from NServiceBus or MassTransit to Wolverine
  • Interoperability with Wolverine

And on strictly the Marten side of things:

  • Complex workflows with Event Sourcing
  • Multi-Stream Projections

Wolverine 3.6: Modular Monolith and Vertical Slice Architecture Goodies

Wolverine 3.6 just went out tonight as a big release with bug fixes and quite a few significant features to improve Wolverine‘s usability for modular monolith architectures and to further improve Wolverine’s already outstanding usability for vertical slice architecture.

Highlights:

  • New Persistence Helpers feature to make handlers or http endpoint code even cleaner
  • The new “Separated” option to better support multiple handlers for the same message type, which has been a source of friction for Wolverine users taking a modular monolith approach to event driven architecture
  • A huge update to the Message Routing documentation to reflect some new features and existing diagnostics

And of course, the full list of closed issues addressed by this release.

As a little sneak peek from the documentation, what if you could write HTTP endpoints as just a simple little pure function like this:

// Use "Id" as the default member
[WolverinePost("/api/todo/update")]
public static Update<Todo2> Handle(
    // The first argument is always the incoming message
    RenameTodo command, 
    
    // By using this attribute, we're telling Wolverine
    // to load the Todo entity from the configured
    // persistence of the app using a member on the
    // incoming message type
    [Entity] Todo2 todo)
{
    // Do your actual business logic
    todo.Name = command.Name;
    
    // Tell Wolverine that you want this entity
    // updated in persistence
    return Storage.Update(todo);
}

In the code above, the little method tries to load an entity from the application’s persistence tooling (EF Core, Marten, and RavenDb are supported so far) because of the [Entity] attribute, and the return value of Update<Todo2> will result in the Todo2 entity being updated by the same persistence tooling. That’s arguably an easy method to read and reason about, it was definitely easy to write, it’s easy to unit test, and didn’t require umpteen separate “Clean/Onion Architecture” projects and layers to get to testable code that isn’t directly coupled to infrastructure.
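
And just to hammer on the testability point, a unit test for that endpoint is a plain old method call. A minimal sketch, assuming xUnit, assuming the endpoint method lives in a class I’ll call TodoEndpoint, and assuming RenameTodo is a record of the entity id and the new name (all assumptions for illustration):

[Fact]
public void renaming_a_todo()
{
    var todo = new Todo2 { Id = Guid.NewGuid(), Name = "Mow the lawn" };

    // Call the endpoint method directly, with no infrastructure in sight
    var action = TodoEndpoint.Handle(new RenameTodo(todo.Id, "Mow the neighbor's lawn"), todo);

    // The business logic mutated the entity...
    Assert.Equal("Mow the neighbor's lawn", todo.Name);

    // ...and the handler returned a storage side effect for Wolverine to apply
    Assert.NotNull(action);
}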

Critter Stack Roadmap for 2025

A belated Happy New Year’s to everybody!

The “Critter Stack” had a huge 2024, and I listed off some of the highlights of the improvements we made in Critter Stack Year in Review for 2024. For 2025, we’ve reordered our priorities from what I was writing last summer. I think we might genuinely focus more on sample applications, tutorials, and videos early this year than we do on coding new features.

There’s also a separate post on JasperFx Software in 2025. Please do remember that JasperFx Software is available for either ongoing support contracts for Marten and/or Wolverine and consulting engagements to help you wring the most possible value out of the tools — or to just help you with any old server side .NET architecture you have.

Marten

At this point, I believe that Marten is by far and away the most robust and most productive tooling for Event Sourcing in the .NET ecosystem. Moreover, if you believe Nuget download numbers, it’s also the most heavily used Event Sourcing tooling in .NET. I think most of the potential growth for Marten this year will simply be a result of developers hopefully being more open to using Event Sourcing as that technique becomes better known. I don’t have hard numbers to back this up, but my feeling is that Marten’s main competitor is shops choosing to roll their own Event Sourcing frameworks in house rather than any other specific tool.

  • I think we’re putting off the planned Marten 8.0 release for now. Instead, we’ll mostly be focused on dealing with whatever issues come up from our users and JasperFx clients with Marten 7 for the time being.
  • Babu is working on adding a formal “Crypto Shredding” feature to Marten 7
  • More sample applications and matching tutorials for Marten
  • Possibly adding a “Marten Events to EF Core” projection model?
  • Formal support for PostgreSQL PostGIS spatial data? I don’t know what that means yet though
  • When we’re able to reconsider Marten 8 this year, that will include:
    • A reorganization of the JasperFx building blocks to remove duplication between Marten, Wolverine, and other tools
    • Streamlining the Projection API
    • Yet more scalability and performance improvements to the async daemon. There are some potential features that we’re discussing with JasperFx clients that might drive this work

After the insane pace of Marten changes we made last year, I see Marten development and the torrid pace of releases (hopefully) slowing quite a bit in 2025.

Wolverine

Wolverine doesn’t yet have anywhere near the usage of Marten and exists in a much more crowded tooling space to boot. I’m hopeful that we can greatly increase Wolverine usage in 2025 by further differentiating it from its competitor tools by focusing on how Wolverine allows teams to write backend systems with much lower ceremony code without sacrificing testability, robustness, or maintainability.

We’re shelving any thoughts about a Wolverine 4.0 release early this year, but that’s opened the flood gates for planned enhancements to Wolverine 3.*:

  • Wolverine 3.6 is heavily in flight for release this month, and will be a pretty large release bringing some needed improvements for Wolverine within “Modular Monolith” usage, yet more special sauce for lower “Vertical Slice Architecture” usage, enhancements to the “aggregate handler workflow” integration with Marten, and improved EF Core integration
  • Multi-Tenancy support for EF Core in line with what Wolverine can already do with its Marten integration
  • CosmosDb integration for Transactional Inbox/Outbox support, saga storage, transactional middleware
  • More options for runtime message routing
  • Authoring more sample applications to show off how Wolverine allows for a different coding model than other messaging or mediator or HTTP endpoint tools

I think there’s a lot of untapped potential for Wolverine, and I’ll personally be focused on growing its usage in the community this year. I’m hoping the better EF Core integration, having more database options, and maybe even yet more messaging options can help us grow.

I honestly don’t know what is going to happen with Wolverine & Aspire. Aspire doesn’t really play nicely with frameworks like Wolverine right now, and I think it would take custom Wolverine/Aspire adapter libraries to get a truly good experience. My strong preference right now is to just use Docker Compose for local development, but it’s Microsoft’s world and folks like me building OSS tools just have to live in it.

Ermine & Other New Critters

Sigh, “Ermine” is the code name for a long planned port of Marten’s event sourcing functionality to Sql Server. I would still love to see this happen in 2025, but it’s going to be pushed off for a little bit. With plenty of input from other Marten contributors, I’ve done some preliminary work trying to centralize plenty of Marten’s event sourcing internals to a potentially shared assembly.

We’ve also at least considered extending Marten’s style of event sourcing to other databases, with CosmosDb, RavenDb, DynamoDb, SQLite, and Oracle (people still use it apparently) being kicked around as options.

“Critter Watch”

This is really a JasperFx Software initiative to create a commercial tool that will be a dedicated management portal and performance monitoring tool (meant to be used in conjunction with Grafana/Prometheus/et al) for the “Critter Stack”. I’ll share concrete details of this when there are some, but Babu & I plan to be working in earnest on “Critter Watch” in the 1st quarter.

Note about Blogging

I’m planning to blog much less in the coming year and focus more on either writing more robust tutorials or samples within technical documentation sites and finally joining the modern world and moving to YouTube or Twitch video content creation.

Critter Stack Year in Review for 2024

Just for fun, here’s what I wrote as the My Technical Plans and Aspirations for 2024 detailing what I had hoped to accomplish this year.

While there’s still just a handful of technical deliverables I’m trying to get out in this calendar year, I’m admittedly running on mental fumes rolling into the holiday season. Thinking back about how much was delivered for the “Critter Stack” (Marten, Weasel, and Wolverine) this year is making me feel a lot better about giving myself some mental recharge time during the holidays. Happily for me, most of the advances in the Critter Stack this year were either from the community (i.e., not me) or done in collaboration and with the sponsorship of JasperFx Software customers for their systems.

The biggest highlights and major releases were Marten 7.0 and Wolverine 3.0.


Performance and Scalability

  • Marten 7.0 brought a new “partial update” model based on native PostgreSQL functions that no longer required the PLv8 add on. Hat tip to Babu Annamalai for that feature!
  • The very basic database execution pipeline underneath Marten was largely rewritten to be far more parsimonious with how it uses database connections and to take advantage of more efficient Npgsql usage. That included using the very latest improvements to Npgsql for batching queries and moving to positional parameters instead of named parameters. Small ball optimizations for sure, but being more parsimonious with connections has been advantageous
  • Marten’s “quick append” model sacrifices a little bit of metadata tracking for a whole lot of throughput improvements (we’ve measured a 50% improvement) when appending events. This mode will be a default in Marten 8. This also helps stabilize “event skipping” in the async daemon under heavy loads. I think this was a big win that we need to broadcast more
  • Random optimizations in the “inline projection” model in Marten to reduce database round trips
  • Using PostgreSQL Read Replicas in Marten. Hat tip to JT.
  • First class support for PostgreSQL table partitioning in Marten. Long planned and requested, finally got here. Still admittedly shaking out some database migration issues with this though.
  • Performance optimizations for CQRS command handlers where you want to fetch the final state of a projected aggregate that has been “advanced” as part of the command handler. Mostly in Marten, but there’s a helper in Wolverine too.

Resiliency

Multi Tenancy

Multi-tenancy has been maybe the biggest single source of client requests for JasperFx Software this year. You can hear about some of that in a recent video conversation I got to do with Derek Comartin.

Complex Workflows

I’m probably way too sloppy or at least not being precise about the differences between stateful sagas and process managers and tend to call any stateful, long lived workflow a “saga”. I’m not losing any sleep over that.

“Day 2” Improvements

By “Day 2” I just mean features for production support like instrumentation, database migrations, or event versioning.

Options for Querying

  • Marten 7.0 brought a near rewrite of Marten’s LINQ subsystem that closed a lot of gaps in functionality that we previously had. It also spawned plenty of regression bugs that we’ve had to address in the meantime, but the frequency of LINQ related issues has dramatically fallen
  • Marten got another, more flexible option for the specification pattern. I.e., we don’t need no stinkin’ repositories here!
  • There were quite a few improvements to Marten’s ability to allow you to use explicit SQL as a replacement or supplement to LINQ from the community

Messaging Improvements

This is mostly Wolverine related.

  • A new PostgreSQL backed messaging transport
  • Strictly ordered queuing options in Wolverine
  • “Sticky” message listeners so that only one node in a cluster listens to a certain messaging endpoint. This is super helpful for processes that are stateful. This also helps for multi-tenancy.
  • Wolverine got a GCP Pubsub transport
  • And we finally released the Pulsar transport
  • Way more options for Rabbit MQ conventional message routing
  • Rabbit MQ header exchange support

Test Automation Support

Hey, the “Critter Stack” community takes testability, test automation, and TDD very seriously. To that end, we’ve invested a lot into test automation helpers this year.

Strong Typed Identifiers

Despite all my griping along the way and frankly threatening bodily harm to the authors of some of the most popular libraries for strong typed identifiers, Marten has gotten a lot of first class support for strong typed identifiers in both the document database and event store features. There will surely be more to come because it’s a permutation hell problem where people stumble into yet more scenarios with these damn things.

But whatever, we finally have it. And quite a bit of the most time consuming parts of that work has been de facto paid for by JasperFx clients, which takes a lot of the salt out of the wound for me!
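
For the curious, here’s a rough sketch of what that can look like with a Vogen-defined identifier, which is one of the strong typed id styles Marten recognizes (the Invoice document here is just an example of mine):

// A Vogen value object wrapping a Guid
[ValueObject<Guid>]
public readonly partial struct InvoiceId;

// Marten can use the strong typed id as the document identity
public class Invoice
{
    public InvoiceId Id { get; set; }
    public decimal Amount { get; set; }
}

// Usage with an IDocumentSession would then be roughly:
//     var invoice = new Invoice { Id = InvoiceId.From(Guid.NewGuid()), Amount = 100m };
//     session.Store(invoice);
//     await session.SaveChangesAsync();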

Modular Monolith Usage

This is going to be a major area of improvement for Wolverine here at the tail end of the year because suddenly everybody and their little brother wants to use this architectural pattern in ways that aren’t yet great with Wolverine.

Other Cool New Features

There were actually quite a few more refinements made to both tools, but I’ve exhausted the time I allotted myself to write this, so let’s wrap up.

Summary

Last January I wrote that an aspiration for 2024 was to:

Continue to push Marten & Wolverine to be the best possible technical platform for building event driven architectures

At this point I believe that the “Critter Stack” is already the best set of technical tooling in the .NET ecosystem for building a system using an Event Driven Architecture, especially if Event Sourcing is a significant part of your persistence strategy. There are other messaging frameworks that have more messaging options, but Wolverine already does vastly more to help you productively write code that’s testable, resilient, easier to reason about, and well instrumented than older messaging tools in the .NET space. Likewise, Wolverine.HTTP is the lowest ceremony coding model for ASP.Net Core web service development, and the only one that has a first class transactional outbox integration. In terms of just Event Sourcing, I do not believe that Marten has any technical peer in the .NET ecosystem.

But of course there are plenty of things we can do better, and we’re not standing still in 2025 by any means. After some rest, I’ll pop back in January with some aspirations and theoretical roadmap for the “Critter Stack” in 2025. Details then, but expect that to include more database options and yes, long simmering plans for commercialization. And the overarching technical goal in 2025 for the “Critter Stack” is to be the best technical platform on the planet for Event Driven Architecture development.

Build Resilient Systems with Wolverine’s Transactional Outbox

JasperFx Software is completely open for business to help you get the best possible results with the “Critter Stack” tools or really any type of server side .NET development efforts. A lot of what I’m writing about is inspired by work we’ve done with our ongoing clients.

I think I’m at the point where I’ll flat out say that leaning on asynchronous messaging is the best way to create truly resilient back end systems. And when I say “resilient” here, I mean the system is best able to recover from errors it encounters at runtime, from performance degradation, or even from subsystems being down, and still function without human intervention. A system incorporating asynchronous messaging and at least some communication through queues can apply retry policies for errors and utilize patterns like circuit breakers or dead letter queues to avoid losing in flight work.

There’s more to this of course, like:

  • Being able to make finer grained error handling policies around individual steps
  • Dead letter queues and replay of messages
  • Not having “temporal coupling” between systems or subsystems
  • Back pressure mechanics
  • Even maybe being able to better reason about the logical processing steps in an asynchronous model with formal messaging as opposed to just really deep call stacks in purely synchronous code

Wolverine certainly comes with a full range of messaging options and error handling options for resiliency, but a key feature that does lead to Wolverine adoption is its support for the transactional outbox (and inbox) pattern.
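
Just to give a flavor of those error handling options, here’s a small sketch of the kind of retry and dead letter policies you can declare with Wolverine. The exception types and timings are arbitrary choices of mine for illustration:

builder.Host.UseWolverine(opts =>
{
    // Retry transient failures a few times with increasing cooldowns
    opts.OnException<TimeoutException>()
        .RetryWithCooldown(50.Milliseconds(), 250.Milliseconds(), 1.Seconds());

    // Treat this one as a "poison" message and move it straight
    // to the dead letter queue
    opts.OnException<InvalidOperationException>()
        .MoveToErrorQueue();
});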

What’s the Transactional Outbox all about?

The transactional outbox pattern is an important part of your design pattern toolkit for almost any type of backend system that involves both database persistence and asynchronous work or asynchronous messaging. If you’re not already familiar with the pattern, just consider this message handler (using Wolverine) from a banking system that uses both Wolverine’s transactional middleware and transactional outbox integration (with Marten and PostgreSQL):

public static Task<Account> LoadAsync(IDocumentSession session, DebitAccount command)
    => session.LoadAsync<Account>(command.AccountId);

[Transactional]
public static async Task Handle(
    DebitAccount command,
    Account account,
    IDocumentSession session,
    IMessageContext messaging)
{
    account.Balance -= command.Amount;

    // This just marks the account as changed, but
    // doesn't actually commit changes to the database
    // yet. That actually matters as I hopefully explain
    session.Store(account);

    // Conditionally trigger other, cascading messages
    if (account.Balance > 0 && account.Balance < account.MinimumThreshold)
    {
        await messaging.SendAsync(new LowBalanceDetected(account.Id));
    }
    else if (account.Balance < 0)
    {
        await messaging.SendAsync(new AccountOverdrawn(account.Id), new DeliveryOptions{DeliverWithin = 1.Hours()});

        // Give the customer 10 days to deal with the overdrawn account
        await messaging.ScheduleAsync(new EnforceAccountOverdrawnDeadline(account.Id), 10.Days());
    }

    // "messaging" is a Wolverine IMessageContext or IMessageBus service
    // Do the deliver within rule on individual messages
    await messaging.SendAsync(new AccountUpdated(account.Id, account.Balance),
        new DeliveryOptions { DeliverWithin = 5.Seconds() });
}

You’ll notice up above that the handler both:

  1. Modifies a banking account based on the command and persists those changes to the database
  2. Potentially sends out messages in regard to that account

What the “outbox” is doing for us around this message handler is guaranteeing that:

  • The outgoing messages I registered with the IMessageBus service above are only actually sent to messaging brokers or local queues after the database transaction succeeds. Think of the messaging outbox as kind of queueing the outgoing messages as part of your unit of work (which is really implemented by the Marten IDocumentSession up above).
  • The outgoing messages are actually persisted to the same database as the account data as part of a native database transaction
  • As part of a background process, the Wolverine outbox subsystem will make sure the message gets recovered and sent even if — and I hate to tell you, but this absolutely does happen in the real world — the running process somehow shuts down unexpectedly between the database transaction succeeding and the messages actually getting successfully sent through local Wolverine queues or remotely sent through messaging brokers like Rabbit MQ or Azure Service Bus.
  • Also as part of the background processing, Wolverine’s outbox is also making sure that persisted, outgoing messages really do get sent out eventually in the case of the messaging broker being temporarily unavailable or network issues — and this is 100% something that actually happens in production, so the ability to recover messages is an awfully important feature for building robust systems.

To sum things up, a good implementation of the transactional outbox pattern in your system can be a great way to make your system more resilient and “self heal” in the face of inevitable problems in production. Just as important, the usage of a transactional outbox can do a lot to prevent subtle race condition bugs at runtime from messages getting processed against inconsistent database state before database transactions have completed — and folks, this also absolutely happens in real systems. Ask me how I know :-)

Alright, now that we’ve established what it is, let’s look at some ways in which Wolverine makes its transactional outbox easy to adopt and use. We’ll show a simpler version of the message handler above, but we’ll have to introduce a few more Wolverine concepts first.

Setting up the Outbox in Wolverine

If you are using the full “Critter Stack” combination of Marten + Wolverine, you just add both Marten & Wolverine to your application and tie them together with the IntegrateWithWolverine() call from the WolverineFx.Marten Nuget as shown below:

var builder = WebApplication.CreateBuilder(args);

// Adds in some command line diagnostics
builder.Host.ApplyOaktonExtensions();

builder.Services.AddAuthentication("Test");
builder.Services.AddAuthorization();

builder.Services.AddMarten(opts =>
    {
        // You always have to tell Marten what the connection string to the underlying
        // PostgreSQL database is, but this is the only mandatory piece of 
        // configuration
        var connectionString = builder.Configuration.GetConnectionString("postgres");
        opts.Connection(connectionString);
    })
    // This adds middleware support for Marten as well as the 
    // transactional middleware support we'll introduce in a little bit...
    .IntegrateWithWolverine();

builder.Host.UseWolverine();

That does of course require some PostgreSQL tables for the Wolverine outbox storage to function, but Wolverine in this case is able to pull the connection and schema information (the schema can be overridden if you choose) from its Marten integration. In normal development mode, Wolverine — like Marten — is able to apply database migrations itself on the fly so you can just work.

Switching to the SQL Server and EF Core combination with Wolverine, you have this setup:

var builder = WebApplication.CreateBuilder(args);

// Just the normal work to get the connection string out of
// application configuration
var connectionString = builder.Configuration.GetConnectionString("sqlserver");

// If you're okay with this, this will register the DbContext as normally,
// but make some Wolverine specific optimizations at the same time
builder.Services.AddDbContextWithWolverineIntegration<ItemsDbContext>(
    x => x.UseSqlServer(connectionString), "wolverine");

// Add DbContext that is not integrated with outbox
builder.Services.AddDbContext<ItemsDbContextWithoutOutbox>(
    x => x.UseSqlServer(connectionString));

builder.Host.UseWolverine(opts =>
{
    // Setting up Sql Server-backed message storage
    // This requires a reference to Wolverine.SqlServer
    opts.PersistMessagesWithSqlServer(connectionString, "wolverine");

    // Set up Entity Framework Core as the support
    // for Wolverine's transactional middleware
    opts.UseEntityFrameworkCoreTransactions();

    // Enrolling all local queues into the
    // durable inbox/outbox processing
    opts.Policies.UseDurableLocalQueues();
});

Likewise, Wolverine is able to build the necessary schema objects for SQL Server on application startup so that the outbox integration “just works” in local development or testing environments. I should note that in all cases, Wolverine provides command line tools to export SQL scripts for these schema objects that you could use within database migration tools like Grate.

Outbox Usage within Message Handlers

Honestly, just to show a lower ceremony version of a Wolverine handler, let’s take the message handler from up above and use Wolverine’s “cascading message” capability to express the same logic for choosing which messages to send out, as well as expressing the database operation.

Before I show the handler, let me call out a couple things first:

  • Wolverine has an “auto transaction” middleware policy you can opt into to apply transaction handling for Marten, EF Core, or RavenDb around your handler code. This is helpful to keep your handler code simpler and often to allow you to write synchronous code
  • The “outbox” sending kicks in with any messages sent to an endpoint (local queue, Rabbit MQ exchange, AWS SQS queue, Kafka topic) that is configured as “durable” in Wolverine (see the sketch just after this list). You can read more about Wolverine message routing here. Do know though that within any application or even within a single handler, you can mix and match durable routes with “fire and forget” endpoints as desired.
  • There’s another concept in Wolverine called “side effects” that I’m going to use just to say “I want this document stored as part of this logical transaction.” It’s yet another thing in Wolverine’s bag of tricks to help you write pure functions for message handlers as a way to maximize the testability of your application code.
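
Here’s a small sketch of what opting endpoints into that durable, outbox-backed sending can look like. The queue name and message type are assumptions for illustration:

builder.Host.UseWolverine(opts =>
{
    // Connect to Rabbit MQ with default (local) settings
    opts.UseRabbitMq().AutoProvision();

    // Every local queue participates in the durable inbox/outbox
    opts.Policies.UseDurableLocalQueues();

    // Make one specific outgoing subscription durable as well
    opts.PublishMessage<AccountUpdated>()
        .ToRabbitQueue("account-updates")
        .UseDurableOutbox();
});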

This time, we’re going to write a pure function for the handler:

public static class DebitAccountHandler
{
    public static Task<Account> LoadAsync(IDocumentSession session, DebitAccount command)
        => session.LoadAsync<Account>(command.AccountId);
    
    public static (IMartenOp, OutgoingMessages) Handle(
        DebitAccount command,
        Account account)
    {
        account.Balance -= command.Amount;

        // This just tracks outgoing, or "cascading" messages
        var messages = new OutgoingMessages();

        // Conditionally trigger other, cascading messages
        if (account.Balance > 0 && account.Balance < account.MinimumThreshold)
        {
            messages.Add(new LowBalanceDetected(account.Id));
        }
        else if (account.Balance < 0)
        {
            messages.Add(new AccountOverdrawn(account.Id), new DeliveryOptions{DeliverWithin = 1.Hours()});

            // Give the customer 10 days to deal with the overdrawn account
            messages.Delay(new EnforceAccountOverdrawnDeadline(account.Id), 10.Days());
        }

        // Do the deliver within rule on individual messages
        messages.Add(new AccountUpdated(account.Id, account.Balance),
            new DeliveryOptions { DeliverWithin = 5.Seconds() });

        return (MartenOps.Store(account), messages);
    }
}

When Wolverine executes the DebitAccount command, it’s trying to commit a single database transaction containing both the persisted Account entity and any outgoing messages in that OutgoingMessages collection that are routed to a durable Wolverine endpoint. When the transaction succeeds, Wolverine “releases” the outgoing messages to the sending agents within the application, and the persisted message data gets deleted from the database once Wolverine is able to successfully send each message.

Outbox Usage within MVC Core Controllers

Like all messaging frameworks in the .NET space that I’m aware of, the transactional outbox mechanics are pretty well transparent within message handler code. More recently though, the .NET ecosystem has (finally) caught up with the need to expose transactional outbox mechanics outside of a message handler.

A very common use case is needing to both make database writes and trigger asynchronous work through messages from HTTP web services. For this example, let’s assume the usage of MVC Core Controller classes, but the mechanics I’m showing are similar for Minimal API or other alternative endpoint models in the ASP.Net Core ecosystem.

Assuming the usage of Marten + Wolverine, you can send messages with an outbox through the IMartenOutbox service that somewhat wraps the two tools together like this:

    [HttpPost("/orders/itemready")]
    public async Task Post(
        [FromBody] MarkItemReady command,
        [FromServices] IDocumentSession session,
        [FromServices] IMartenOutbox outbox
    )
    {
        // This is important!
        outbox.Enroll(session);

        // Fetch the current value of the Order aggregate
        var stream = await session
            .Events

            // We're also opting into Marten optimistic concurrency checks here
            .FetchForWriting<Order>(command.OrderId, command.Version);

        var order = stream.Aggregate;

        if (order.Items.TryGetValue(command.ItemName, out var item))
        {
            item.Ready = true;

            // Mark that this item is ready
            stream.AppendOne(new ItemReady(command.ItemName));
        }
        else
        {
            // Some crude validation
            throw new InvalidOperationException($"Item {command.ItemName} does not exist in this order");
        }

        // If the order is ready to ship, also emit an OrderReady event
        if (order.IsReadyToShip())
        {
            // Publish a cascading command to do whatever it takes
            // to actually ship the order
            // Note that because the context here is enrolled in a Wolverine
            // outbox, the message is registered, but not "released" to
            // be sent out until SaveChangesAsync() is called down below
            await outbox.PublishAsync(new ShipOrder(command.OrderId));
            stream.AppendOne(new OrderReady());
        }

        // This will also persist and flush out any outgoing messages
        // registered into the context outbox
        await session.SaveChangesAsync();
    }

With EF Core + Wolverine, it’s similar, but just a touch more ceremony using IDbContextOutbox<T> as a convenience wrapper around an EF Core DbContext:

    [HttpPost("/items/create2")]
    public async Task Post(
        [FromBody] CreateItemCommand command,
        [FromServices] IDbContextOutbox<ItemsDbContext> outbox)
    {
        // Create a new Item entity
        var item = new Item
        {
            Name = command.Name
        };

        // Add the item to the current
        // DbContext unit of work
        outbox.DbContext.Items.Add(item);

        // Publish a message to take action on the new item
        // in a background thread
        await outbox.PublishAsync(new ItemCreated
        {
            Id = item.Id
        });

        // Commit all changes and flush persisted messages
        // to the persistent outbox
        // in the correct order
        await outbox.SaveChangesAndFlushMessagesAsync();
    }

I personally think the usage of the outbox outside of Wolverine message handlers is a little bit more awkward than I’d ideally prefer (I also feel this way about the NServiceBus or MassTransit equivalents of this usage, but it’s nice that both of those tools do have this important functionality too), so let’s introduce Wolverine’s HTTP endpoint model to write lower ceremony code while still opting into outbox mechanics from web services.

Outbox Usage within Wolverine HTTP

This is beyond annoying, but the libraries and namespaces in Wolverine are all named “Wolverine.*”, while the Nuget packages are named “WolverineFx.*” because some clown is squatting on the “Wolverine” name in Nuget and we didn’t realize that until it was too late and we’d committed to the project name. Grr.

Wolverine also has an add on model in the WolverineFx.Http Nuget that allows you to use the basics of the Wolverine runtime execution model for HTTP services. One of the advantages of Wolverine.HTTP endpoints is the same kind of pure function model as the message handlers that I believe to be a much lower ceremony programming model than MVC Core or even Minimal API.

Maybe more valuable though, Wolverine.HTTP endpoints support the exact same transactional middleware and outbox integration as the message handlers. That also allows us to use “cascading messages” to publish messages out of our HTTP endpoint handlers without having to deal with asynchronous code or injecting IoC services. Just plain old pure functions in many cases like so:

public static class TodoCreationEndpoint
{
    [WolverinePost("/todoitems")]
    public static (TodoCreationResponse, TodoCreated) Post(CreateTodo command, IDocumentSession session)
    {
        var todo = new Todo { Name = command.Name };

        // Just telling Marten that there's a new entity to persist,
        // but I'm assuming that the transactional middleware in Wolverine is
        // handling the asynchronous persistence outside of this handler
        session.Store(todo);

        // By Wolverine.Http conventions, the first "return value" is always
        // assumed to be the Http response, and any subsequent values are
        // handled independently
        return (
            new TodoCreationResponse(todo.Id),
            new TodoCreated(todo.Id)
        );
    }
}

The Wolverine.HTTP model gives us a way to build HTTP endpoints with Wolverine’s typical, low ceremony coding model (most of the OpenAPI metadata can be gleaned from the method signatures of the endpoints, further obviating the need for repetitive ceremony code that so frequently litters MVC Core code) with easy usage of Wolverine’s transactional outbox.

I should also point out that even if you aren’t using any kind of message storage or durable endpoints, Wolverine will not actually send messages until any database transaction has completed successfully. Think of this as a non-durable, in memory outbox built into your HTTP endpoints.

Summary

The transactional outbox pattern is a valuable tool for helping create resilient systems, and Wolverine makes it easy to use within your system code. I’m frequently working with clients who aren’t utilizing a transactional outbox even when they’re using asynchronous work or trying to cascade work as “domain events” published from other transactions. It’s something I always call out when I see it, but it’s frequently hard to introduce all new infrastructure in existing projects or within tight timelines — and let’s be honest, timelines are always tight.

I think my advice is to be aware of this need upfront when you are picking out the technologies you’re going to use as the foundation for your architecture. To be blunt, a lot of shops I think are naively opting into MediatR as a core tool without realizing the important functionality it is completely missing in order to build a resilient system — like a transactional outbox. You can, and many people do, complement MediatR with a real messaging tool like MassTransit.

Instead, you could just use Wolverine, which basically does both “mediator” and asynchronous messaging with one programming model of handlers, and does so with a potentially lower ceremony and higher productivity coding model than any of those other tools in .NET.

Message Broker per Tenant with Wolverine

The new feature shown in this post was built by JasperFx Software as part of a client engagement. This is exactly the kind of novel or challenging issue we frequently help our clients solve. If there’s something in your shop’s ongoing efforts where you could use some extra technical help, reach out to sales@jasperfx.net and we’ll be happy to talk with you.

Wolverine 3.4 was released today with a large new feature for multi-tenancy through asynchronous messaging. This feature set was envisioned for usage in an IoT system using the full “Critter Stack” (Marten and Wolverine) where “our system” is centralized in the cloud, but has to communicate asynchronously with physical devices deployed at different client sites.

The system in question already uses Marten’s support for separating per tenant information into separate PostgreSQL databases, and Wolverine itself works with Marten’s multi-tenancy to make that a seamless process within Wolverine messaging workflows. All of that already quite robust support was envisioned to be running within either HTTP web services or asynchronous messaging workflows completely controlled by the deployed application and its peer services. What’s new with Wolverine 3.4 is the ability to isolate the communication between remote client (tenant) devices and the centralized, cloud deployed “our system.”

We can isolate the traffic between each client site and our system by using a separate Rabbit MQ broker, or at least a separate virtual host, per tenant as implied in the code sample from the docs below:

var builder = Host.CreateApplicationBuilder();

builder.UseWolverine(opts =>
{
    // At this point, you still have to have a *default* broker connection to be used for 
    // messaging. 
    opts.UseRabbitMq(new Uri(builder.Configuration.GetConnectionString("main")))
        
        // This will be respected across *all* the tenant specific
        // virtual hosts and separate broker connections
        .AutoProvision()

        // This is the default, if there is no tenant id on an outgoing message,
        // use the default broker
        .TenantIdBehavior(TenantedIdBehavior.FallbackToDefault)

        // Or tell Wolverine instead to just quietly ignore messages sent
        // to unrecognized tenant ids
        .TenantIdBehavior(TenantedIdBehavior.IgnoreUnknownTenants)

        // Or be draconian and make Wolverine assert and throw an exception
        // if an outgoing message does not have a tenant id
        .TenantIdBehavior(TenantedIdBehavior.TenantIdRequired)

        // Add specific tenants for separate virtual host names
        // on the same broker as the default connection
        .AddTenant("one", "vh1")
        .AddTenant("two", "vh2")
        .AddTenant("three", "vh3")

        // Or, you can add a broker connection to something completely
        // different for a tenant
        .AddTenant("four", new Uri(builder.Configuration.GetConnectionString("rabbit_four")));

    // This Wolverine application would be listening to a queue
    // named "incoming" on all virtual hosts and/or tenant specific message
    // brokers
    opts.ListenToRabbitQueue("incoming");

    opts.ListenToRabbitQueue("incoming_global")
        
        // This opts this queue out from being per-tenant, such that
        // there will only be the single "incoming_global" queue for the default
        // broker connection
        .GlobalListener();

    // More on this in the docs....
    opts.PublishMessage<Message1>()
        .ToRabbitQueue("outgoing").GlobalSender();
});

With this solution, we now have a “global” Rabbit MQ broker we can use for all internal communication or queueing within “our system”, and a separate Rabbit MQ virtual host for each tenant. At runtime, when a message tagged with a tenant id is published out of “our system” to a “per tenant” queue or exchange, Wolverine is able to route it to the correct virtual host for that tenant id. Likewise, Wolverine is listening to the queue named “incoming” on each virtual host (plus the global one), and automatically tags messages coming from the per tenant virtual host queues with the correct tenant id to facilitate the full Marten/Wolverine workflow downstream as the incoming messages are handled.
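
On the sending side, tagging outgoing messages with a tenant id is just a matter of delivery options or the tenant-aware invocation methods on IMessageBus. The sketch below assumes the tenant ids registered in the sample above and a made-up DeviceCommand message type:

// Just a stand-in message type for this sketch
public record DeviceCommand(string Name);

public static async Task publish_for_tenant(IMessageBus bus)
{
    // Tag an outgoing message with a tenant id so Wolverine routes it
    // to that tenant's virtual host or broker connection
    await bus.PublishAsync(
        new DeviceCommand("reboot"),
        new DeliveryOptions { TenantId = "one" });

    // Or execute a message inline within the context of a specific tenant
    await bus.InvokeForTenantAsync("two", new DeviceCommand("reboot"));
}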

Now, let’s switch it up and use Azure Service Bus instead to basically do the same thing. This time though, we can register additional tenants to use a separate Azure Service Bus fully qualified namespace or connection string:

var builder = Host.CreateApplicationBuilder();

builder.UseWolverine(opts =>
{
    // One way or another, you're probably pulling the Azure Service Bus
    // connection string out of configuration
    var azureServiceBusConnectionString = builder
        .Configuration
        .GetConnectionString("azure-service-bus");

    // Connect to the broker in the simplest possible way
    opts.UseAzureServiceBus(azureServiceBusConnectionString)

        // This is the default, if there is no tenant id on an outgoing message,
        // use the default broker
        .TenantIdBehavior(TenantedIdBehavior.FallbackToDefault)

        // Or tell Wolverine instead to just quietly ignore messages sent
        // to unrecognized tenant ids
        .TenantIdBehavior(TenantedIdBehavior.IgnoreUnknownTenants)

        // Or be draconian and make Wolverine assert and throw an exception
        // if an outgoing message does not have a tenant id
        .TenantIdBehavior(TenantedIdBehavior.TenantIdRequired)

        // Add new tenants by registering the tenant id and a separate fully qualified namespace
        // to a different Azure Service Bus connection
        .AddTenantByNamespace("one", builder.Configuration.GetValue<string>("asb_ns_one"))
        .AddTenantByNamespace("two", builder.Configuration.GetValue<string>("asb_ns_two"))
        .AddTenantByNamespace("three", builder.Configuration.GetValue<string>("asb_ns_three"))

        // OR, instead, add tenants by registering the tenant id and a separate connection string
        // to a different Azure Service Bus connection
        .AddTenantByConnectionString("four", builder.Configuration.GetConnectionString("asb_four"))
        .AddTenantByConnectionString("five", builder.Configuration.GetConnectionString("asb_five"))
        .AddTenantByConnectionString("six", builder.Configuration.GetConnectionString("asb_six"));
    
    // This Wolverine application would be listening to a queue
    // named "incoming" on all Azure Service Bus connections, including the default
    opts.ListenToAzureServiceBusQueue("incoming");

    // This Wolverine application would listen to a single queue
    // at the default connection regardless of tenant
    opts.ListenToAzureServiceBusQueue("incoming_global")
        .GlobalListener();
    
    // Likewise, you can override the queue, subscription, and topic behavior
    // to be "global" for all tenants with this syntax:
    opts.PublishMessage<Message1>()
        .ToAzureServiceBusQueue("message1")
        .GlobalSender();

    opts.PublishMessage<Message2>()
        .ToAzureServiceBusTopic("message2")
        .GlobalSender();
});

This is a lot to take in, but the major point is to keep client messages completely separate from each other while also enabling the seamless usage of multi-tenanted workflows all the way through the Wolverine & Marten pipeline. Even as we deal with the inevitable teething pains, the hope is that the behavioral code within the Wolverine message handlers never has to be concerned with any kind of per-tenant bookkeeping. For more information, see the Wolverine documentation.

And as I typed all of that out, I do fully realize that there would be some value in having a comprehensive “Multi-Tenancy with the Critter Stack” guide in one place.

Summary

I honestly don’t know whether this feature set will get a lot of usage, but it came out of what’s been a very productive collaboration with JasperFx’s original customer as we’ve worked together on their IoT system. Quite a few improvements to Wolverine have come about as a direct reaction to friction or opportunities we’ve spotted during that collaboration.

As far as multi-tenancy goes, I think the challenge for the Critter Stack toolset has been to give our users all the power they need to keep data, and now messaging, completely separate across tenants while relentlessly removing repetitive code ceremony and usability issues. My personal philosophy is that lower ceremony code is an important enabler of successful software development efforts over time.

Messaging with Wolverine using Apache Pulsar

As part of the Wolverine 3.0 release a couple weeks back, Wolverine gained a lightweight messaging transport option with Apache Pulsar.

“Lightweight” here just means “it doesn’t have a lot of features yet.”

To get started, first add this NuGet package to your system:

dotnet add package WolverineFx.Pulsar

And just like that, you’re ready to start adding publishing rules and subscriptions to Pulsar topics in a very idiomatic Wolverine way:

var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
    opts.UsePulsar(c =>
    {
        var pulsarUri = builder.Configuration.GetValue<Uri>("pulsar");
        c.ServiceUrl(pulsarUri);
        
        // Any other configuration you want to apply to your
        // Pulsar client
    });

    // Publish messages to a particular Pulsar topic
    opts.PublishMessage<Message1>()
        .ToPulsarTopic("persistent://public/default/one")
        
        // And all the normal Wolverine options...
        .SendInline();

    // Listen for incoming messages from a Pulsar topic
    opts.ListenToPulsarTopic("persistent://public/default/two")
        
        // And all the normal Wolverine options...
        .Sequential();
});

It’s a minimal implementation for now (no conventional routing topology, for example), but we’ll happily enhance this transport option if there’s interest. To be honest, the Pulsar transport has been hanging out inside the Wolverine codebase for years but never got released for whatever reason. Someone asked about it a while back, so here we go!

Assuming that the US still exists tomorrow and I’m not trying to move my family to Canada, I’ll follow up with Wolverine’s new, fully robust transport option for Google Cloud Pub/Sub.

Network Round Trips are Evil, So Batch Your Queries When You Can

JasperFx Software frequently helps our customers wring better performance or scalability out of their systems. A somewhat frequent opportunity for improving the responsiveness and throughput of systems is merely identifying ways to batch up requests from middle tier, server side code to the backing database or databases. There’s a certain amount of overhead in making any network round trip between processes, and it often pays off in terms of performance to batch up queries or commands to reduce the number of network round trips.

Today I’m merely going to focus on Marten as a persistence tool and a bit on Wolverine as “Mediator” and show some ways that Marten reduces network round trips. Just know though that this general idea of reducing network round trips by batching up database queries or commands is certainly going to apply to improving performance with any other persistence tooling.

Batching Writes

First off, let’s just look at doing a mixed bag of “writes” with a Marten session to add, delete, or modify user data:

public static async Task modify_some_users(IDocumentSession session)
{
    // Mixed bag of document operations
    session.Insert(new User{FirstName = "Hans", LastName = "Gruber"});
    session.Store(new User{FirstName = "John", LastName = "McClane"});
    session.DeleteWhere<User>(x => x.LastName == "Miller");

    session.Patch<User>(x => x.LastName == "May").Set(x => x.Nickname, "Mayday");

    // Let's append some events too just for fun!
    session.Events.StartStream<User>(new UserCreated("Harry", "Ellis"));

    // Commit all the changes
    await session.SaveChangesAsync();
}

What’s important to note in the code up above is that all of the logical operations to insert, “upsert”, delete, patch, or start event streams are batched up into a single database round trip when session.SaveChangesAsync() is called. In the early days of Marten we tried a lot of different things to improve throughput, including alternative serializers, reducing string concatenation, code generation techniques, and alternative internal data structures. Our consistent finding was that the single biggest improvements always came from reducing network round trips, with alternative JSON serializers being a distant second, and every other factor far behind that.

If you’re curious about the technical underpinnings, Marten 7+ creates a single NpgsqlBatch for all of the commands and even uses positional parameters because that’s a touch more efficient when interacting with PostgreSQL.
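
If you want a rough mental model of what that buys you at the ADO.NET level, here’s a small sketch of NpgsqlBatch with positional parameters against a hypothetical users table. This is purely illustrative and is not the SQL or schema that Marten actually generates:

await using var conn = new NpgsqlConnection(connectionString);
await conn.OpenAsync();

// One batch, one network round trip, any number of commands
using var batch = new NpgsqlBatch(conn);

// Positional parameters ($1, $2, ...) instead of named parameters
var insert = new NpgsqlBatchCommand(
    "insert into users (id, first_name, last_name) values ($1, $2, $3)");
insert.Parameters.Add(new NpgsqlParameter { Value = Guid.NewGuid() });
insert.Parameters.Add(new NpgsqlParameter { Value = "Hans" });
insert.Parameters.Add(new NpgsqlParameter { Value = "Gruber" });
batch.BatchCommands.Add(insert);

var delete = new NpgsqlBatchCommand("delete from users where last_name = $1");
delete.Parameters.Add(new NpgsqlParameter { Value = "Miller" });
batch.BatchCommands.Add(delete);

// Both commands are sent to PostgreSQL in a single round trip
await batch.ExecuteNonQueryAsync();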

Moving to another example, let’s say that you have a workflow where you need to apply logical changes to a batch of Item entities using a mix of Marten and Wolverine. Here’s a first, naive cut at this handler:

public static class ApproveItemsHandler
{
    // I'm passing in CancellationToken because:
    // a. It's probably a good idea anyway
    // b. That's how Wolverine "enforces" message timeouts
    public static async Task HandleAsync(
        ApproveItems message,
        IDocumentSession session,
        CancellationToken token)
    {
        foreach (var id in message.Ids)
        {
            var existing = await session.LoadAsync<Item>(id, token);
            if (existing != null)
            {
                existing.Approved = true;
                session.Store(existing);
            }
        }

        await session.SaveChangesAsync(token);
    }
}

Now, let’s assume that we could easily be getting 100-1000 different ids of Item entities to approve at any one time, which would make this operation chatty and potentially slow. Let’s make it a little worse though and add in Wolverine as a “mediator” to handle each individual Item inline:

public static class ApproveItemHandler
{
    public static async Task HandleAsync(
        ApproveItem message, 
        IDocumentSession session, 
        CancellationToken token)
    {
        var existing = await session.LoadAsync<Item>(message.Id, token);
        if (existing == null) return;

        existing.Approved = true;

        await session.SaveChangesAsync(token);
    }
}

public static class ApproveItemsHandler
{
    // I'm passing in CancellationToken because:
    // a. It's probably a good idea anyway
    // b. That's how Wolverine "enforces" message timeouts
    public static async Task HandleAsync(
        ApproveItems message,
        IMessageBus bus,
        CancellationToken token)
    {
        foreach (var id in message.Ids)
        {
            await bus.InvokeAsync(new ApproveItem(id), token);
        }
    }
}

In terms of performance, the second version is even worse. We compounded the existing chattiness problem of looking up each Item individually by splitting the database “writes” into separate database calls and separate transactions through the “Wolverine as Mediator” usage of that InvokeAsync() call. You should be aware that with any kind of in process “mediator” tool, whether that’s Wolverine, MediatR, Brighter, or MassTransit’s in process mediator functionality, each call to InvokeAsync() involves a certain amount of overhead and very likely means a nested transaction that gets committed independently from the parent message handling or HTTP request that triggered the InvokeAsync() call. I might go so far as to say that calling IMessageBus.InvokeAsync() from another message handler is a “guilty until proven innocent” kind of approach.

I’d of course argue here that the performance may or may not end up being a big deal, but not having a transactional boundary around the original message processing can easily lead to inconsistent state in our system if any of the individual Item updates fail.

Let’s make one last version of this batch approve item handler with an eye toward reducing network round trips and keeping a strongly consistent transaction boundary around all the approvals (meaning they all succeed or all fail, no in between “who knows what really happened” state):

public static class ApproveItemsHandler
{
    // I'm passing in CancellationToken because:
    // a. It's probably a good idea anyway
    // b. That's how Wolverine "enforces" message timeouts
    public static async Task HandleAsync(
        ApproveItems message,
        IDocumentSession session,
        CancellationToken token)
    {
        // Find all the related items in *one* network round trip
        var items = await session.LoadManyAsync<Item>(token, message.Ids);
        foreach (var item in items)
        {
            item.Approved = true;
            session.Store(item);
        }

        await session.SaveChangesAsync(token);
    }
}

In the usage above, we’re making one database call to fetch the matching Item entities, and updating all of the impacted Item entities in a single batched database command within the IDocumentSession.SaveChangesAsync(). This version should almost always be much faster than the earlier versions where we issued individual queries for each Item, plus we have better transactional consistency in the case of system errors.

Lastly of course for the sake of completeness, we could just do this with one network round trip:

public static class ApproveItemsHandler
{
    // Assuming here that Wolverine "auto-transaction"
    // middleware is in place
    public static void Handle(
        ApproveItems message,
        IDocumentSession session)
    {
        session
            .Patch<Item>(x => x.Id.IsOneOf(message.Ids))
            .Set(x => x.Approved, true);
    }
}

That last version eliminates any chance to use the current state to validate the operation first or to see exactly what was changed, but hey, it’s the fastest possible way to code this with Marten and it might be perfectly suitable in some cases in your own system.

Batch Querying

Marten has strong support for batch querying where you can combine any number of disparate queries in a batch to the database, and read the results one at a time afterward. Here’s an example from the Marten documentation, but just know that session in this case is a Marten IQuerySession:

// Start a new IBatchQuery from an active session
var batch = session.CreateBatchQuery();

// Fetch a single document by its Id
var user1 = batch.Load<User>("username");

// Fetch multiple documents by their id's
var admins = batch.LoadMany<User>().ById("user2", "user3");

// User-supplied sql
var toms = batch.Query<User>("where first_name = ?", "Tom");

// Where with Linq
var jills = batch.Query<User>().Where(x => x.FirstName == "Jill").ToList();

// Any() queries
var anyBills = batch.Query<User>().Any(x => x.FirstName == "Bill");

// Count() queries
var countJims = batch.Query<User>().Count(x => x.FirstName == "Jim");

// The Batch querying supports First/FirstOrDefault/Single/SingleOrDefault() selectors:
var firstInternal = batch.Query<User>().OrderBy(x => x.LastName).First(x => x.Internal);

// Kick off the batch query
await batch.Execute();

// All of the query mechanisms of the BatchQuery return
// Task's that are completed by the Execute() method above
var internalUser = await firstInternal;
Debug.WriteLine($"The first internal user is {internalUser.FirstName} {internalUser.LastName}");

That’s a little more code and complexity than you might have otherwise if you just make the queries independently, but there are some significant performance gains to be had from batching queries.

This is a much, much longer discussion than I have ambition for today, but the rampant usage of repository abstractions around raw persistence tooling like Marten has a tendency to knock out more powerful functionality like query batching. That’s especially compounded by “noun-centric” code organization where you may have IOrderRepository and IInvoiceRepository wrapping your raw persistence tooling, yet frequently have logical operations that deal with both Order and Invoice data at the same time. With Wolverine especially, I’m pushing JasperFx clients and our users to eschew these kinds of abstractions and lean hard into Wolverine’s “A-Frame Architecture” approach so you can utilize the full power of Marten (or EF Core or RavenDb or whatever else you actually use).
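
To make that concrete, here’s a hypothetical sketch of the kind of operation that noun-centric repositories tend to split into multiple round trips, written instead against Marten’s IQuerySession with batch querying so that both documents come back in a single round trip. The ReviewInvoice, Order, Invoice, and InvoiceReview types are all made up for illustration:

public record ReviewInvoice(Guid OrderId, Guid InvoiceId);
public record InvoiceReview(Order Order, Invoice Invoice);

public static class InvoiceReviewHandler
{
    public static async Task<InvoiceReview> Handle(
        ReviewInvoice command,
        IQuerySession session,
        CancellationToken token)
    {
        // Instead of IOrderRepository.Load() + IInvoiceRepository.Load()
        // as two separate round trips, batch both lookups together
        var batch = session.CreateBatchQuery();
        var orderTask = batch.Load<Order>(command.OrderId);
        var invoiceTask = batch.Load<Invoice>(command.InvoiceId);

        // One network round trip for both documents
        await batch.Execute(token);

        return new InvoiceReview(await orderTask, await invoiceTask);
    }
}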

What I can tell you is that for a current JasperFx client, we’re looking in the long run to collapse, simplify, and inline their current usage of Railway Programming and MediatR-handlers-calling-other-MediatR-handlers so that we can utilize query batching to optimize some very complicated operations that today end up being very chatty between the server and the database.

Including Related Entities when Querying

There are plenty of times you’ll have an operation in your system that needs information from multiple, related entity types. Marten provides its own version of Include() in its LINQ provider as a way to query related documents in fewer network round trips, and hence get better performance, as in this example from the Marten tests:

[Fact]
public async Task simple_include_for_a_single_document()
{
    var user = new User();
    var issue = new Issue { AssigneeId = user.Id, Title = "Garage Door is busted" };

    using var session = theStore.IdentitySession();
    session.Store<object>(user, issue);
    await session.SaveChangesAsync();

    using var query = theStore.QuerySession();

    // The following query will fetch both the Issue document
    // and the related User document for the Issue in one
    // network round trip
    User included = null;
    var issue2 = query
        .Query<Issue>()
        .Include<User>(x => included = x).On(x => x.AssigneeId)
        .Single(x => x.Title == issue.Title);

    included.ShouldNotBeNull();
    included.Id.ShouldBe(user.Id);

    issue2.ShouldNotBeNull();
}

I’ll refer you to the documentation for more alternative usages, but just know that Marten has this capability and that it’s a valuable way to improve performance in your system by reducing the number of network round trips between your code and the backing database.

Marten’s Include() functionality was originally inspired by (and copied from) RavenDb. We’ve unfortunately had some confusion in the past from folks coming over from EF Core, where Include() means something very different. Oh, and just to pull aside the curtain, it’s not doing any kind of JOIN behind the scenes, but rather using a temporary table plus multiple SELECT statements.

Summary

I just wanted to get a handful of things across in this post:

  1. Network round trips can easily be expensive and a contributing factor in poor system performance. Reducing the number of network round trips by batching queries can pay off overall, even if it sometimes means more complex code.
  2. Marten has several features specifically meant to improve system performance by batching database queries that you can utilize. Both Marten and Wolverine are absolutely built with this philosophy of reducing network round trips as much as possible.
  3. Any coding or architectural strategy that results in excessive layering, long call stacks (A calls B that calls C that calls D that finally calls the database), or that otherwise obscures how your system operations actually execute can easily lead to more network round trips and harm your system’s performance, because you can’t easily “see” what your system is really doing.