CQRS Command Handlers with Marten

Hey, did you know that JasperFx Software offers both consulting services and support plans for the “Critter Stack” tools? One of the common areas where we’ve helped our clients is using Marten or Wolverine in systems with significant concurrency concerns. As I write this, I’m currently working with a JasperFx client to implement the FetchForWriting API shown in this post as a way of improving their system’s resiliency to concurrency problems.

You’ve decided to use event sourcing as your persistence strategy, so that your persisted state of record is the actual business events, segregated into streams that represent changes in state to some kind of logical business entity (an invoice? an order? an incident? a project?). Of course there will have to be some way of resolving or “projecting” the raw events into a usable view of the system state, but we’ll get to that.

You’ve also decided to organize your system around a CQRS architectural style (Command Query Responsibility Segregation). With a CQRS approach, the backend code is mostly organized around the “verbs” of your system, meaning the “command” messages (this could be HTTP services, and I’m not implying that there automatically has to be any asynchronous messaging) that are handled to capture changes to the system (events in our case), and “query” endpoints or APIs that strictly serve up information about your system.

While it’s certainly possible to do either Event Sourcing or CQRS without the other, the two things do go together as Forrest Gump would say, like peas and carrots. Marten is certainly valuable as part of a CQRS with Event Sourcing approach within a range of .NET messaging or web frameworks, but there is quite a bit of synergy between Marten and its “Critter Stack” stable mate Wolverine (see the details about the integration here).

And lastly of course, you’ve quite logically decided to use Marten as the persistence mechanism for the events. Marten is also a strong fit because it comes with some important functionality that we’ll need for CQRS command handlers:

  • Marten’s event projection support can give us a representation of the current state of the raw event data in a usable way that we’ll need within our command handlers to both validate requested actions and to “decide” what additional events should be persisted to our system
  • The FetchForWriting API in Marten will not only give us access to the projected event data, but it provides an easy mechanism for both optimistic and pessimistic concurrency protections in our system
  • Marten allows for a few different projection lifecycle options that can be valuable for performance optimization with differing system needs

As a sample application problem domain, I got to be part of a successful effort during the worst of the pandemic to stand up a new “telehealth” web portal using event sourcing. One of the concepts we needed to track in that system was the activity of a health care provider (nurse, doctor, nurse practitioner), with events for when they were available and what they were doing at any particular time during the day for later decision making:

public record ProviderAssigned(Guid AppointmentId);

public record ProviderJoined(Guid BoardId, Guid ProviderId);

public record ProviderReady;

public record ProviderPaused;

public record ProviderSignedOff;

// "Charting" is basically just whatever
// paperwork they need to do after
// an appointment, and it was important
// for us to track that time as part
// of their availability and future
// planning
public record ChartingFinished;

public record ChartingStarted;

public enum ProviderStatus
{
    Ready,
    Assigned,
    Charting,
    Paused
}

But of course, at several points you do need to know the actual state of the provider’s current shift to be able to make decisions within the command handlers, so we had a “write” model something like this:

// I'm sticking the Marten "projection" logic for updating
// state from the events directly into this "write" model,
// but you could separate that into a different class if you
// prefer
public class ProviderShift
{
    public Guid Id { get; set; }

    // This is important, this would be set by Marten to the 
    // current event number or revision of the ProviderShift
    // aggregate. This is going to be important later for
    // concurrency protections
    public int Version { get; set; }
    public Guid BoardId { get; private set; }
    public Guid ProviderId { get; init; }
    public ProviderStatus Status { get; private set; }
    public string Name { get; init; }
    public Guid? AppointmentId { get; set; }
    
    // The Create & Apply methods are conventional targets
    // for Marten's "projection" capabilities
    // But don't worry, you would never *have* to take a reference
    // to Marten itself like I did below just out of laziness
    public static ProviderShift Create(
        ProviderJoined joined)
    {
        return new ProviderShift
        {
            Status = ProviderStatus.Ready,
            ProviderId = joined.ProviderId,
            BoardId = joined.BoardId
        };
    }

    public void Apply(ProviderReady ready)
    {
        AppointmentId = null;
        Status = ProviderStatus.Ready;
    }

    public void Apply(ProviderAssigned assigned)
    {
        Status = ProviderStatus.Assigned;
        AppointmentId = assigned.AppointmentId;
    }

    public void Apply(ProviderPaused paused)
    {
        Status = ProviderStatus.Paused;
        AppointmentId = null;
    }

    // This is kind of a catch all for any paperwork the
    // provider has to do after an appointment has ended
    // for the just concluded appointment
    public void Apply(ChartingStarted charting)
    {
        Status = ProviderStatus.Charting;
    }
}

The whole purpose of the ProviderShift type above is to be a “write” model that contains just enough information for the command handlers to “decide” what should be done, as opposed to a “read” model that might carry richer information, like the provider’s name, that is more useful within a user interface. “Write” or “read” here is just a role within the system: at some times it’s valuable to have separate models for different consumers of the information, and at other times you can happily get by with a single model.

Alright, so let’s finally look at a very simple command handler related to providers that tries to mark the provider as being finished charting:

// Since we're focusing on Marten, I'm using an MVC Core
// controller just because it's commonly used and understood
public class CompleteChartingController : ControllerBase
{
    [HttpPost("/provider/charting/complete")]
    public async Task Post(
        [FromBody] CompleteCharting charting,
        [FromServices] IDocumentSession session)
    {
        // We're looking up the current state of the ProviderShift aggregate
        // for the designated provider
        var stream = await session
            .Events
            .FetchForWriting<ProviderShift>(charting.ProviderShiftId, HttpContext.RequestAborted);

        // The current state
        var shift = stream.Aggregate;
        
        if (shift.Status != ProviderStatus.Charting)
        {
            // Obviously do something smarter in your app, but you 
            // get the point
            throw new Exception("The shift is not currently charting");
        }
        
        // Append a single new event just to say 
        // "charting is finished". I'm relying on 
        // Marten's automatic metadata to capture
        // the timestamp of this event for me
        stream.AppendOne(new ChartingFinished());

        // Commit the transaction
        await session.SaveChangesAsync();
    }
}

I’m using the Marten FetchForWriting() API to get at the current state of the event stream for the designated provider shift (a provider’s activity during a single day). I’m also using this API to capture a new event marking the provider as being finished with charting. FetchForWriting() is doing two important things for us:

  1. Executes or finds the projected data for ProviderShift from the raw events. More on this a little later
  2. Provides a little bit of optimistic concurrency protection for our provider shift stream

Building on the theme of concurrency first, the command above will “remember” the current state of the ProviderShift at the point that FetchForWriting() is called. Upon SaveChangesAsync(), Marten will reject the transaction and throw a ConcurrencyException if somehow, some way, some other request came through and committed changes against that same ProviderShift stream between the call to FetchForWriting() and SaveChangesAsync().
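
If you’d rather surface that failure to callers instead of letting the exception bubble up, here’s a minimal sketch, assuming the controller shown above and Marten’s ConcurrencyException from the Marten.Exceptions namespace:

// Inside the Post() action from the controller above
stream.AppendOne(new ChartingFinished());

try
{
    await session.SaveChangesAsync();
}
catch (ConcurrencyException)
{
    // Another request changed this ProviderShift stream between
    // FetchForWriting() and SaveChangesAsync(), so tell the caller
    // to re-read and retry instead of silently losing the conflict
    Response.StatusCode = StatusCodes.Status409Conflict;
}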

That level of concurrency is baked in, but we can do a little bit better. Remember that the ProviderShift has this property:

    // This is important, this would be set by Marten to the 
    // current event number or revision of the ProviderShift
    // aggregate. This is going to be important later for
    // concurrency protections
    public int Version { get; set; }

The projection capability of Marten makes it easy for us to “know” and track the current version of the ProviderShift stream so that we can feed it back to command handlers later. Here’s the full definition of the CompleteCharting command:

public record CompleteCharting(
    Guid ProviderShiftId,
    
    // This version is meant to mean "I was issued
    // assuming that the ProviderShift is currently
    // at this version in the server, and if the version
    // has shifted since, then this command is now invalid"
    int Version
);

Let’s tighten up the optimistic concurrency protection by passing the command’s version right into this overload of FetchForWriting(), so that Marten can shut down the command handling earlier, before we waste system resources doing unnecessary work:

// Since we're focusing on Marten, I'm using an MVC Core
// controller just because it's commonly used and understood
public class CompleteChartingController : ControllerBase
{
    [HttpPost("/provider/charting/complete")]
    public async Task Post(
        [FromBody] CompleteCharting charting,
        [FromServices] IDocumentSession session)
    {
        // We're looking up the current state of the ProviderShift aggregate
        // for the designated provider
        var stream = await session
            .Events
            .FetchForWriting<ProviderShift>(
                charting.ProviderShiftId, 
                
                // Passing the expected, starting version of ProviderShift
                // into Marten
                charting.Version,
                HttpContext.RequestAborted);

        // And the rest of the controller stays the same as
        // before....
    }
}

In the usage above, Marten will do a version check both at the point of FetchForWriting() using the version we passed in, and again during the call to SaveChangesAsync() to reject any changes made if there was a concurrent update to that same stream.

Lastly, Marten gives you the ability to opt into heavier, exclusive access to the ProviderShift with this option:

// We're looking up the current state of the ProviderShift aggregate
// for the designated provider
var stream = await session
    .Events
    .FetchForExclusiveWriting<ProviderShift>(
        charting.ProviderShiftId, 
        HttpContext.RequestAborted);

In that last usage, we’re relying on the underlying PostgreSQL database to get us an exclusive row lock on the ProviderShift event stream such that only our current operation is allowed to write to that event stream while we have the lock. This is heavier and comes with some risk of database locking problems, but solves the concurrency issue.

So that’s concurrency protection in FetchForWriting(), but I mostly skipped over when and how that API will execute the projection logic to go from the raw events like ProviderJoined, ProviderReady, or ChartingStarted to the projected ProviderShift.

Any projection in Marten can be calculated or executed with three different “projection lifecycles”:

  1. Live — in this case, a projection is calculated on the fly by loading the raw events in memory and calculating the current state right then and there. In the absence of any other configuration, this is the default lifecycle for the ProviderShift per stream aggregation.
  2. Inline — a projection is updated at the time any events are appended by Marten and persisted by Marten as a document in the PostgreSQL database.
  3. Async — a projection is updated in a background process as events are captured by Marten across the system. The projected state is persisted as a Marten document to the underlying PostgreSQL database

The first two options give you strong consistency models where the projection will always reflect the current state of the events captured to the database. Live is probably a little more optimal in the case where you have many writes, but few reads, and you want to optimize the “write” side. Inline is optimal for cases where you have few writes, but many reads, and you want to optimize the “read” side (or need to query against the projected data rather than just load by id). The Async model gives you the ability to take the work of projecting events into the aggregated state out of both the “write” and “read” side of things. This might easily be advantageous for performance and very frequently necessary for ordering or concurrency concerns.
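
For reference, here’s a minimal sketch of choosing a lifecycle for the ProviderShift aggregation when configuring Marten. Snapshot() and SnapshotLifecycle are Marten APIs, but treat the exact usage below as an assumption to verify against your Marten version:

var store = DocumentStore.For(opts =>
{
    opts.Connection("some connection string");

    // With no explicit registration, ProviderShift is aggregated "Live"
    // on demand. This instead keeps a persisted ProviderShift document
    // updated inline with every event append:
    opts.Projections.Snapshot<ProviderShift>(SnapshotLifecycle.Inline);

    // ...or let the async daemon keep the document up to date
    // in the background instead:
    // opts.Projections.Snapshot<ProviderShift>(SnapshotLifecycle.Async);
});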

In the case of the FetchForWriting() API, you will always have a strongly consistent view of the raw events because that API is happily wallpapering over the lifecycle for you. Live aggregation works as you’d expect, Inline aggregation works by just loading the expected document directly from the database, and Async aggregation is a hybrid model that starts from the last known persisted value for the aggregate and applies any missing events right on top of that (the async behavior was a big feature added in Marten 7).

By hiding the actual lifecycle behavior behind the FetchForWriting() signature, teams are able to experiment with different approaches to optimize their application without breaking existing code.

Summary

FetchForWriting() was built specifically to ease the usage of Marten within CQRS command handlers after seeing how much boilerplate code teams were having to use before with Marten. At this point, this is our strongly recommended approach for command handlers. Also note that this API is utilized within the Wolverine + Marten “aggregate handler workflow” usage that does even more to remove code ceremony from CQRS command handler code. To some degree, what is now Wolverine was purposely rebooted and saved from the scrap heap specifically because of that combination with Marten and the FetchForWriting API.

Personally, I’m opposed to any kind of IAggregateRepository or approach where the “write” model itself tracks the events that are applied or uncommitted. I’m trying hard to steer folks using Marten away from this still somewhat popular older approach in favor of a more Functional Programming-ish approach.

FetchForWriting could be used as part of a homegrown “Decider” pattern usage if that’s something you prefer (I think the “decider” pattern in real life usage ends up devolving into brute force procedural code through massive switch statements personally).
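
For illustration, here’s a minimal sketch of what that might look like, reusing the ProviderShift types from above. ProviderShiftDecider and CompleteChartingHandler are hypothetical names, and the handler assumes a Wolverine-style static handler function:

public static class ProviderShiftDecider
{
    // A pure "decide" function: current state plus command in, zero or more events out
    public static IEnumerable<object> Decide(ProviderShift shift, CompleteCharting command)
    {
        if (shift.Status == ProviderStatus.Charting)
        {
            yield return new ChartingFinished();
        }
    }
}

public static class CompleteChartingHandler
{
    public static async Task Handle(
        CompleteCharting command,
        IDocumentSession session,
        CancellationToken cancellation)
    {
        var stream = await session
            .Events
            .FetchForWriting<ProviderShift>(command.ProviderShiftId, command.Version, cancellation);

        // Route whatever the decider decided into the event stream
        foreach (var @event in ProviderShiftDecider.Decide(stream.Aggregate, command))
        {
            stream.AppendOne(@event);
        }

        await session.SaveChangesAsync(cancellation);
    }
}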

The “telehealth” system I mentioned before was built in real life with a hand-rolled Node.js event sourcing implementation, but that experience has had plenty of influence over later Marten work including a feature that just went into Marten over the weekend for a JasperFx client to be able to emit “side effect” actions and messages during projection updates.

I was deeply unimpressed with the existing Node.js tooling for event sourcing at that time (~2020), but I would hope it’s much better now. Marten has absolutely grown in capability in the past couple years.

Why and How Marten is a Great Document Database

Just a reminder, JasperFx Software offers support contracts and consulting services to help you get the most out of the “Critter Stack” tools (Marten and Wolverine). If you’re building server side applications on .NET, the Critter Stack is the most feature rich tool set for Event Sourcing and Event Driven Architectures around. And as I hope to prove to you in this post, Marten is a great option as a document database too!

Marten as a project started as an ultimately successful attempt to replace my then company’s usage of an early commercial “document database” with the open source PostgreSQL database — but with a small, nascent event store functionality bolted onto the side. With the exception of LINQ provider related issues, most of my attention these days is focused on the event sourcing side of things with the document database features in Marten just being a perfect complement for event projections.

This week and last though, I’ve had cause to work with a different document database, and it served to remind me that hey, Marten has a very strong technical story as a document database option. With that being said, let me get on with lionizing Marten by starting with a quick start.

Let’s say that you are building a server side .NET application with some kind of customer data and you at least start by modeling that data like so:

public class Customer
{
    public Guid Id { get; set; }

    // We'll use this later for some "logic" about how incidents
    // can be automatically prioritized
    public Dictionary<IncidentCategory, IncidentPriority> Priorities { get; set; }
        = new();

    public string? Region { get; set; }

    public ContractDuration Duration { get; set; }
}

public record ContractDuration(DateOnly Start, DateOnly End);

public enum IncidentCategory
{
    Software,
    Hardware,
    Network,
    Database
}

public enum IncidentPriority
{
    Critical,
    High,
    Medium,
    Low
}

And once you have those types, you’d like to have that customer data saved to a database in a way that makes it easy to persist, query, and load that data with minimal development cost while still being as robust as need be. Assuming that you have access to a running instance of PostgreSQL (it’s very Docker friendly and I tend to use that as a development default), bring in Marten by first adding a reference to the “Marten” NuGet package. Next, write the following code in a simple console application that also contains the C# code from above:

using Marten;
using Newtonsoft.Json;

// Bootstrap Marten itself with default behaviors
await using var store = DocumentStore
    .For("Host=localhost;Port=5432;Database=marten_testing;Username=postgres;password=postgres");

// Build a Customer object to save
var customer = new Customer
{
    Duration = new ContractDuration(new DateOnly(2023, 12, 1), new DateOnly(2024, 12, 1)),
    Region = "West Coast",
    Priorities = new Dictionary<IncidentCategory, IncidentPriority>
    {
        { IncidentCategory.Database, IncidentPriority.High }
    }
};

// IDocumentSession is Marten's unit of work 
await using var session = store.LightweightSession();
session.Store(customer);
await session.SaveChangesAsync();

// Marten assigned an identity for us on Store(), so 
// we'll use that to load another copy of what was 
// just saved
var customer2 = await session.LoadAsync<Customer>(customer.Id);

// Just making a pretty JSON printout
Console.WriteLine(JsonConvert.SerializeObject(customer2, Formatting.Indented));

And that’s that, we’ve got a working usage of Marten to save, then load Customer data to the underlying PostgreSQL database. Right off the bat I’d like to point out a couple things about the code samples above:

  • We didn’t have to do any kind of mapping from our Customer type to a database structure. Marten is using JSON serialization to persist the data to the database, and as long as the Customer type can be bi-directionally serialized to and from JSON, Marten is going to be able to persist and load the type.
  • We didn’t specify or do anything about the actual database structure. In its default “just get things done” settings, Marten is able to happily detect that the necessary database objects for Customer are missing in the database and build those out for us on demand (see the sketch just after this list for taking more explicit control over schema management)
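
To expand on that second bullet, here’s a minimal sketch of taking more explicit control over schema management once you move past the “just get things done” defaults. AutoCreateSchemaObjects and ApplyAllConfiguredChangesToDatabaseAsync() are Marten APIs, but treat the exact usage here as an assumption to check against the documentation:

// The AutoCreate enum comes from the Weasel.Core namespace
await using var store = DocumentStore.For(opts =>
{
    opts.Connection("Host=localhost;Port=5432;Database=marten_testing;Username=postgres;password=postgres");

    // Only allow additive schema changes (or use AutoCreate.None in production
    // and rely on generated migration scripts instead)
    opts.AutoCreateSchemaObjects = AutoCreate.CreateOrUpdate;
});

// Apply any outstanding schema changes up front, e.g. at application startup,
// rather than lazily the first time a document type is used
await store.Storage.ApplyAllConfiguredChangesToDatabaseAsync();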

So that’s the easiest possible quick start, but what about integrating Marten into a real .NET application? Assuming you have a reference to the Marten nuget package, it’s just an IServiceCollection.AddMarten() call as shown below from a sample web application:

builder.Services.AddMarten(opts =>
    {
        // You always have to tell Marten what the connection string to the underlying
        // PostgreSQL database is, but this is the only mandatory piece of 
        // configuration
        var connectionString = builder.Configuration.GetConnectionString("postgres");
        opts.Connection(connectionString);
    })
    // This is a mild performance optimization
    .UseLightweightSessions();

At this point in the .NET ecosystem, it’s more or less idiomatic to use an Add[Tool]() method to integrate tools with your application’s IHost, and Marten tries to play within the typical .NET rules here.

I think this idiom and the generic host builder tooling has been a huge boon to OSS tool development in the .NET space compared to the old wild, wild west days. I do wish it would stop changing from .NET version to version though.

So that’s all a bunch of simple stuff, so let’s dive into something that shows off how Marten — really PostgreSQL — has a much stronger transactional model than many document databases that only support eventual consistency:

public static async Task manipulate_customer_data(IDocumentSession session)
{
    // This example assumes the Customer type also has Name and Class
    // properties, just for the sake of this contrived sample
    var customer = new Customer
    {
        Name = "Acme",
        Region = "North America",
        Class = "first"
    };
    
    // Marten has "upsert", insert, and update semantics
    session.Insert(customer);
    
    // Partial updates to a range of Customer documents
    // by a LINQ filter
    session.Patch<Customer>(x => x.Region == "EMEA")
        .Set(x => x.Class, "First");

    // Both the above operations happen in one 
    // ACID transaction
    await session.SaveChangesAsync();

    // Because Marten is ACID compliant, this query would
    // immediately work as expected even though we made that 
    // broad patch up above and inserted a new document.
    var customers = await session.Query<Customer>()
        .Where(x => x.Class == "First")
        .Take(100)
        .ToListAsync();
}

That’s a completely contrived example, but the point is, because Marten is completely ACID-compliant, you can make a range of operations within transactional boundaries and not have to worry about eventual consistency issues in immediate queries that other document databases suffer from.

So what else does Marten do? Marten has a significantly richer built-in feature set than many other low level document databases: LINQ querying, compiled and batched queries, the Patch() API shown above, full text search, soft deletes, and “conjoined” multi-tenancy, to name a few.

And quite a bit more than that, including some test automation support I really need to better document.

And on top of everything else, because Marten is really just a fancy library on top of PostgreSQL — the most widely used database engine in the world — Marten instantly comes with a wide array of solid cloud hosting options as well as being deployable to local infrastructure on premise. PostgreSQL is also very Docker-friendly, making it a great technical choice for local development.

What’s a Document Database?

If you’re not familiar with the term “document database,” it refers to a type of NoSQL database where data is almost inevitably stored as JSON, and where the database allows you to quickly marshal objects in code to the database, then later query that data right back into the same object structures. The huge benefit of document databases at development time is being able to code much more productively because you just don’t have nearly as much friction as you do when dealing with any kind of object-relational mapping, whether with an ORM tool or by writing SQL and object mapping code by hand.

Low Ceremony Sagas with Wolverine

Wolverine puts a very high emphasis on reducing code ceremony and tries really hard to keep itself out of your application code. Wolverine is also built with testability in mind. If you’d be interested in learning more about how Wolverine could simplify your existing application code or set you up with a solid foundation for sustainable productive development for new systems, JasperFx Software is happy to work with you!

Before I get into the nuts and bolts of Wolverine sagas, let me come right out and say that I think that, compared to other .NET frameworks, the Wolverine implementation of sagas requires much less code ceremony and therefore produces code that is easier to reason about. Wolverine also requires less configuration and explicit code to integrate your custom saga with Wolverine’s saga persistence. Lastly, Wolverine makes the development experience better by building in so much support for automatically configuring development environment resources like database schema objects or message broker objects. I do not believe that any other .NET tooling comes close to the developer experience that Wolverine and its “Critter Stack” buddy Marten can provide.

Let’s say that you have some kind of multi-step process in your application that might have some mix of:

  • Callouts to 3rd party services
  • Some logical steps that can be parallelized
  • Possibly some conditional workflow based on the results of some of the steps
  • A need to enforce “timeout” conditions if the workflow is taking too long — think maybe of some kind of service level agreement for your workflow

This kind of workflow might be a great opportunity to use Wolverine’s version of Sagas. Conceptually speaking, a “saga” in Wolverine is just a special message handler that needs to inherit from Wolverine’s Saga class and modify itself to track state between messages that impact the saga.

Below is a simple version from the documentation called Order:

public record StartOrder(string OrderId);

public record CompleteOrder(string Id);

public class Order : Saga
{
    // You do need this for the identity
    public string? Id { get; set; }

    // This method would be called when a StartOrder message arrives
    // to start a new Order
    public static (Order, OrderTimeout) Start(StartOrder order, ILogger<Order> logger)
    {
        logger.LogInformation("Got a new order with id {Id}", order.OrderId);

        // creating a timeout message for the saga
        return (new Order{Id = order.OrderId}, new OrderTimeout(order.OrderId));
    }

    // Apply the CompleteOrder to the saga
    public void Handle(CompleteOrder complete, ILogger logger)
    {
        logger.LogInformation("Completing order {Id}", complete.Id);

        // That's it, we're done. Delete the saga state after the message is done.
        MarkCompleted();
    }

    // Delete this order if it has not already been deleted to enforce a "timeout"
    // condition
    public void Handle(OrderTimeout timeout, ILogger<Order> logger)
    {
        logger.LogInformation("Applying timeout to order {Id}", timeout.Id);

        // That's it, we're done. Delete the saga state after the message is done.
        MarkCompleted();
    }

    public static void NotFound(CompleteOrder complete, ILogger logger)
    {
        logger.LogInformation("Tried to complete order {Id}, but it cannot be found", complete.Id);
    }
}

Order is really meant to just be a state machine where it modifies its own state in response to incoming messages and returns cascading messages (you could also use IMessageBus directly as a method argument if you prefer, but my advice is to use simple pure functions) that tell Wolverine what to do next in the multi-step process.

A new Order saga can be created by any old message handler by simply returning a type that inherits from the Saga type in Wolverine. Wolverine is going to automatically discover any public types inheriting from Saga and utilize any public instance methods following certain naming conventions (or static creation methods like the Start() method shown above) as message handlers that are assumed to modify the state of the saga objects. Wolverine itself handles everything to do with loading and persisting the Order saga object between commands around the calls to the message handler methods on the saga types.

If you’ll notice the Handle(CompleteOrder) method above, the Order is calling MarkCompleted() on itself. That will tell Wolverine that the saga is now complete, and direct Wolverine to delete the current Order saga from the underlying persistence.

As for tracking the saga id between message calls, there are naming conventions about the messages that Wolverine can use to pluck the identity of the saga, but if you’re strictly exchanging messages between a Wolverine saga and other Wolverine message handlers, Wolverine will automatically track metadata about the active saga back and forth.
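
To make those conventions a little more concrete, here’s a hedged sketch. CancelOrder is a hypothetical extra message for the Order saga above, and the exact matching rules are worth confirming against the Wolverine saga documentation:

// The existing CompleteOrder(string Id) message above correlates to the right
// Order saga through its "Id" property. A property named "{SagaTypeName}Id"
// is meant to work as well, so this hypothetical message would also be matched
// to the Order saga whose identity equals its OrderId value:
public record CancelOrder(string OrderId);

// ...which would then be handled by an instance method on the Order saga:
// public void Handle(CancelOrder command, ILogger logger) { ... }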

I’d also ask you to notice the OrderTimeout message that the Order saga returns as it starts. That message type is shown below:

// This message will always be scheduled to be delivered after
// a one minute delay because I guess we want our customers to be
// rushed? Goofy example code:)
public record OrderTimeout(string Id) : TimeoutMessage(1.Minutes());

Wolverine’s cascading message support allows you to return an outgoing message with a time delay (or at a particular scheduled time, or with any number of other options) by just returning a message object. Admittedly this ties you into a little more of Wolverine, but the key takeaway I want you to notice here is that every handler method is a “pure function” with no service dependencies. Every bit of the state change and workflow logic can be tested with simple unit tests that merely work on the before and after state of the Order objects as well as the cascaded messages returned by the message handler functions. No mock objects, no fakes, no custom test harnesses, just simple unit tests. No other saga implementation in the .NET ecosystem can do that for you anywhere near as cleanly.
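
To put that testability claim in concrete terms, here’s a minimal sketch of the kind of test this enables, assuming xUnit and the NullLogger types from Microsoft.Extensions.Logging.Abstractions (and treating the Saga base type’s IsCompleted() accessor as an assumption):

using Microsoft.Extensions.Logging.Abstractions;
using Xunit;

public class OrderSagaTests
{
    [Fact]
    public void starting_an_order_also_schedules_a_timeout()
    {
        var (order, timeout) = Order.Start(
            new StartOrder("42"),
            NullLogger<Order>.Instance);

        // Pure state in, pure state out. No mocks, no test harness.
        Assert.Equal("42", order.Id);
        Assert.Equal("42", timeout.Id);
    }

    [Fact]
    public void completing_an_order_marks_the_saga_as_finished()
    {
        var order = new Order { Id = "42" };

        order.Handle(new CompleteOrder("42"), NullLogger.Instance);

        // IsCompleted() is assumed here to expose the state set by MarkCompleted()
        Assert.True(order.IsCompleted());
    }
}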

So far I’ve only focused on the logical state machine part of sagas, so let’s jump to persistence. Wolverine has long had a simplistic saga storage mechanism with its integration with Marten, and that’s still one of the easiest and most powerful options. You can also use EF Core for saga persistence, but ick, that means having to use EF Core.

Wolverine 3.0 added a new lightweight saga persistence option for either Sql Server or PostgreSQL (without Marten or EF Core) that just stands up a little table for just a single Saga type and uses JSON serialization to persist the saga. Here’s an example:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // This isn't actually mandatory, but you'll need it to make
        // Wolverine set up the table storage as part of the resource setup.
        // Otherwise, Wolverine is quite capable of standing up the tables
        // as necessary at runtime if they are missing in its default
        // configuration
        opts.AddSagaType<RedSaga>("red");
        opts.AddSagaType(typeof(BlueSaga), "blue");

        // This part is absolutely necessary just to have the
        // normal transactional inbox/outbox support and the new
        // default, lightweight saga persistence
        opts.PersistMessagesWithSqlServer(Servers.SqlServerConnectionString, "color_sagas");
        opts.Services.AddResourceSetupOnStartup();
    }).StartAsync();

Just as with the integration with Marten, Wolverine’s lightweight saga implementation is able to build the necessary database table storage on the fly at runtime if it’s missing. The “critter stack” philosophy is to optimize the all important “time to first pull request” metric — meaning that you can get a Wolverine application up fast on your local development box because it’s able to take care of quite a bit of environment setup for you.

Lastly, Wolverine 3.0 is adding optimistic concurrency checks for the Marten saga storage and the new lightweight saga persistence. That’s been an important missing piece of the Wolverine saga story.

Just for some comparison, it’s worth checking out some of the other saga implementations in the .NET ecosystem.

Critter Stack Workshop at DevUp Next Week

If you’re planning on coming to my workshop, you’ll want .NET 8, Git, and some kind of Docker Desktop on your box to run the sample code I’ll use in the workshop. If Docker doesn’t work for you, you may want a local install of PostgreSQL and RabbitMQ.

Hey folks, I’ll be giving the first ever workshop on building an Event Driven Architecture with the full “Critter Stack” at DevUp 2024 in St. Louis next week on Wednesday the 14th bright and early at 8:30 AM.

We’ll be working through a sample backend web service that also communicates with other headless services using Event Sourcing within a general CQRS architectural approach with the whole “Critter Stack.” We’ll use Marten (over PostgreSQL) for our persistence strategy using both its event sourcing support and as a document database. We’ll combine that with Wolverine as a server side framework for background processing, asynchronous messaging, and even as an alternative HTTP endpoint framework. Lastly, just for fun, there’ll be guest appearances from other JasperFx tools like Alba and Oakton for automated integration testing and command line execution respectively.

So why would you want to come to this and what might you get out of it? I’m hoping the takeaways — even if you don’t intend to use Marten or Wolverine — will be:

  • A good introduction to event sourcing as a technical approach and some of the real challenges you’ll face when building a system using event sourcing as a persistence strategy
  • An understanding of what goes into building a robust CQRS system including dealing with transient errors, observability, concurrency, and how to best segment message processing to achieve self-healing systems
  • Challenging the industry’s conventional wisdom about how effective Hexagonal/Clean/Onion Architecture approaches really are by showing what a very low ceremony “vertical slice architecture” approach can be like with the Wolverine + Marten combination while still being robust, observable, highly testable, and still keeping infrastructure concerns out of the business logic
  • Some exposure to Open Telemetry and general observability tooling for distributed systems, which you absolutely want if you don’t already have it
  • Techniques for automating integration tests against an Event Driven Architecture

Because I’m absolutely in the business of promoting the “Critter Stack” tools, I’ll try to convince you that:

  • Marten is already the most robust and feature rich solution for event sourcing in the .NET ecosystem while also being arguably the easiest to get up and going with
  • The Wolverine + Marten combination makes CQRS with Event Sourcing a much easier architectural pattern to use
  • Wolverine’s emphasis on low ceremony code approaches can help systems be more successfully maintained over time by simply having much less noise code and layering in your systems while still being robust
  • The “Critter Stack” has an excellent story for automated integration testing support that can do a lot to make your development efforts more successful
  • Both Marten & Wolverine can help your teams achieve a low “time to first pull request” by doing a lot to configure necessary infrastructure like databases or message brokers on the fly for a better development experience

I’m excited, because this is my first opportunity to do a workshop on the “Critter Stack” tools, and I think we’ve got a very compelling technical story to tell about the tools! And if nothing else, I’m looking forward to any feedback that might help us improve the tools down the line.

And for any *ahem* older folks from St. Louis in my talk, I personally thought at the time that Jorge Orta was out at first and the Cards should have won that game.

Critter Stack Roadmap for the Rest of 2024

It’s been a little bit since I’ve written any kind of update on the unofficial “Critter Stack” roadmap, with the last update in February. A ton of new, important strategic features have been added since then, especially to Marten, with plenty of expansion of Wolverine to boot. Before jumping into what’s to come, let me indulge in a bit of retrospective about the new features and improvements delivered so far in 2024, then get into the roadmap in the following section.

2024 just so far!

At this point I feel like we’ve crossed off the vast majority of the features I thought we needed to add to Marten this year to be able to stand Marten up against basically any other event store infrastructure tooling on the whole damn planet. What that also means is that I think Marten development probably slows down to nothing but bug fixes and community contributions as folks run into things. There are still some features in the backlog that I might personally work on, but that will be in the course of some ongoing and potential JasperFx client work.

That being said, let’s talk about the rest of the year!

The Roadmap for the Back Half of 2024

Obviously, this roadmap is just a snapshot in time: client needs, community requests, and who knows what changes from Microsoft or other related tools could easily shift these priorities. All that being said, this is the Critter Stack core team’s and my current vision of the next big steps.

  1. Wolverine 3.0 is an ongoing effort. I’m hopeful it can be out in the next couple weeks
  2. RavenDb integration with Wolverine. This is some client sponsored work that I’m hoping will set Wolverine up for easier integration with other database engines in the near future
  3. “Critter Watch” — an ongoing effort to build out a management and monitoring console application for any combination of Marten, Wolverine, and future critters. This will be a paid product. We’ve already had a huge amount of feedback from Marten & Wolverine users, and I’m personally eager to get this moving in the 3rd quarter
  4. Marten 8.0 and Wolverine 4.0 — the goal here is mostly a rearrangement of dependencies underneath both Marten & Wolverine to eliminate duplication and spin out a lot of the functionality around projections and the async daemon. This will also be a significant effort to spin off some new helper libraries for the “Critter Stack” to enable the next bullet point
  5. “Ermine” — a port of Marten’s event store capabilities and a subset of its document database capabilities to SQL Server. My thought is that this will share a ton of guts with Marten. I’m voting that Ermine will have direct integration with Wolverine from the very beginning as well for subscriptions and middleware similar to the existing Wolverine.Marten integration
  6. If Ermine goes halfway well, I’d love to attempt a CosmosDb and maybe a DynamoDb backed event store in 2025

As usual, that list is a guess and unlikely to ever play out exactly that way. All the same though, there’s my hopes and dreams for the next 6 months or so.

Did I miss something you were hoping for? Does any of that concern you? Let me and the rest of the Critter Stack community know either here or anywhere in our Discord room!

Making Marten Faster Through Table Partitioning

There’s been a definite theme lately about increasing the performance and scalability of Marten, as evident (I hope) in my post last week describing new optimization options in Marten 7.25. Today I was able to push a follow up feature that got missed in that release that allows Marten users to utilize PostgreSQL table partitioning behind the scenes for document storage (7.25 added a specific usage of table partitioning for the event store). The goal is that, in selected scenarios, this enables PostgreSQL to mostly be working with far smaller tables than it would otherwise, and hence perform better in your system.

Think of these common usages of Marten:

  1. You’re using soft deletes in Marten against a document type, and the vast majority of the time Marten is putting a default filter in for you to only query for “not deleted” documents
  2. You are aggressively using the Marten feature to mark event streams as archived when whatever process they model is complete. In this case, Marten is usually querying against the event table using a value of is_archived = false
  3. You’re using “conjoined” multi-tenancy within a single Marten database, and most of the time your system is naturally querying for data from only one tenant at a time
  4. Maybe you have a table where you’re frequently querying against a certain date property or querying for documents by a range of expected values

In all of those cases, it would be more performant to opt into PostgreSQL table partitioning where PostgreSQL is separating the storage for a single, logical table into separate “partition” tables. Again, in all of those cases above we can enable PostgreSQL + Marten to largely be querying against a much smaller table partition than the entire table would be — and querying against smaller database tables can be hugely more performant than querying against bigger tables.
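
As a concrete example of case #2 above, here’s a minimal sketch of opting into the “hot/cold” event storage partitioning, assuming the UseArchivedStreamPartitioning opt-in flag from the 7.25 release; treat the exact usage as something to verify against the Marten documentation:

var store = DocumentStore.For(opts =>
{
    opts.Connection("some connection string");

    // Partition the event tables on the is_archived flag so that queries
    // against active (non-archived) streams only touch the much smaller
    // "hot" partition
    opts.Events.UseArchivedStreamPartitioning = true;
});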

The Marten community has been kicking around the idea of utilizing table partitioning for years (since 2017 by my sleuthing last week through the backlog), but it always got kicked down the road because of the perceived challenges in supporting automatic database migrations for partitions the same way we do today in Marten for every other database schema object (and in Wolverine too for that matter).

Thanks to an engagement with a JasperFx customer who has some pretty extreme scalability needs, I was able to spend the time last week to break through the change management challenges with table partitioning, and finally add table partitioning support for Marten.

As for what’s possible, let’s say that you want to create table partitioning for a certain very large table in your system for a particular document type. Here’s the new option for 7.26:


var store = DocumentStore.For(opts =>
{
    opts.Connection("some connection string");

    // Set up table partitioning for the User document type
    opts.Schema.For<User>()
        .PartitionOn(x => x.Age, x =>
        {
            x.ByRange()
                .AddRange("young", 0, 20)
                .AddRange("twenties", 21, 29)
                .AddRange("thirties", 30, 39);
        });

    // Or use pg_partman to manage partitioning outside of Marten
    opts.Schema.For<User>()
        .PartitionOn(x => x.Age, x =>
        {
            x.ByExternallyManagedRangePartitions();

            // or instead with list

            x.ByExternallyManagedListPartitions();
        });

    // Or use PostgreSQL HASH partitioning and split the users over multiple tables
    opts.Schema.For<User>()
        .PartitionOn(x => x.UserName, x =>
        {
            x.ByHash("one", "two", "three");
        });

    opts.Schema.For<Issue>()
        .PartitionOn(x => x.Status, x =>
        {
            // There is a default partition for anything that doesn't fall into
            // these specific values
            x.ByList()
                .AddPartition("completed", "Completed")
                .AddPartition("new", "New");
        });

});

To use the “hot/cold” storage on soft-deleted documents, you have this new option:

var store = DocumentStore.For(opts =>
{
    opts.Connection("some connection string");

    // Opt into partitioning for one document type
    opts.Schema.For<User>().SoftDeletedWithPartitioning();

    // Opt into partitioning and an index for one document type
    opts.Schema.For<User>().SoftDeletedWithPartitioningAndIndex();

    // Opt into partitioning for all soft-deleted documents
    opts.Policies.AllDocumentsSoftDeletedWithPartitioning();
});

And to partition “conjoined” tenancy documents by their tenant id, you have this feature:

storeOptions.Policies.AllDocumentsAreMultiTenantedWithPartitioning(x =>
{
    // Selectively by LIST partitioning
    x.ByList()
        // Adding explicit table partitions for specific tenant ids
        .AddPartition("t1", "T1")
        .AddPartition("t2", "T2");

    // OR Use LIST partitioning, but allow the partition tables to be
    // controlled outside of Marten by something like pg_partman
    // https://github.com/pgpartman/pg_partman
    x.ByExternallyManagedListPartitions();

    // OR Just spread out the tenant data by tenant id through
    // HASH partitioning
    // This is using three different partitions with the supplied
    // suffix names
    x.ByHash("one", "two", "three");

    // OR Partition by tenant id based on ranges of tenant id values
    x.ByRange()
        .AddRange("north_america", "na", "nazzzzzzzzzz")
        .AddRange("asia", "a", "azzzzzzzz");

    // OR use RANGE partitioning with the actual partitions managed
    // externally
    x.ByExternallyManagedRangePartitions();
});

Summary

Your mileage will vary of course depending on how big your database is and how you really query the database, but at least in some common cases, the Marten community is pretty excited for the potential of table partitioning to improve Marten performance and scalability.

Marten 7.25 is Better, Faster, Stronger

Just a reminder, JasperFx Software offers support contracts and consulting services to help you get the most out of the “Critter Stack” tools (Marten and Wolverine). If you’re building server side applications on .NET, the Critter Stack is the most feature rich tool set for Event Sourcing and Event Driven Architectures around.

The theme of the last couple of months for me and the Marten community has been a lot of focus on improving Marten’s event sourcing feature set to be able to reliably handle very large data loads. With that being said, Marten 7.25 was released today with a huge amount of improvements around its performance, scalability, and reliability under very heavy loads (we’re talking about databases with hundreds of millions of events).

Before I get into the details, there’s a lot of thanks and credit to go around:

  • Core team member JT made several changes to reduce the amount of object allocations that Marten does at runtime in SQL generation — and basically every operation it does involves SQL generation
  • Ben Edwards contributed several ideas, important feedback, and some optimization pull requests toward this release
  • Babu made some improvements to our CI pipeline that made it a lot easier for me to troubleshoot the work I was doing
  • a-shtifanov-laya did some important load testing harness work that helped quite a bit to validate the work in this release
  • Urbancsik Gergely did a lot of performance and load testing with Marten that helped tremendously
  • And I’ll be giving some personal thanks to a couple JasperFx clients who enabled me to spend so much time on this effort

And now, the highlights for event store performance, scalability, and reliability improvements — most of which are “opt in” configuration items so as to not disturb existing users:

  • The new “Quick Append” option is completely usable and appears from testing to be about 2X as fast as the V4-V7 “Rich” appending process (there’s a minimal configuration sketch after this list). More than that, opting into the quick append mechanism appears to eliminate the event “skipping” problem with asynchronous projections or event subscriptions that some people have experienced under very heavy loads. Lastly, I originally took on this work because I think it will alleviate issues that some people run into with concurrent operations trying to append events to the same event streams
  • Marten can create a Hot/Cold storage mechanism around its event store by leveraging PostgreSQL native table partitioning. There’s work on the user’s part to mark event streams as archived for this to matter, but this is potentially a huge win for Marten scalability. A later Marten release will shortly add partitioning support to Marten document tables
  • There are several optimizations inside even the classic, “rich” event appending that reduce the number of network round trips happening at runtime, and that’s a good thing because network round trips are evil!
  • There’s some further optimization to the FetchForWriting() API that I heavily recommend for command handler usage that is documented here.
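
As promised in the first bullet above, here’s a minimal configuration sketch for opting into the quick append mode. EventAppendMode is Marten’s setting for this, but treat the exact usage as an assumption to verify against the Marten documentation:

var store = DocumentStore.For(opts =>
{
    opts.Connection("some connection string");

    // Opt into the new, faster event appending introduced in 7.25
    // instead of the default "Rich" mode carried over from V4-V7
    opts.Events.AppendMode = EventAppendMode.Quick;
});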

Outside of the event store improvements, Marten also got a new “Specification” alternative called “query plans” for reusable query logic for when Marten’s compiled query feature won’t work. The goal with this feature is to help a JasperFx client migrate off of Clean Architecture style repository wrapper abstractions in a way that doesn’t cause code duplication, while also setting them up to utilize Marten’s batch query feature for much more performant code.

Summary

I’m still digging out from a very good family vacation, but man, getting this stuff out feels really good. The Marten community is very vibrant right now, with a lot of community engagement that’s driving the tool’s capabilities into much more serious system territory. The “hot/cold storage” feature that just went in has been in the Marten backlog since 2017, and I’m thrilled to finally see that make it in.

Network Round Trips are Evil

As Houston gets drenched by Hurricane Beryl as I write this, I’m reminded of a formative set of continuing education courses I took when I was living in Houston in the late 90’s and plotting my formal move into software development. Whatever we learned about VB6 in those MSDN classes is long, long since obsolete, but one pithy saying from one of our instructors (who went on to become a Marten user and contributor!) stuck with me all these years later:

Network round trips are evil

John Cavnar-Johnson

His point then, and my point now quite frequently working with JasperFx Software clients, is that round trips between browsers to backend web servers or between application servers and the database need to be treated as expensive operations and some level of request, query, or command batching is often a very valuable optimization in systems design.

Consider my family’s current kitchen predicament. The very expensive, original refrigerator from our 20 year old house finally gave up the ghost, and we’ve had it completely removed while we wait on a different one to be delivered. Fortunately, we have a second refrigerator in the garage. When cooking now though, it’s suddenly a lot more time consuming to go to the refrigerator for an ingredient, since I can’t just turn around and grab something the way I could when the kitchen refrigerator was just a step away. Now that we have to walk across the house from the kitchen to the garage to get anything from the other refrigerator, it’s become very helpful to grab as many things as you can at one time so you’re not constantly running back and forth.

While this issue certainly arises from user interfaces or browser applications making a series of little requests to a backing server, I’m going to focus on database access for the rest of this post. Using a simple example from Marten usage, consider this code where I’m just creating five little documents and persisting them to a database:


    public static async Task storing_many(IDocumentSession session)
    {
        var user1 = new User { FirstName = "Magic", LastName = "Johnson" };
        var user2 = new User { FirstName = "James", LastName = "Worthy" };
        var user3 = new User { FirstName = "Michael", LastName = "Cooper" };
        var user4 = new User { FirstName = "Mychal", LastName = "Thompson" };
        var user5 = new User { FirstName = "Kurt", LastName = "Rambis" };

        session.Store(user1);
        session.Store(user2);
        session.Store(user3);
        session.Store(user4);
        session.Store(user5);

        // Marten will *only* make a single database request here that
        // bundles up "upsert" statements for all five users added above
        await session.SaveChangesAsync();
    }

In the code above, Marten is only issuing a single batched command to the backing database that performs all five “upsert” operations in one network round trip. We were very performance conscious in the very early days of Marten development and did quite a bit of experimentation with different options for JSON serialization or how exactly to write SQL that queried inside of JSONB or even table structure. Consistently and unsurprisingly though, the biggest jump in performance was when we introduced command batching to reduce the number of network round trips between code using Marten and the backing PostgreSQL database. That early performance testing also led us to early investments in Marten’s batch querying support and the Include() query functionality that allows Marten users to fetch related data with fewer network hops to the database.
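
As a quick illustration of the batch querying mentioned above, here’s a hedged sketch using the User type from the previous sample. CreateBatchQuery() and Execute() are Marten APIs, but treat the exact shape here as illustrative rather than definitive:

public static async Task load_two_users_in_one_round_trip(
    IQuerySession session, Guid firstId, Guid secondId)
{
    var batch = session.CreateBatchQuery();

    // Nothing hits the database yet, these calls just queue up work
    // and hand back tasks that will be completed later
    var first = batch.Load<User>(firstId);
    var second = batch.Load<User>(secondId);

    // One network round trip executes everything queued up above
    await batch.Execute();

    // Now the individual tasks can be awaited to get at the results
    var user1 = await first;
    var user2 = await second;
}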

Just based on my own experience, here are two trends I see about interacting with databases in real world systems:

  1. There’s a huge performance gain to be made by finding ways to batch database queries
  2. It’s very common for systems in the real world to suffer from performance problems that can at least partially be traced to unnecessary chattiness between an application and its backing database(s)

At a guess, I think the underlying reasons for the chattiness problem are something like:

  • Developers who just aren’t aware of the expense of network round trips or aren’t aware of how to utilize any kind of database query batching to reduce the problems
  • Wrapper abstractions around the raw database persistence tooling that hides more powerful APIs that might alleviate the chattiness problem
  • Wrapper abstractions that encourage a pattern of only loading data by keys one row/object/document at a time
  • Wrapper abstractions around the raw database persistence that discourage developers from learning more about the underlying persistence tooling they’re using. Don’t underestimate how common that problem is. And I’ve absolutely been guilty of causing that issue as a younger “architect” in the past who created those abstractions.
  • Complicated architectural layering that can make it quite difficult to easily reason about the cause and effect between system inputs and the database queries that those inputs spawn. Big call stacks of a controller calling a mediator tool that calls one service that calls other services that call different repository abstractions that all make database queries is a common source of chattiness because it’s hard to even see where all the chattiness is coming from by reading the code.

As you might know if you’ve stumbled across any of my writings or conference talks from the last couple years, I’m not a big fan of typical Clean/Onion Architecture approaches. I think these approaches introduce a lot of ceremony code into the mix that I think causes more harm overall than whatever benefits they bring.

Here’s an example that’s somewhat contrived, but also quite typical in terms of the performance issues I do see in real life systems. Let’s say you’ve got a command handler for a ShipOrder command that will need to access data for both a related Invoice and Order entity that could look something like this:

public class ShipOrderHandler
{
    private readonly IInvoiceRepository _invoiceRepository;
    private readonly IOrderRepository _orderRepository;
    private readonly IUnitOfWork _unitOfWork;

    public ShipOrderHandler(
        IInvoiceRepository invoiceRepository,
        IOrderRepository orderRepository,
        IUnitOfWork unitOfWork)
    {
        _invoiceRepository = invoiceRepository;
        _orderRepository = orderRepository;
        _unitOfWork = unitOfWork;
    }

    public async Task Handle(ShipOrder command)
    {
        // Making one round trip to get an Invoice
        var invoice = await _invoiceRepository.LoadAsync(command.InvoiceId);

        // Then a second round trip using the results of the first pass
        // to get follow up data
        var order = await _orderRepository.LoadAsync(invoice.OrderId);

        // do some logic that changes the state of one or both of these entities

        // Commit the transaction that spans the two entities
        await _unitOfWork.SaveChangesAsync();
    }
}

The code is pretty simple in this case, but we’re still making more database round trips than we absolutely have to — and real enterprise systems can get much, much bigger than my little contrived example and incur a lot more overhead because of the chattiness problem that the repository abstractions naturally let in.

Let’s try this functionality again, but this time depending directly on the raw persistence tooling (Marten’s IDocumentSession) and using a Wolverine-style command handler to boot to further reduce the code noise:

public static class ShipOrderHandler
{
    // We're still keeping some separation of concerns to separate the infrastructure from the business
    // logic, but Wolverine lets us do that just through separate functions instead of having to use
    // all the limiting repository abstractions
    public static async Task<(Order, Invoice)> LoadAsync(IDocumentSession session, ShipOrder command)
    {
        // This is important (I think:)), the admittedly complicated
        // Marten usage below fetches both the invoice and its related order in a 
        // single network round trip to the database and can lead to substantially
        // better system performance
        Order order = null;
        var invoice = await session
            .Query<Invoice>()
            .Include<Order>(i => i.OrderId, o => order = o)
            .Where(x => x.Id == command.InvoiceId)
            .FirstOrDefaultAsync();

        return (order, invoice);
    }
    
    public static void Handle(ShipOrder command, Order order, Invoice invoice)
    {
        // do some logic that changes the state of one or both of these entities
        // I'm assuming that Wolverine is handling the transaction boundaries through
        // middleware here
    }
}

In the second code sample, we’ve been able to go right at the Marten tooling and take advantage of its more advanced functionality to batch up data fetching for better performance, something that wasn’t easily possible when we were putting repository abstractions between our command handler and the underlying persistence tooling. Moreover, we can more easily reason about the database operations that happen as a result of our command, which can be somewhat obfuscated by more layers and more code separation as is common in Onion/Clean/Ports and Adapters style approaches.
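
And when Include() doesn’t fit, Marten’s batched query API is another way to collapse several reads into one round trip. Here’s a rough sketch assuming you already have both identities in hand, which the ShipOrder command above doesn’t actually give you:

public static class ShipOrderBatchedHandler
{
    // Purely a sketch: assumes the caller already knows both identities,
    // unlike the ShipOrder command shown earlier
    public static async Task<(Order, Invoice)> LoadAsync(IDocumentSession session, Guid invoiceId, Guid orderId)
    {
        // Everything registered on the batch goes to the database
        // in a single network round trip when Execute() is called
        var batch = session.CreateBatchQuery();
        var invoiceTask = batch.Load<Invoice>(invoiceId);
        var orderTask = batch.Load<Order>(orderId);

        await batch.Execute();

        return (await orderTask, await invoiceTask);
    }
}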

It’s not just repository abstractions that cause problems; sometimes it’s seemingly helpful little extension methods that turn out to be the source of chattiness. Here’s a pair of helper extension methods around Marten’s event store functionality that let you start a new event stream or append a single event to an existing stream in a single line of code:

public static class DocumentSessionExtensions
{
    public static Task Add<T>(this IDocumentSession documentSession, Guid id, object @event, CancellationToken ct)
        where T : class
    {
        documentSession.Events.StartStream<T>(id, @event);
        return documentSession.SaveChangesAsync(token: ct);
    }

    public static Task GetAndUpdate<T>(
        this IDocumentSession documentSession,
        Guid id,
        int version,
        
        // If we're being finicky about performance here, these kinds of inline
        // lambdas are NOT cheap at runtime and I'm recommending against
        // continuation passing style APIs in application hot paths for
        // my clients
        Func<T, object> handle,
        CancellationToken ct
    ) where T : class =>
        documentSession.Events.WriteToAggregate<T>(id, version, stream =>
            stream.AppendOne(handle(stream.Aggregate)), ct);
}

Fine, right? These potentially make your code cleaner and simpler, but of course they’re also potentially harmful. Here’s an example using these two extension methods that’s similar to some code I saw in the wild last week:

public static class Handler
{
    public static async Task Handle(Command command, IDocumentSession session, CancellationToken token)
    {
        var id = CombGuidIdGeneration.NewGuid();
        
        // One round trip
        await session.Add<Aggregate>(id, new FirstEvent(), token);

        if (command.SomeCondition)
        {
            // This actually makes a pair of round trips, one to fetch the current state
            // of the Aggregate compiled from the first event appended above, then
            // a second to append the SecondEvent
            await session.GetAndUpdate<Aggregate>(id, 1, _ => new SecondEvent(), token);
        }
    }
}

I got involved with this code in reaction to some load testing that was producing disappointing results. When I was pulled in, I saw the extra round trips that had snuck in because of the convenience extension methods they had been using, and suggested a change to something like this (but using Wolverine’s aggregate handler workflow, which simplified the code even more than what’s shown here):

public static class Handler
{
    public static async Task Handle(Command command, IDocumentSession session, CancellationToken token)
    {
        var events = determineEvents(command).ToArray();
        
        var id = CombGuidIdGeneration.NewGuid();
        session.Events.StartStream<Aggregate>(id, events);

        await session.SaveChangesAsync(token);
    }

    // This was isolated so you can easily unit test the business
    // logic that "decides" what events to append
    public static IEnumerable<object> determineEvents(Command command)
    {
        yield return new FirstEvent();
        if (command.SomeCondition)
        {
            yield return new SecondEvent();
        }
    }
}

The code above cut down the number of network round trips to the database and greatly improved the results of the load testing.
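
For reference, here’s a hedged sketch of what a lower ceremony Wolverine version of this handler could look like using Wolverine’s Marten side effects (not the aggregate handler workflow mentioned above, and not necessarily the exact code we shipped):

public static class Handler
{
    // Returning the IStartStream side effect tells Wolverine to start the
    // new event stream in Marten and commit it within the transactional
    // middleware, so the handler never touches IDocumentSession directly
    public static IStartStream Handle(Command command)
    {
        var events = determineEvents(command).ToArray();
        return MartenOps.StartStream<Aggregate>(CombGuidIdGeneration.NewGuid(), events);
    }

    // The business logic that "decides" what events to append stays
    // isolated and easily unit testable, exactly as before
    public static IEnumerable<object> determineEvents(Command command)
    {
        yield return new FirstEvent();
        if (command.SomeCondition)
        {
            yield return new SecondEvent();
        }
    }
}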

Summary

If performance is a concern in your system (it isn’t always), you probably need to be cognizant of how chatty your application is in its communication with the backing database, or with any other remote system or infrastructure your application interacts with at runtime.

Personally, I think that higher ceremony code structures make it much more likely that you’ll incur issues with database chattiness: first by obfuscating your code so you don’t even easily recognize where the chattiness is, and second by wrapping simplifying abstractions around your database persistence tooling that cut off access to the more advanced functionality for query batching.

And of course, both Wolverine and Marten put a heavy emphasis on reducing code ceremony and code noise in general, because I personally think that’s very valuable in helping teams succeed over time with software systems in the wild. My theory of the case is that, even at the cost of a little bit of “magic”, simply reducing the amount of code you have to wade through in existing systems will make those systems easier to maintain and troubleshoot over time.

And on that note, I’m basically on vacation for the next week, and you can address your complaints about my harsh criticism of Clean/Onion Architectures to the ether:-)

The “Critter Stack” Just Leveled Up on Modular Monolith Support

The goal for the “Critter Stack” tools is to be the absolute best set of tools for building server side .NET applications, and especially for any usage of Event Driven Architecture approaches. To go even farther, I would like there to be a day where organizations purposely choose the .NET ecosystem just because of the benefits that the “Critter Stack” provides over other options. But for now, that’s the journey we’re on. This post demonstrates an important new feature that I think fills in a huge capability gap that has long bothered me.

And as always, JasperFx Software is happy to work with any “Critter Stack” users through either support contracts or consulting engagements to help you wring the most value out of our tools and help you succeed with what you’re building.

I recently wrote some posts about the whole “Modular Monolith” architecture approach:

  1. Thoughts on “Modular Monoliths”
  2. Actually Talking about Modular Monoliths
  3. Modular Monoliths and the “Critter Stack”

Marten already has strong support for modular monoliths through its “separate store” functionality. In the last post though, I lamented that all the whizz bang integration between Wolverine and Marten (the aggregate handler workflow, Wolverine’s transactional outbox, Marten side effects, the new event subscription model) that makes the full “Critter Stack” such a productive toolset for Event Sourcing was, alas, not available in conjunction with Marten’s separate store model.

This week I’m helping a JasperFx client who has some complicated multi-tenancy requirements. In one of their services they have some types of event streams that need to use “conjoined multi-tenancy”, but at least one type of event stream (and related aggregate) that is global across all tenants. Marten event stores are either multi-tenanted or they’re not, with no mixing and matching. It occurred to me that we could solve this issue by putting the one type of global event streams in a separate Marten store. Even though the second Marten store will still target the exact same PostgreSQL database (but in a different schema), we can give this second schema a different configuration to accommodate the different tenancy rules. Moreover, this would even be a good way to improve performance and scalability of their service by effectively sharding the events and streams tables (smaller tables generally mean better performance).
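
To make that concrete, here’s a rough sketch of the kind of configuration I mean, assuming a typical WebApplicationBuilder named builder, a shared connectionString, and a hypothetical IGlobalStore marker interface for the ancillary store:

// A hypothetical marker interface for the separate, non-tenanted store
public interface IGlobalStore : IDocumentStore;

// The main store: conjoined multi-tenancy, everything in the "tenanted" schema
builder.Services.AddMarten(opts =>
{
    opts.Connection(connectionString);
    opts.DatabaseSchemaName = "tenanted";

    opts.Events.TenancyStyle = TenancyStyle.Conjoined;
    opts.Policies.AllDocumentsAreMultiTenanted();
});

// The ancillary store: same PostgreSQL database, its own "global" schema,
// and no multi-tenancy at all for the globally shared streams
builder.Services.AddMartenStore<IGlobalStore>(opts =>
{
    opts.Connection(connectionString);
    opts.DatabaseSchemaName = "global";
});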

At the same time, I’m also helping them introduce Wolverine message handlers, and I really wanted to be able to use the aggregate handler workflow for commands that spawn new Marten events (effectively the Critter Stack version of the “Decider” pattern, but lower ceremony). I finally took some time, stumbled onto a workable approach, and added far better support for modular monolith architectures with the Wolverine 2.13.0 release that hit today.

Specifically, Wolverine finally got some support for full integration with ancillary document and event stores from Marten in the same application.

To see a sneak peek, let’s say that you have two additional Marten stores for your application like these two:

public interface IPlayerStore : IDocumentStore;
public interface IThingStore : IDocumentStore;

You can now bootstrap a Marten + Wolverine application (using the WolverineFx.Marten NuGet dependency) like so:

theHost = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Services.AddMarten(Servers.PostgresConnectionString).IntegrateWithWolverine();

        opts.Policies.AutoApplyTransactions();
        opts.Durability.Mode = DurabilityMode.Solo;

        opts.Services.AddMartenStore<IPlayerStore>(m =>
        {
            m.Connection(Servers.PostgresConnectionString);
            m.DatabaseSchemaName = "players";
        })
            // THIS AND BELOW IS WHAT IS NEW FOR WOLVERINE 2.13
            .IntegrateWithWolverine()
            
            // Add a subscription
            .SubscribeToEvents(new ColorsSubscription())
            
            // Forward events to wolverine handlers
            .PublishEventsToWolverine("PlayerEvents", x =>
            {
                x.PublishEvent<ColorsUpdated>();
            });
        
        // Look at that, it even works with Marten multi-tenancy through separate databases!
        opts.Services.AddMartenStore<IThingStore>(m =>
        {
            m.MultiTenantedDatabases(tenancy =>
            {
                tenancy.AddSingleTenantDatabase(tenant1ConnectionString, "tenant1");
                tenancy.AddSingleTenantDatabase(tenant2ConnectionString, "tenant2");
                tenancy.AddSingleTenantDatabase(tenant3ConnectionString, "tenant3");
            });
            m.DatabaseSchemaName = "things";
        }).IntegrateWithWolverine(masterDatabaseConnectionString:Servers.PostgresConnectionString);

        opts.Services.AddResourceSetupOnStartup();
    }).StartAsync();

Now, moving to message handlers or HTTP endpoints, you will have to explicitly tag either the containing class or individual messages with the [MartenStore(store type)] attribute like this simple example below:

// This will use a Marten session from the
// IPlayerStore rather than the main IDocumentStore
[MartenStore(typeof(IPlayerStore))]
public static class PlayerMessageHandler
{
    // Using a Marten side effect just like normal
    public static IMartenOp Handle(PlayerMessage message)
    {
        return MartenOps.Store(new Player{Id = message.Id});
    }
}

Boom! Even that minor sample is using transactional middleware targeting Marten and is able to work with the separate IPlayerStore. This new integration includes:

  • Transactional outbox support for all configured Marten stores
  • Transactional middleware
  • The “aggregate handler workflow”
  • Marten side effects
  • Subscriptions to Marten events
  • Multi-tenancy, both “conjoined” Marten multi-tenancy and multi-tenancy through separate databases

For more information, see the documentation on this new feature.
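
Of the list above, the aggregate handler workflow is probably the most interesting piece, so here’s a hedged sketch of roughly what that could look like against the ancillary IPlayerStore. The RenamePlayer command, the PlayerRenamed event, and the Name property on Player are all made up for illustration:

public record RenamePlayer(Guid PlayerId, string Name);
public record PlayerRenamed(string Name);

// The [MartenStore] attribute points the aggregate handler workflow
// at the ancillary IPlayerStore instead of the main IDocumentStore
[MartenStore(typeof(IPlayerStore))]
public static class RenamePlayerHandler
{
    // Wolverine loads the Player aggregate for the stream identified by
    // RenamePlayer.PlayerId, and any events returned here are appended
    // to that stream and committed by the transactional middleware
    [AggregateHandler]
    public static IEnumerable<object> Handle(RenamePlayer command, Player player)
    {
        if (player.Name != command.Name)
        {
            yield return new PlayerRenamed(command.Name);
        }
    }
}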

Summary

I’m maybe a little too excited about a feature that most users will never touch, but for those of you who do need this, the “Critter Stack” now has first class modular monolith support across a wide range of the features that make it a desirable platform in the first place.

Sneak Peek of Strong Typed Identifiers in Marten

If you really need to have strong typed identifier support in Marten right now, here’s the long standing workaround.

Some kind of support for “strong typed identifiers” has long been a feature request for Marten from our community. I’ve even been told by a few folks that they wouldn’t consider using Marten until it did have this support. I’ve admittedly been resistant to adding this feature strictly out of (a very well founded) fear that tackling that would be a massive time sink that didn’t really improve the tool in any great way (I’m hoping to be wrong about that).

My reticence about this aside, it came up a couple times in the past week from JasperFx Software customers, and that magically ratchets up the priority quite a bit. That all being said, here’s a little preview of some ongoing work for the next Marten feature release.

Let’s say that you’re using the Vogen library for value types and want to use this custom type for the identity of an Invoice document in Marten:

[ValueObject<Guid>]
public partial struct InvoiceId;

public class Invoice
{
    // Marten will use this for the identifier
    // of the Invoice document
    public InvoiceId? Id { get; set; }
    public string Name { get; set; }
}

Jumping to some already passing tests, Marten can assign an identity to a new document if one is missing, just like it would today for Guid identities:


    [Fact]
    public void store_document_will_assign_the_identity()
    {
        var invoice = new Invoice();
        theSession.Store(invoice);

        // Marten sees that there is no existing identity,
        // so it assigns a new identity 
        invoice.Id.ShouldNotBeNull();
        invoice.Id.Value.Value.ShouldNotBe(Guid.Empty);
    }

Because this actually does matter for database performance, Marten is using a sequential Guid inside of the custom InvoiceId type. Following Marten’s desire for an “it just works” development experience, Marten is able to “know” how to work with the InvoiceId type generated by Vogen without having to require any kind of explicit mapping or mandatory interfaces on the identity type — which I thought was pretty important to keep your domain code from being coupled to Marten.
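
Presumably you could also assign the identity yourself with the From() factory that Vogen generates rather than letting Marten do it, something like this sketch (not one of the passing tests shown here):

// Vogen generates the From() factory on InvoiceId
var invoice = new Invoice
{
    Id = InvoiceId.From(Guid.NewGuid()),
    Name = "explicitly assigned identity"
};

theSession.Store(invoice);
await theSession.SaveChangesAsync();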

Moving to basic use cases, here’s a passing test for storing and loading a new document from the database:

    [Fact]
    public async Task load_document()
    {
        var invoice = new Invoice{Name = Guid.NewGuid().ToString()};
        theSession.Store(invoice);

        await theSession.SaveChangesAsync();

        (await theSession.LoadAsync<Invoice>(invoice.Id))
            .Name.ShouldBe(invoice.Name);
    }

and a look at how the strong typed identifiers can play in LINQ expressions so far:

    [Fact]
    public async Task use_in_LINQ_where_clause()
    {
        var invoice = new Invoice{Name = Guid.NewGuid().ToString()};
        theSession.Store(invoice);

        await theSession.SaveChangesAsync();

        var loaded = await theSession.Query<Invoice>().FirstOrDefaultAsync(x => x.Id == invoice.Id);

        loaded
            .Name.ShouldBe(invoice.Name);
    }

    [Fact]
    public async Task load_many()
    {
        var invoice1 = new Invoice{Name = Guid.NewGuid().ToString()};
        var invoice2 = new Invoice{Name = Guid.NewGuid().ToString()};
        var invoice3 = new Invoice{Name = Guid.NewGuid().ToString()};
        theSession.Store(invoice1, invoice2, invoice3);

        await theSession.SaveChangesAsync();

        var results = await theSession
            .Query<Invoice>()
            .Where(x => x.Id.IsOneOf(invoice1.Id, invoice2.Id, invoice3.Id))
            .ToListAsync();
        
        results.Count.ShouldBe(3);
    }

    [Fact]
    public async Task use_in_LINQ_order_clause()
    {
        var invoice = new Invoice{Name = Guid.NewGuid().ToString()};
        theSession.Store(invoice);

        await theSession.SaveChangesAsync();

        var loaded = await theSession.Query<Invoice>().OrderBy(x => x.Id).Take(3).ToListAsync();
    }

There’s a world of use case permutations yet to go (bulk writing, numeric identities with HiLo generation, Include() queries, more LINQ scenarios, magically adding JSON serialization converters, using StrongTypedId as well), but I think we’ve got a solid start on a long-asked-for feature that I’ve previously been leery of building out.