Improved Declarative Persistence in Wolverine

Continuing a consistent theme about how Wolverine is becoming the antidote to high ceremony Clean/Onion Architecture approaches, Wolverine 4.8 added some significant improvements to its declarative persistence support (partially after seeing how a recent JasperFx Software client was running into a bit of repetitive code).

A pattern I try to encourage (and one that many Wolverine users like) is to make the main method of a message handler or an HTTP endpoint the “happy path” after validation and even data lookups, so that the method can be a pure function that’s mostly concerned with business or workflow logic. Wolverine can do this for you through its “compound handler” support that gets you to a low ceremony flavor of Railway Programming.

With all that out of the way, I saw a client frequently writing code something like this endpoint, which needed to process a command referencing one or more entities or event streams in their system:

public record ApproveIncident(Guid Id);

public class ApproveIncidentEndpoint
{
    // Try to load the referenced incident
    public static async Task<(Incident?, ProblemDetails)> LoadAsync(
        
        // Say this is the request body, which we can *also* use in
        // LoadAsync()
        ApproveIncident command, 
        
        // Pulling in Marten
        IDocumentSession session,
        CancellationToken cancellationToken)
    {
        var incident = await session.LoadAsync<Incident>(command.Id, cancellationToken);
        if (incident == null)
        {
            return (null, new ProblemDetails { Detail = $"Incident {command.Id} cannot be found", Status = 400 });
        }

        return (incident, WolverineContinue.NoProblems);
    }

    [WolverinePost("/api/incidents/approve")]
    public static SomeResponse Post(ApproveIncident command, Incident incident)
    {
        // actually do stuff knowing that the Incident is valid
        return new SomeResponse();
    }
}

I’d ask you to mostly pay attention to the LoadAsync() method, and imagine copying and pasting it dozens of times across a system. And sure, you could go back to returning IResult as a continuation from the HTTP endpoint method above, but that moves clutter back into your HTTP method and would add more manual work to mark up the method with attributes for OpenAPI metadata. Or we could improve the OpenAPI metadata generation by returning something like Task<Results<Ok<SomeResponse>, ProblemHttpResult>>, but c’mon, that’s an absolute eyesore that detracts from the readability of the code.

Instead, let’s use the newly enhanced version of Wolverine’s [Entity] attribute to simplify the code above and still get OpenAPI metadata generation that reflects both the 200 SomeResponse happy path and 400 ProblemDetails with the correct content type. That would look like this:

    [WolverinePost("/api/incidents/approve")]
    public static SomeResponse Post(
        // The request body. Wolverine doesn't require [FromBody], but it wouldn't hurt
        ApproveIncident command, 
        
        [Entity(OnMissing = OnMissing.ProblemDetailsWith400, MissingMessage = "Incident {0} cannot be found")]
        Incident incident)
    {
        // actually do stuff knowing that the Incident is valid
        return new SomeResponse();
    }

Behaviorally, at runtime that endpoint will try to load the Incident entity from whatever persistence tooling is configured for the application (Marten in the tests) using the “Id” property of the ApproveIncident object deserialized from the HTTP request body. If the data cannot be found, the HTTP request ends with a 400 status code and a ProblemDetails response with the configured message up above. If the Incident can be found, it’s happily passed along to the main endpoint method.

Not that every endpoint or message handler is really this simple, but plenty of times you’ll just be changing a property on the incident and persisting it. The endpoint can *still* be mostly a pure function with the existing persistence helpers in Wolverine like so:

    [WolverinePost("/api/incidents/approve")]
    public static (SomeResponse, IStorageAction<Incident>) Post(
        // The request body. Wolverine doesn't require [FromBody], but it wouldn't hurt
        ApproveIncident command, 
        
        [Entity(OnMissing = OnMissing.ProblemDetailsWith400, MissingMessage = "Incident {0} cannot be found")]
        Incident incident)
    {
        incident.Approved = true;
        
        // actually do stuff knowing that the Incident is valid
        return (new SomeResponse(), Storage.Update(incident));
    }

Here are some things I’d like you to know about that [Entity] attribute up above and how it’s going to work out in real usage:

  • There is some default conventional magic going on to “decide” how to get the identity value for the entity being loaded (“IncidentId” or “Id” on the command type or request body type, then the same value in routing values for HTTP endpoints or declared query string values). This can be explicitly configured on the attribute with something like [Entity(nameof(ApproveIncident.Id))]
  • Every attribute type that I’m mentioning in this post that can be applied to method parameters supports the same identity logic as I explained in the previous bullet
  • Before Wolverine 4.8, the “on missing” behavior was to simply set a 404 status code in HTTP or log that required data was missing in message handlers and quit. Wolverine 4.8 adds the ability to control the “on missing” behavior
  • This new “on missing” behavior is available on the older [Document] attribute in Wolverine.Http.Marten, and [Document] is now a direct subclass of [Entity] that can be used with either message handlers or HTTP endpoints
  • The existing [AggregateHandler] and [Aggregate] attributes that are part of the Wolverine + Marten “aggregate handler workflow” (the “C” in CQRS) now support this “on missing” behavior, but it’s “opt in,” meaning that you would have to use [Aggregate(Required = true)] to get the gating logic. We had to make that required check opt in to avoid breaking existing behavior when folks upgraded.
  • The lighter weight [ReadAggregate] in the Marten integration also standardizes on this “OnMissing” behavior
  • Because of the confusion I was seeing from some users between [Aggregate], which is meant for writing events and is a little heavier runtime-wise, and [ReadAggregate], there’s a new [WriteAggregate] attribute with identical behavior to [Aggregate] that’s now available for message handlers as well. I think that [Aggregate] might get deprecated soon-ish to sidestep the potential confusion
  • [Entity] attribute usage is 100% supported for EF Core and RavenDb as well as Marten. Wolverine is even smart enough to select the correct DbContext type for the declared entity
  • If you coded with any of that [Entity] or Storage stuff and switched persistence tooling, your code should not have to change at all
  • There’s no runtime Reflection going on here. The usage of [Entity] is impacting Wolverine’s code generation around your message handler or HTTP endpoint methods.

The options so far for “OnMissing” behavior are these:

public enum OnMissing
{
    /// <summary>
    /// Default behavior. In a message handler, the execution will just stop after logging that the data was missing. In an HTTP
    /// endpoint the request will stop w/ an empty body and 404 status code
    /// </summary>
    Simple404,
    
    /// <summary>
    /// In a message handler, the execution will log that the required data is missing and stop execution. In an HTTP
    /// endpoint the request will stop w/ a 400 response and a ProblemDetails body describing the missing data
    /// </summary>
    ProblemDetailsWith400,
    
    /// <summary>
    /// In a message handler, the execution will log that the required data is missing and stop execution. In an HTTP
    /// endpoint the request will stop w/ a 404 status code response and a ProblemDetails body describing the missing data
    /// </summary>
    ProblemDetailsWith404,
    
    /// <summary>
    /// Throws a RequiredDataMissingException using the MissingMessage
    /// </summary>
    ThrowException
}

The Future

This new improvement to declarative data access is meant to be part of a larger effort to address some bigger use cases. Not every command or query involves just one single entity lookup or one single Marten event stream, so what do you do when there are multiple declarations for data lookups?

I’m not sure what everyone else’s experience is, but a leading cause of performance problems in the systems I’ve helped with over the past decade has been too much chattiness between the application servers and the database. The next step with the declarative data access is to have at least the Marten integration opt into using Marten’s batch querying mechanism, batching up requests into fewer network round trips any time there are multiple data lookups in a single HTTP endpoint or message handler.
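To make that concrete, here’s a minimal sketch of the existing Marten batch querying API that this integration would presumably lean on (the Incident and Customer types are stand-ins from samples elsewhere in this post):

public static async Task load_in_one_round_trip(IQuerySession session, Guid incidentId, Guid customerId)
{
    // Marten batches these queries into a single network round trip to PostgreSQL
    var batch = session.CreateBatchQuery();

    var incident = batch.Load<Incident>(incidentId);
    var customer = batch.Load<Customer>(customerId);

    // One round trip to the database for both queries
    await batch.Execute();

    // Both tasks have already completed by this point
    var theIncident = await incident;
    var theCustomer = await customer;
}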

The step after that is to also enroll our Marten integration for command handlers so that you can craft message handlers or HTTP endpoints that work against 2 or more event streams with strong consistency and transactional support while also leveraging the Marten batch querying for all the efficiency we can wring out of the tooling. I mostly want to see this behavior because I’ve seen clients who could actually use what I was just describing as a way to make their systems more efficient and remove some repetitive code.

I’ll also admit that I think this capability to have an alternative “aggregate handler workflow” that allows you to work efficiently with more than one event stream and/or projected aggregate at one time would put the Critter Stack ahead of the commercial tools pursuing “Dynamic Consistency Boundaries” with what I’ll be arguing is an easier to use alternative.

It’s already possible to work transactionally with multiple event streams at one time with strong consistency and both optimistic and exclusive version protections, but there’s opportunity for performance optimization here.

Summary

Pride goeth before destruction, and an haughty spirit before a fall.

Proverbs 16:18 in the King James version

With the quote above out of the way, let’s jump into some cocky salesmanship! My hope and vision for the Critter Stack is that it becomes the most effective tooling for building typical server side software systems. My personal vision and philosophy for making software development more productive and effective over time is to ruthlessly reduce repetitive code and eliminate code ceremony wherever possible. Our community’s take is that we can achieve improved results compared to more typical Clean/Onion/Hexagonal Architecture codebases by compressing and compacting code down without ever sacrificing performance, resiliency, or testability.

The declarative persistence helpers in this article are, I believe, a nice example of the evolving “Critter Stack Way.”

Metadata Tracking Improvements in Marten

We just released a new batch of improvements in Marten 8.4 that improve Marten’s already strong support for tracking metadata on event persistence.

Override Event Metadata on Individual Events

This work was done at the behest of a JasperFx Software client. They only needed to vary header values between events, but while the hood was popped up on event metadata, we finally addressed the long-awaited ability to explicitly set event timestamps.

First, we finally have the ability to allow users to modify metadata on an event-by-event basis, including the event timestamp. This has been a long-standing request from many folks, either to facilitate testing scenarios or to enable easier data importing from other databases or event stores. And especially now that Marten is arguably the best event sourcing solution for .NET, folks really should have a viable path to import data from external sources.

You can do that either by grabbing the IEvent wrapper and modifying the timestamp, causation, correlation, event id (valuable for tracing event data back to external systems), or headers like this sample:

public static async Task override_metadata(IDocumentSession session)
{
    var started = new QuestStarted { Name = "Find the Orb" };

    var joined = new MembersJoined
    {
        Day = 2, Location = "Faldor's Farm", Members = new string[] { "Garion", "Polgara", "Belgarath" }
    };

    var slayed1 = new MonsterSlayed { Name = "Troll" };
    var slayed2 = new MonsterSlayed { Name = "Dragon" };

    var joined2 = new MembersJoined { Day = 5, Location = "Sendaria", Members = new string[] { "Silk", "Barak" } };

    var action = session.Events
        .StartStream<QuestParty>(started, joined, slayed1, slayed2, joined2);

    // I'm grabbing the IEvent wrapper for the first event in the action
    var wrapper = action.Events[0];
    wrapper.Timestamp = DateTimeOffset.UtcNow.Subtract(1.Hours());
    wrapper.SetHeader("category", "important");
    wrapper.Id = Guid.NewGuid(); // Just showing that you *can* override this value
    wrapper.CausationId = wrapper.CorrelationId = Activity.Current?.Id;

    await session.SaveChangesAsync();
}

Or by appending an already wrapped IEvent as I’m showing here, along with some new convenience wrapper extension methods to make the mechanics a little more declarative:

public static async Task override_metadata2(IDocumentSession session)
{
    var started = new QuestStarted { Name = "Find the Orb" };

    var joined = new MembersJoined
    {
        Day = 2, Location = "Faldor's Farm", Members = new string[] { "Garion", "Polgara", "Belgarath" }
    };

    var slayed1 = new MonsterSlayed { Name = "Troll" };
    var slayed2 = new MonsterSlayed { Name = "Dragon" };

    var joined2 = new MembersJoined { Day = 5, Location = "Sendaria", Members = new string[] { "Silk", "Barak" } };

    // The result of this is an IEvent wrapper around the
    // started data with an overridden timestamp
    // and a value for the "color" header
    var wrapper = started.AsEvent()
        .AtTimestamp(DateTimeOffset.UtcNow.Subtract(1.Hours()))
        .WithHeader("color", "blue");

    session.Events
        .StartStream<QuestParty>(wrapper, joined, slayed1, slayed2, joined2);

    await session.SaveChangesAsync();
}

The second approach is going to be necessary if you are appending events with the FetchForWriting() API (and you should be within any kind of CQRS “write” handler).
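For completeness, here’s a minimal sketch of what that might look like with FetchForWriting(), reusing the quest types from above (treat the exact shape as an assumption rather than gospel):

public static async Task append_with_overridden_timestamp(IDocumentSession session, Guid streamId)
{
    // FetchForWriting gives us the current state plus the ability to append
    var stream = await session.Events.FetchForWriting<QuestParty>(streamId);

    var slain = new MonsterSlayed { Name = "Troll" };

    // Wrap the event data first so that the metadata can be overridden
    stream.AppendOne(slain.AsEvent()
        .AtTimestamp(DateTimeOffset.UtcNow.Subtract(1.Hours())));

    await session.SaveChangesAsync();
}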

There is of course a catch. If you use the “QuickAppend” option in Marten and want to be able to override the event timestamps, you’ll need this slightly different option instead:

var builder = Host.CreateApplicationBuilder();
builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));

    // This is important!
    opts.Events.AppendMode = EventAppendMode.QuickWithServerTimestamps;
});

To avoid causing database breaking changes when upgrading, the ability to override timestamps with the “QuickAppend” option required this new “opt in” setting, because it forces Marten to generate both its “glue” code and a database function a little differently.

Capturing the User Name on Persisted Events

These kinds of features have to be “opt in” so that we don’t cause database changes in a minor release when people upgrade. Having to worry about “opt in” or “opt out” mechanics and backwards compatibility is both the curse and enabler of long running software tool projects like Marten.

Another request from the backlog was to have first class tracking of the user name (or process name) on events, based on the current user of whatever operation appended them. Following along with the “opt in” support for tracking correlation and causation ids, we’ll first need to opt into storing the user name with events:

var builder = Host.CreateApplicationBuilder();
builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));

    opts.Events.MetadataConfig.UserNameEnabled = true;
});

And now we can apply the user name to persisted events something like this:

public static async Task Handle(StartInvoice command, IDocumentSession session, ClaimsPrincipal principal)
{
    // Marking the session as being modified by this active user
    session.LastModifiedBy = principal.Identity.Name;
    
    // Any events persisted by this session will be tagged with the current user
    // in the database
    session.Events.StartStream(new InvoiceStarted(command.Name, command.Amount));
    await session.SaveChangesAsync();
}

And while this should probably be used mostly for diagnostics, you can now query against the raw event data with LINQ for the user name (assuming that it’s captured, of course!) like this sample from our tests:

    [Theory]
    [InlineData(JasperFx.Events.EventAppendMode.Rich)]
    [InlineData(JasperFx.Events.EventAppendMode.Quick)]
    public async Task capture_user_name_information(EventAppendMode mode)
    {
        EventAppendMode = mode;
        var streamId = Guid.NewGuid();

        theSession.LastModifiedBy = "Larry Bird";

        // Just need a time that will be easy to assert on that is in the past
        var timestamp = (DateTimeOffset)DateTime.Today.Subtract(1.Hours()).ToUniversalTime();

        var action = theSession.Events.StartStream(streamId, new AEvent(), new BEvent(), new CEvent());
        action.Events[0].UserName = "Kevin McHale";

        await theSession.SaveChangesAsync();

        using var query = theStore.QuerySession();

        var events = await query.Events.FetchStreamAsync(streamId);

        events[0].UserName.ShouldBe("Kevin McHale");
        events[1].UserName.ShouldBe("Larry Bird");
        events[2].UserName.ShouldBe("Larry Bird");

        // Should write another test, but I'm doing it here!
        var celtics = await query.Events.QueryAllRawEvents().Where(x => x.UserName == "Larry Bird").ToListAsync();
        celtics.Count.ShouldBeGreaterThanOrEqualTo(2);
    }

Summary

Projects like Marten are never, ever completed, and we have no intentions of abandoning Marten anytime soon. The features above had been requested for quite a while, but didn’t make the cut for Marten 8.0. I’m happy to see them land now, and this could be the basis of long-awaited formal support for efficient data imports to Marten from other event stores.

And of course, if there’s something that Marten or Wolverine doesn’t do today that you need, please reach out to JasperFx Software and we can talk about an engagement to build out your features.

Example of Using Alba for HTTP Testing

Before Marten took off and we pivoted to using the “Critter Stack” naming motif, the original naming theme for the JasperFx OSS tool suite was some of the small towns near where I grew up in Southwest Missouri. Alba, MO is somewhat famous as the hometown of the Boyer brothers.

I’m taking a little time this week to build out some improvements to Wolverine’s declarative data access support based on some recent client work. As this work is largely targeted at Wolverine’s HTTP support, I’m heavily leveraging Alba to help test the HTTP behavior and I thought this work would make a great example of how Alba can help you more efficiently test HTTP API code in .NET.

Now, back to Wolverine and the current work I’m in the midst of testing today. To remove a lot of the repetitive code from this client’s HTTP API, Wolverine is going to improve the [Entity] attribute mechanics to easily customize “on missing” handling with something like this simple example from the tests:

    // Should 400 w/ ProblemDetails on missing
    [WolverineGet("/required/todo400/{id}")]
    public static Todo2 Get2([Entity(OnMissing = OnMissing.ProblemDetailsWith400)] Todo2 todo) 
        => todo;

With Wolverine message handlers or HTTP endpoints, the [Entity] attribute is a little bit of declarative data access that just directs Wolverine to generate some code around your method to load data for that parameter, based on its type, from whatever your attached data access tooling is for that application; Marten (of course), EF Core, and RavenDb are currently supported. In its current form, if Marten/EF Core/RavenDb cannot find a Todo2 entity in the database with the identity from the route argument “id”, Wolverine will just set the HTTP status code to 404 and exit.

And while I’d argue that’s a perfectly fine default behavior, a recent client instead wants to write out a ProblemDetails response describing what data referenced in the request was unavailable and return a 400 status code. They’re handling that with Wolverine’s Railway Programming support just fine, but I think that’s causing my client more repetitive code than I personally prefer, and Wolverine is based on the philosophy that repetitive code should be minimized as much as possible. Hence the enhancement work hinted at above, with a new OnMissing property that lets you specify exactly how an HTTP endpoint should handle the case where a requested entity is missing.

So let’s finally introduce Alba with this test harness using xUnit:

public class reacting_to_entity_attributes : IAsyncLifetime
{
    private readonly ITestOutputHelper _output;
    private IAlbaHost theHost;

    public reacting_to_entity_attributes(ITestOutputHelper output)
    {
        _output = output;
    }

    public async Task InitializeAsync()
    {
        // This probably isn't your typical Alba usage, but
        // I'm spinning up a little AspNetCore application
        // for endpoint types in the current testing assembly
        var builder = WebApplication.CreateBuilder([]);

        // Adding Marten as the target persistence provider,
        // but the attribute does work w/ EF Core too
        builder.Services.AddMarten(opts =>
        {
            // Establish the connection string to your Marten database
            opts.Connection(Servers.PostgresConnectionString);
            opts.DatabaseSchemaName = "onmissing";
        }).IntegrateWithWolverine().UseLightweightSessions();

        builder.Host.UseWolverine(opts => opts.Discovery.IncludeAssembly(GetType().Assembly));

        builder.Services.AddWolverineHttp();

        // This is using Alba, which uses WebApplicationFactory under the covers
        theHost = await AlbaHost.For(builder, app =>
        {
            app.MapWolverineEndpoints();
        });
    }

    async Task IAsyncLifetime.DisposeAsync()
    {
        if (theHost != null)
        {
            await theHost.StopAsync();
        }
    }

    // Other tests...

    [Fact]
    public async Task problem_details_400_on_missing()
    {
        var results = await theHost.Scenario(x =>
        {
            x.Get.Url("/required/todo400/nonexistent");

            x.StatusCodeShouldBe(400);
            x.ContentTypeShouldBe("application/problem+json");
        });

        var details = results.ReadAsJson<ProblemDetails>();
        details.Detail.ShouldBe("Unknown Todo2 with identity nonexistent");
    }
    
}

Just a few things to call out about the test above:

  1. Alba is using WebApplicationFactory and TestServer from AspNetCore under the covers to bootstrap an AspNetCore IHost without having to use Kestrel
  2. The Alba Scenario() method is running an HTTP request all the way through the application in process
  3. Alba has declarative helpers to assert on the expected HTTP status code and content-type headers in the response, and I used those above
  4. The ReadAsJson<T>() helper deserializes the response body into a .NET type using whatever the JSON serialization configuration is within our application. By no means should you minimize that, because mismatched JSON serialization settings between application and test harness code are a humongous potential source of false test results for the unwary!

For the record, that test is passing in my local branch right now after a couple iterations. Alba just happened to make the functionality pretty easy to test through both the declarative assertions and the JSON serialization helpers.

How I Prioritize OSS Bugs

I just got back from a week long vacation with the family, and I was as rested and relaxed as I can ever be, at least before I picked up a head cold on the last day. Today though, it’s time to start catching up on OSS bug reports that have come in in the past 10 days or so. I thought it might be fun to dash off my personal prioritization for the bugs that come into the Marten, Wolverine, or related projects.

Roughly, here’s an unscientific ranking of the factors that get bugs fixed sooner rather than later:

  1. Any issue that is blocking or harming a JasperFx Software client’s system
  2. Issues that already have user supplied pull requests to fix the issue. You never want to leave a pull request open too long if someone has taken the time to contribute. It still happens for a variety of reasons, but you do still try.
  3. Bugs that I find embarrassing
  4. Any problem that would likely give a new user a poor first impression of the tools
  5. Problems that I think would potentially impact many users
  6. Any other issue for a JasperFx client
  7. Issues reported by significant contributors, and I’m pretty loose with what I think of as “significant”
  8. Easy fixes just to help keep the open GitHub issue counts as low as possible because that’s something I do care about
  9. Open issues in whatever project I happen to be preparing a release for while that project has my attention
  10. Any issue that will require breaking API changes in the tool, but this subset will sometimes be prioritized to the top whenever we’re making a full point release
  11. Any issue that is going to require significant changes to the internals, but this is somewhat similar to the previous line
  12. Issues that aren’t likely to impact many users
  13. Issues reported by people being kind of a jerk that aren’t likely to impact many users

For older versions of any of the tools, like Marten 7.*, the list is much shorter:

  • For a JasperFx Software client who cannot upgrade soon, we’ll of course make fixes to the older branch and forward that fix to the current version
  • For everybody else, eh, probably not unless it’s really bad or they’ve just asked very nicely

Critter Stack Futures for the rest of 2025

It’s the halfway point of 2025 somehow, and we’ve now gotten past the big Marten 8.0 and Wolverine 4.0 releases. Right before I go on vacation next week, I thought it would be a good time to jot down some thoughts about where the Critter Stack might go for the rest of 2025 and probably into 2026.

Critter Watch

The big ticket item is our ongoing work on “Critter Watch”, which will be a commercial management and observability add-on for Wolverine, Marten, and any future Critter tools. The top line pitch for Critter Watch is that it will help you know what your applications are doing, how they interact with each other, and whether they’re healthy in production, and it will provide features to help heal the inevitable production problems when they happen.

The general idea is to have a standalone application deployed that acts as a management console for one or more Wolverine applications in our users’ environments.

Upfront for the Critter Watch MVP (and based on requests from a client), we’re focused on:

  • Visualizing the systems being monitored, their Wolverine and Marten configuration, and the capabilities of the systems. We’re currently researching AsyncAPI publishing and visualization as well. The whole point of this is to help teams understand how the messages in their systems are handled, published, and routed.
  • Event Sourcing management, but this is mostly about managing the execution of asynchronous projections and subscriptions at runtime and being able to understand the ongoing performance or any ongoing problems
  • Dead letter queue management for Wolverine

I have less clarity over development time tooling, but we’re at least interested in having some of Critter Watch usable as an embedded tool during development.

After years of talking about this and quite a bit of envisioning, development started in earnest over the past 6 weeks with a stretch goal of having a pilot usage by the end of July for a JasperFx Software client.

I do not have any hard pricing numbers yet, but we are very interested in talking to anyone who would be interested in Critter Watch.

Concurrency, Concurrency, Concurrency!

I think that systems built with Event Sourcing are a little more sensitive to concurrent data reads and writes, or maybe it’s just that those problems are there all the time but more readily observable with Event Sourcing and Event Driven Architectures. In my work with JasperFx Software clients, concurrency is probably the most common subject of questions.

Mostly today you deal with this either by building in selective retry capabilities based on version conflict detection, or by getting fancier with queueing and message routing to eliminate the concurrent access as much as possible. Or both, of course.

A great way to side step the concurrent access while not sacrificing throughput through parallelization is to use Wolverine’s support for Azure Service Bus Session Identifiers and FIFO Queues.
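For a bit of context, configuring that with Wolverine looks roughly like this sketch (hedged from memory; the queue name, connectionString, bus, and incidentId variables are stand-ins, so check the Wolverine Azure Service Bus docs for the exact API):

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseAzureServiceBus(connectionString);

        // Requiring sessions makes this a FIFO queue where messages with the
        // same session id are processed strictly in order
        opts.ListenToAzureServiceBusQueue("incidents").RequireSessions();
    }).StartAsync();

// On the sending side, the session identifier is set through GroupId
await bus.SendAsync(new ApproveIncident(incidentId),
    new DeliveryOptions { GroupId = incidentId.ToString() });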

Which is great, but what if you’re not using Azure Service Bus? What if you’re only using local queueing? And wouldn’t it be nice if the existing Azure Service Bus FIFO support was a little less cumbersome to use in your code?

I don’t have a ton of detail, but there’s a range of internal proposals to create some new recipes for Wolverine usage to enable teams to more easily “shard” logical work between queues and within the local workers listening to queues to improve Wolverine’s handling of concurrent access without sacrificing parallel work and throughput or requiring repetitive code. Some of this is being done in collaboration with JasperFx clients.

Improving Wolverine’s Declarative Data Access

For lack of a better description, Wolverine has a feature set I’m henceforth calling “declarative data access” with the [Entity] attribute that triggers code generation within message handlers or HTTP endpoints to load requested data from Marten, EF Core, or RavenDb. And of course, there’s also what we call the “aggregate handler workflow” recipe for using the Decider pattern with Wolverine and Marten that I think is the simplest way to express business logic when using Event Sourcing in the .NET ecosystem.

To take these productivity features even farther, I think we’ll add a few things:

  1. More control over what action to take if an entity is missing. Today, the HTTP endpoints will just return a 404 status code if required entities can’t be found. In future versions, we’ll let you customize log or ProblemDetails messages and have more control over how Wolverine generates the “if missing” path
  2. At least for Marten, opting into Marten’s batch querying support if you are using more than one of any combination of the existing [Aggregate], [ReadAggregate], [Entity], or [Document] attributes to load data within a single HTTP endpoint or message handler, as a way of improving performance by reducing network round trips to the database. And don’t sneeze at that: chattiness is a common performance killer in enterprise applications, especially when the code is unnecessarily complicated by typical usages of Clean or Onion Architectural approaches.

If you follow Event Sourcing related topics online, you’ll hear quite a bit of buzz from some of the commercial tools about “Dynamic Consistency Boundaries” (DCB). We get asked about this with Marten occasionally, but the Marten core team’s position is that Marten doesn’t require this feature because you can already do “read” and “write” operations across multiple event streams with transactional integrity as is.

What the batch querying I just described will do for Marten, though, is make the full “Critter Stack” usage more performant when you need to work with more than one event stream at a time, with all the transactional support and strong consistency that Marten (really PostgreSQL) already provides.

For Marten users, this is essentially making Marten’s FetchForWriting() API able to enroll in batch querying for more efficient data querying when working across streams. That work is actually well underway.

But if you prefer to use the fancier and more novel DCB approaches that aren’t even officially released yet, feel free to pay out some big bucks to use one of the commercial tools.

Smaller, But Still Important Work!

  • Partially for Critter Watch, Wolverine should support connecting to multiple brokers in a single application for each transport type. Some of this is already done, with Kafka being next up, but we need to add this to every transport
  • Improved interoperability support for Wolverine talking to non-Wolverine applications. There’s an existing pull request that goes quite a ways for this, but this might end up being more a documentation effort than anything else
  • More options in Wolverine with Marten or just Marten for streaming Marten data as JSON directly to HTTP. We have some support already of course, but there are more opportunities for expanding that
  • Exposing an MCP server off of Marten event data, but I have very little detail about what that would be. I would be very interested in partnering with a company who wanted to do this, and a JasperFx client might be working with us later this year on AI with Marten
  • Improving throughput in Marten’s event projections and subscriptions. We’ve done a lot over the past couple years, but there are still some other ideas in the backlog we haven’t played with yet
  • Expanding Wolverine support for more database engines, with CosmosDb the most likely contender this year. This might be contingent upon client work of course.

What about the SQL Server backed Event Store?

Yeah, I don’t know. We did a ton of work in Marten 8 to pull out what will be common code in a way that can be reused by the SQL Server backed event store. I do not know when we might work on this, as Critter Watch will take priority for now.

And finally….

And on that note I’m essentially on vacation for a week and I’ll catch up with folks in late July.

OSS Project Lessons Learned with David Giard

I got to talk to David Giard on his podcast last week about some of the lessons I’ve learned the hard way across several large OSS projects. For a little background, I got to follow through on a 15 to 20 year dream of mine to found a company called JasperFx Software LLC to build services and product offerings around the “Critter Stack” family of open source tools (Marten and Wolverine) in the .NET ecosystem. The two main tools are doing well right now, with Marten being the most used Event Sourcing tool for .NET projects and Wolverine gaining traction as an alternative messaging tool and HTTP endpoint framework with its focus on reduced code ceremony and testable code.

The relative success of these tools came after I was the technical leader of a very large, ambitious project called FubuMVC (and FubuTransportation) that fizzled out after I probably sank 2-3 man years of effort into it over half a decade. As David helpfully pointed out, some of the current success of Marten and Wolverine was absolutely predicated on lessons learned, both positive (mostly technical) and negative (community engagement, documentation, samples), from the earlier FubuMVC experience.

Without further ado, here’s David & me:

Wire Up XUnit Logging for Crazy Integration Testing

I worked a little bit this weekend on a small new feature in Wolverine that we’ll need as part of our forthcoming “CritterWatch” tooling. What I was doing isn’t that interesting, but the killer problem was that it required me to write an integration test that would:

  1. Spin up multiple IHost instances for the same testing application
  2. Verify that Wolverine was correctly assigning running tasks to only the leader node
  3. Stop the leader node, see leadership and that same task shift to the newly elected leader
  4. Make sure that task was really only ever running on the single leader node

Needless to say, it’s a long running test, and it turned out to be non-trivial to get both the test harness and the necessary code exactly right. Honestly, I didn’t get this done until I stopped and integrated application logging directly into the xUnit.Net test harness (plus a Wolverine specific event observer) so I could see what the heck was going on inside all of these application instances.

So without further ado, here’s the recipe we’re using (and copy/pasting around) in Wolverine to do that. First off, we need an ILogger and ILoggerProvider implementation that will pipe logging to xUnit’s ITestOutputHelper like so:

public class XUnitLogger : ILogger
{
    private readonly string _categoryName;

    private readonly List<string> _ignoredStrings = new()
    {
        "Declared",
        "Successfully processed message"
    };

    private readonly ITestOutputHelper _testOutputHelper;

    public XUnitLogger(ITestOutputHelper testOutputHelper, string categoryName)
    {
        _testOutputHelper = testOutputHelper;
        _categoryName = categoryName;
    }

    public bool IsEnabled(LogLevel logLevel)
    {
        return logLevel != LogLevel.None;
    }

    public IDisposable BeginScope<TState>(TState state)
    {
        return new Disposable();
    }

    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception,
        Func<TState, Exception, string> formatter)
    {
        if (exception is DivideByZeroException)
        {
            return;
        }

        if (exception is BadImageFormatException)
        {
            return;
        }

        // Obviously this is crude and you would do something different here...
        if (_categoryName == "Wolverine.Transports.Sending.BufferedSendingAgent" &&
            logLevel == LogLevel.Information) return;
        if (_categoryName == "Wolverine.Runtime.WolverineRuntime" &&
            logLevel == LogLevel.Information) return;
        if (_categoryName == "Microsoft.Hosting.Lifetime" &&
            logLevel == LogLevel.Information) return;
        if (_categoryName == "Wolverine.Transports.ListeningAgent" &&
            logLevel == LogLevel.Information) return;
        if (_categoryName == "JasperFx.Resources.ResourceSetupHostService" &&
            logLevel == LogLevel.Information) return;
        if (_categoryName == "Wolverine.Configuration.HandlerDiscovery" &&
            logLevel == LogLevel.Information) return;

        var text = formatter(state, exception);
        if (_ignoredStrings.Any(x => text.Contains(x))) return;

        _testOutputHelper.WriteLine($"{_categoryName}/{logLevel}: {text}");

        if (exception != null)
        {
            _testOutputHelper.WriteLine(exception.ToString());
        }
    }

    public class Disposable : IDisposable
    {
        public void Dispose()
        {
        }
    }
}

public class OutputLoggerProvider : ILoggerProvider
{
    private readonly ITestOutputHelper _output;

    public OutputLoggerProvider(ITestOutputHelper output)
    {
        _output = output;
    }

    public void Dispose()
    {
    }

    public ILogger CreateLogger(string categoryName)
    {
        return new XUnitLogger(_output, categoryName);
    }
}

And register it inside the test harness like so:

public class leader_pinned_listener : IAsyncDisposable
{
    private readonly ITestOutputHelper _output;

    public leader_pinned_listener(ITestOutputHelper output)
    {
        _output = output;
    }

    private async Task<IHost> startHost()
    {
        await dropSchemaAsync();

        var host = await Host.CreateDefaultBuilder()
            .UseWolverine(opts =>
            {
                // This is where I'm adding in the custom ILoggerProvider
                opts.Services.AddSingleton<ILoggerProvider>(new OutputLoggerProvider(_output));

                // More configuration that isn't germane...
            })
            .StartAsync();

        return host;
    }

    // The rest of the test class...
}

Hey, it’s crude, but the point here was that this kind of gnarly integration testing, and especially with a lot of asynchronous behavior, is a lot easier to get through when you have more insight into how the code you’re testing is actually behaving.

Low Ceremony Railway Programming with Wolverine

Railway Programming is an idea that came out of the F# community as a way to handle “sad path” cases without resorting to throwing .NET Exceptions as a means of flow control. It works by chaining together functions with a standardized response type in such a way that it’s relatively easy to abort a workflow as preliminary steps are found to be invalid, while still passing the result of the preceding function as the input into the next function.
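As a point of reference, here’s a minimal sketch of the classic Railway style with a custom Result type, which is the flavor of the pattern that Wolverine is deliberately trying to avoid (the Result type here is purely hypothetical):

public record Result<T>(T? Value, string? Error)
{
    public bool IsSuccess => Error is null;

    // "Bind": only run the next step if the previous one succeeded,
    // otherwise ride the failure track all the way out
    public Result<TNext> Then<TNext>(Func<T, Result<TNext>> next)
        => IsSuccess ? next(Value!) : new Result<TNext>(default, Error);
}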

Wolverine has some direct support for a quasi-Railway Programming approach by moving validation or data loading steps prior to the main message handler or HTTP endpoint logic. Let’s jump into a quick sample that works with either message handlers or HTTP endpoints using the built in HandlerContinuation enum:

public static class ShipOrderHandler
{
    // This would be called first
    public static async Task<(HandlerContinuation, Order?, Customer?)> LoadAsync(ShipOrder command, IDocumentSession session)
    {
        var order = await session.LoadAsync<Order>(command.OrderId);
        if (order == null)
        {
            return (HandlerContinuation.Stop, null, null);
        }

        var customer = await session.LoadAsync<Customer>(command.CustomerId);

        return (HandlerContinuation.Continue, order, customer);
    }

    // The main method becomes the "happy path", which also helps simplify it
    public static IEnumerable<object> Handle(ShipOrder command, Order order, Customer customer)
    {
        // use the command data, plus the related Order & Customer data to
        // "decide" what action to take next

        yield return new MailOvernight(order.Id);
    }
}

By naming convention (but you can override the method naming with attributes as you see fit), Wolverine will try to generate code that calls methods named Before/Validate/Load(Async) before the main message handler method or the HTTP endpoint method. You can use this compound handler approach to do set up work like loading data required by business logic in the main method, or, as in this case, as validation logic that can stop further processing based on failed validation, data requirements, or system state. Some Wolverine users like using these methods to keep the main methods relatively simple and focused on the “happy path” and business logic in pure functions that are easier to unit test in isolation.

By returning a HandlerContinuation value, either by itself or as part of a tuple, from a Before, Validate, or LoadAsync method, you can direct Wolverine to stop all other processing.

You have more specialized ways of doing this in HTTP endpoints by using the ProblemDetails specification, like this example that uses a Validate() method to potentially stop processing with a descriptive 400 response:

public record CategoriseIncident(
    IncidentCategory Category,
    Guid CategorisedBy,
    int Version
);

public static class CategoriseIncidentEndpoint
{
    // This is Wolverine's form of "Railway Programming"
    // Wolverine will execute this before the main endpoint,
    // and stop all processing if the ProblemDetails is *not*
    // "NoProblems"
    public static ProblemDetails Validate(Incident incident)
    {
        return incident.Status == IncidentStatus.Closed 
            ? new ProblemDetails { Detail = "Incident is already closed" } 
            
            // All good, keep going!
            : WolverineContinue.NoProblems;
    }
    
    // This tells Wolverine that the first "return value" is NOT the response
    // body
    [EmptyResponse]
    [WolverinePost("/api/incidents/{incidentId:guid}/category")]
    public static IncidentCategorised Post(
        // the actual command
        CategoriseIncident command, 
        
        // Wolverine is generating code to look up the Incident aggregate
        // data for the event stream with this id
        [Aggregate("incidentId")] Incident incident)
    {
        // This is a simple case where we're just appending a single event to
        // the stream.
        return new IncidentCategorised(incident.Id, command.Category, command.CategorisedBy);
    }
}

The value WolverineContinue.NoProblems tells Wolverine that everything is good, full speed ahead. Anything else will write the ProblemDetails value out to the response, return a 400 status code (or whatever you decide to use), and stop processing. Returning a ProblemDetails object hopefully makes these filter methods easy to unit test themselves.
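To illustrate that last point, a unit test for the Validate() method above could be as simple as this sketch (assuming Incident has a settable Status property, and using Shouldly for the assertions):

[Fact]
public void cannot_categorise_a_closed_incident()
{
    var incident = new Incident { Status = IncidentStatus.Closed };

    var problems = CategoriseIncidentEndpoint.Validate(incident);

    // Anything other than WolverineContinue.NoProblems stops processing
    problems.ShouldNotBe(WolverineContinue.NoProblems);
    problems.Detail.ShouldBe("Incident is already closed");
}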

You can also use the AspNetCore IResult as another formally supported “result” type in these filter methods, as shown below:

public static class ExamineFirstHandler
{
    public static bool DidContinue { get; set; }
    
    public static IResult Before([Entity] Todo2 todo)
    {
        return todo != null ? WolverineContinue.Result() : Results.Empty;
    }

    [WolverinePost("/api/todo/examinefirst")]
    public static void Handle(ExamineFirst command) => DidContinue = true;
}

In this case, the “special” value WolverineContinue.Result() tells Wolverine to keep going, otherwise, Wolverine will execute the IResult returned from one of these filter methods and stop all other processing for the HTTP request.

It’s maybe a shameful approach for folks who are more in line with a Functional Programming philosophy, but you could also use a signature like:

[WolverineBefore]
public static UnauthorizedHttpResult? Authorize(SomeCommand command, ClaimsPrincipal user)

In the case above, Wolverine will do nothing if the return value is null, but will execute the UnauthorizedHttpResult response if there is one and stop any further processing. There is *some* minor value in expressing the actual IResult type above because it can be used to help generate OpenAPI metadata.
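Filling in a purely hypothetical body for that signature (SomeCommand and the authorization rule are stand-ins):

[WolverineBefore]
public static UnauthorizedHttpResult? Authorize(SomeCommand command, ClaimsPrincipal user)
{
    // Returning null tells Wolverine to continue processing,
    // while returning the result short-circuits the request
    return user.Identity?.IsAuthenticated == true
        ? null
        : TypedResults.Unauthorized();
}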

Lastly, let’s think about the very common need to write an HTTP endpoint where you want to return a 404 status code if the requested data doesn’t exist. In many cases the API user is supplying the identity value for an entity, and your HTTP endpoint will first query for that data, and if it doesn’t exist, abort the processing with the 404 status code. Wolverine has some built in help for this tedious task through its unique persistence helpers as shown in this sample HTTP endpoint below:

    [WolverineGet("/orders/{id}")]
    public static Order GetOrder([Entity] Order order) => order;

Note the presence of the [Entity] attribute on the Order argument to this HTTP endpoint. That’s telling Wolverine that the data should be loaded using the “id” route argument as the Order key from whatever persistence mechanism in your application deals with the Order entity, which could be Marten of course, an EF Core DbContext that has a mapping for Order, or Wolverine’s RavenDb integration. Unless we purposely mark [Entity(Required = false)], Wolverine.HTTP will return a 404 status code if the Order entity does not exist. The simplistic sample from Wolverine’s test suite above doesn’t do any kind of mapping from the raw Order to a view model, but the mechanics of the [Entity] loading would work equally well if you mapped the raw Order to some kind of OrderViewModel.
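And just to show the opt out mentioned above, a hypothetical endpoint that tolerates a missing entity might look like this:

    [WolverineGet("/orders/maybe/{id}")]
    public static string MaybeGetOrder([Entity(Required = false)] Order? order)
        // With Required = false, Wolverine hands us a null instead of a 404
        => order == null ? "No such order" : $"Found order {order.Id}";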

Last Thoughts

I’m pushing Wolverine users and JasperFx clients to utilize Wolverine’s quasi Railway Programming capabilities as guard clauses to better separate validation or error condition handling into easily spotted, atomic operations, while reducing the core HTTP request or message handler to a “happy path” operation. That’s especially true in HTTP services, where the ProblemDetails specification and its integration with Wolverine fits this pattern well, and where I’d expect many HTTP client tools to already know how to work with problem details responses.

There have been a few attempts to adapt Railway Programming to C# that I’m aware of, inevitably using some kind of custom Result type that denotes success or failure along with the actual result for the next function. I’ve seen some folks and OSS tools try to chain functions together with nested lambdas within a fluent interface. I’m not a fan of any of this because I think the custom Result types just add code noise and extra mechanical work, and the fluent interface approach can easily be nasty to debug while detracting from readability through that extra noise. But anyway, read a lot more about this in Andrew Lock’s Series: Working with the result pattern and make up your own mind.

I’ve also seen an approach where folks used MediatR handlers for each individual step in the “railway” where each handler had to return a custom Result type with the inputs for the next handler in the series. I beg you, please don’t do this in your own system because that leads to way too much complexity, code that’s much harder to reason about because of the extra hoops and indirection, and potentially poor system performance because again, you can’t see what the code is doing and you can easily end up making unnecessarily duplicate database round trips or just being way too “chatty” to the database. And no, replacing MediatR handlers with Wolverine handlers is not going to help because the pattern was the problem and not MediatR itself.

As always, the Wolverine philosophy is that the path to long term success in enterprise-y software systems is by relentlessly eliminating code ceremony so that developers can better reason about how the system’s logic and behavior works. To a large degree, Wolverine is a reaction to the very high ceremony Clean/Onion Architecture/iDesign architectural approaches of the past 15-20 years and how hard those systems can be to deal with over time.

And as happens with just about any halfway good thing in programming, some folks overused the Railway Programming idea and there’s a little bit of pushback or backlash to the technique. I can’t find the quote to give it the real attribution, but something I’ve heard Martin Fowler say is that “we don’t know how useful an idea really can be until we push it too far, then pull back a little bit.”

Making Event Sourcing with Marten Go Faster

You’re about to start a new system with Event Sourcing using Marten, and you’re expecting your system to be hugely successful, such that it’s going to handle a huge amount of data. You’re also starting with pretty ambitious non-functional requirements for the system to be highly performant, with all the screens or exposed APIs staying snappy.

Basically, what you want to do is go as fast as Marten and PostgreSQL will allow. Fortunately, Marten has a series of switches and dials that can be configured to squeeze out more performance, but which, for a variety of historical reasons and possible drawbacks, are not the defaults for a barebones Marten configuration as shown below:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));
});

Cut me some slack in my car choice for the analogy here. I’m not only an American, but I’m an American from a rural area who grew up dreaming about having my own Mustang or Camaro because that’s as far out as I could possibly imagine back then.

At this point, what we have is the equivalent of a street legal passenger car, maybe an off the shelf Mustang, which probably goes fast enough for everyday usage for the vast majority of us most of the time. But we really want a fully tricked out Mustang GTD that’s absurdly optimized to just flat out go fast.

Let’s start trimming weight off our street legal Marten setup to go faster with…

Opt into Lightweight Sessions by Default

Starting from a new system so we don’t care about breaking existing code by changing behavior, let’s opt for lightweight sessions by default:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));
})
    
// Jettison some "Identity Map" weight by going lighter weight    
.UseLightweightSessions();

By default, the instances of IDocumentSession you get out of an IoC container would utilize the Identity Map feature to track loaded entities by id so that if you happened to try to load the same entity from the same session, you would get the exact same object. As I’m sure you can imagine, that means that every entity fetched by a session is stuffed into a dictionary internally (Marten uses the highly performant ImTools ImHashMap everywhere, but still), and the session also has to bounce through the dictionary before loading data as well. It’s just a little bit of overhead we can omit by opting for “Lightweight Sessions” if we don’t need that behavior by default.
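Here’s a minimal sketch of the behavioral difference, using Marten’s document store directly (the Incident type is just a stand-in):

// With an identity map session, repeated loads return the same object
await using var tracked = store.IdentitySession();
var one = await tracked.LoadAsync<Incident>(id);
var two = await tracked.LoadAsync<Incident>(id);
// one and two are the exact same object instance

// A lightweight session skips that bookkeeping entirely
await using var lightweight = store.LightweightSession();
var three = await lightweight.LoadAsync<Incident>(id);
var four = await lightweight.LoadAsync<Incident>(id);
// three and four are separate deserialized copies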

We’ve always been afraid to change the default behavior here to the more efficient approach because it can absolutely lead to breaking existing code that depends on the Identity Map behavior. On the flip side, I think you should not need Identity Map mechanics if you can keep the call stacks within your code short enough that you can actually “see” where you might be trying to load the same data twice or more in the same parent operation.

On to the next thing…

Make Writes Faster with Quick Append

Next, since we again don’t have any existing code that can be broken here, let’s opt for “Quick Append” writes like so:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));

    // Make event writing faster, like 2X faster in our testing
    opts.Events.AppendMode = EventAppendMode.Quick;
})
    
// Jettison some "Identity Map" weight by going lighter weight    
.UseLightweightSessions();

This will help the system be able to append new events much faster, but at the cost of not being able to use some event metadata like event versions, sequence, or timestamp information within “Inline” projections.

Again, even though this option has been clocked as being much faster, we have not wanted to make this the default because it could break existing systems for people who depend on having the rich metadata during the Inline application of projections, which forces Marten to do a kind of two step process to append events. This “Quick Append” option also helps reduce concurrent access problems when writing to streams, and generally makes the “Async Daemon” subsystem that processes asynchronous projections and subscriptions run much smoother.

We’re not out of tricks yet by any means, so let’s go on…

Use the Identity Map for Inline Aggregates

Wait, I thought you told me not to cross the streams! Yeah, about the Identity Map thing, there’s one exception where we actually do want that behavior: within CQRS command handlers like this one using Wolverine and its “Aggregate Handler Workflow” integration with Marten:

    // This tells Wolverine that the first "return value" is NOT the response
    // body
    [EmptyResponse]
    [WolverinePost("/api/incidents/{incidentId:guid}/category")]
    public static IncidentCategorised Post(
        // the actual command
        CategoriseIncident command, 
        
        // Wolverine is generating code to look up the Incident aggregate
        // data for the event stream with this id
        [Aggregate("incidentId")] Incident incident)
    {
        // This is a simple case where we're just appending a single event to
        // the stream.
        return new IncidentCategorised(incident.Id, command.Category, command.CategorisedBy);
    }

In the case above, the Incident model is a projected document that’s first used by the command handler to “decide” what new events to emit. If we’re updating the Incident model with an Inline projection that updates the Incident document in the database at the same time it appends the events, then it’s a performance advantage to “just” reuse the original Incident model we loaded initially, fold in the new events to compute the new state, and persist the result right then and there. We can opt into this optimization, even for the lightweight sessions we wanted to use earlier, by adding one more UseIdentityMapForAggregates flag:

builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));

    // Make event writing faster, like 2X faster in our testing
    opts.Events.AppendMode = EventAppendMode.Quick;

    // This can cut down on the number of database round trips
    // Marten has to do during CQRS command handler execution
    opts.Events.UseIdentityMapForAggregates = true;
})
    
// Jettison some "Identity Map" weight by going lighter weight    
.UseLightweightSessions();

Note, this optimization can easily break code for folks who use some sort of stateful “Aggregate Root” approach where the projected aggregate object might be mutated during the course of executing the command. As that has traditionally been a popular approach in Event Sourcing circles, we can’t make this a default option. If you instead make projected aggregates like Incident immutable, or treat them as dumb data inputs to your command handlers with a more Functional Programming “Decider” function approach, you can safely use this performance optimization.

For what it’s worth, I strongly prefer and recommend the FP “Decider” approach to JasperFx Software clients, and I think folks using the older “Aggregate Root” approach tend to hit more runtime bugs.
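As a minimal sketch of that “Decider” style, assuming hypothetical CloseIncident command and IncidentClosed event types, the handler treats the projected Incident strictly as read-only input and returns new events rather than mutating anything:

    public static IncidentClosed Handle(CloseIncident command, Incident incident)
    {
        // "Decide" from the current state, then return the new fact as an
        // event. The Incident itself is never mutated here
        return new IncidentClosed(incident.Id, command.ClosedBy);
    }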

Moving on, let’s keep our database smaller…

Event Stream Archiving

By and large, you can improve system performance in almost any situation by keeping your database from growing too large, which means archiving or retiring obsolete information. Marten has first class support for “Archiving Event Streams”, where you move event streams that only represent historical information, and are no longer active, into an archived state.
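Marking a stream as archived is a one line operation within a session. Here’s a small sketch assuming an existing session and stream id:

// Move this stream into the archived state
session.Events.ArchiveStream(incidentId);
await session.SaveChangesAsync();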

Moreover, we can divide our underlying PostgreSQL storage for events into “hot” and “cold” storage by utilizing PostgreSQL’s table partitioning support like this:

builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));

    // Make event writing faster, like 2X faster in our testing
    opts.Events.AppendMode = EventAppendMode.Quick;

    // This can cut down on the number of database round trips
    // Marten has to do during CQRS command handler execution
    opts.Events.UseIdentityMapForAggregates = true;

    // Let's leverage PostgreSQL table partitioning
    // to our advantage
    opts.Events.UseArchivedStreamPartitioning = true;
})
    
// Jettison some "Identity Map" weight by going lighter weight    
.UseLightweightSessions();

If you’re aggressive about marking event streams as archived, PostgreSQL table partitioning can move the archived event streams into a different table partition than the active event data. This keeps the “active” event table storage relatively stable in size, and most operations will execute against that smaller table partition, while you can still access the archived data if you explicitly opt into including it.
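By default, Marten queries only touch the active events, but if you do need to reach back into the archived data, you can opt in explicitly. As a sketch, again assuming an existing session:

// Explicitly query the archived events, maybe for auditing or reporting
var archived = await session.Events.QueryAllRawEvents()
    .Where(x => x.IsArchived)
    .ToListAsync();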

We added this feature in a 7.* point release, so it had to be opt in, and I think I was too hesitant to make it the default in 8.0, so it’s still “opt in”.

Stream Compacting

Beyond archiving entire event streams, maybe you just want to “compact” a long event stream so that you retain the current state while further reducing the size of your active database storage. To that end, Marten 8.0 added Stream Compacting.
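I’ll point you to the Marten documentation for the full mechanics, but as a rough sketch of how I understand the new API (the exact signature may differ, and the version number below is just an example):

// Collapse the stream's history at and below the given version into a
// single compacted starting point, shrinking the active event storage
await session.Events.CompactStreamAsync<Incident>(incidentId, x =>
{
    x.Version = 1000;
});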

Distributing Asynchronous Projections

So far I’ve mostly been talking about projections running Inline, where the projected data is updated at the same time the events are captured. That’s sometimes applicable or even desirable, but other times you’ll want to optimize the “write” operations by moving the updates of projected data to an Async projection running in the background. Now let’s say that we have quite a few asynchronous projections and several subscriptions as well. In early versions of Marten, everything had to run in a “Hot/Cold” mode where every known projection or subscription ran on one single “leader” node. So even if you were running your application across a dozen or more nodes, only one of them could be executing all of the asynchronous projections and subscriptions.
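For reference, registering an Async projection and hosting the background daemon looks something like this sketch, with IncidentProjection being a hypothetical projection class (DaemonMode.HotCold opts into that leader-based mode):

builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));

    // Run this projection in the background instead of inline with
    // event capture
    opts.Projections.Add(new IncidentProjection(), ProjectionLifecycle.Async);
})

// Host the background projection processing in this application
.AddAsyncDaemon(DaemonMode.HotCold);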

That single active node is obviously a potential bottleneck, so Marten 7.0 by itself introduced some ability to spread projections and subscriptions over multiple nodes. If we bring Wolverine into the mix, though, we can do quite a bit better than that by letting Wolverine distribute the asynchronous Marten work across the entire application cluster via the UseWolverineManagedEventSubscriptionDistribution option in the WolverineFx.Marten NuGet package:

builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));

    // Make event writing faster, like 2X faster in our testing
    opts.Events.AppendMode = EventAppendMode.Quick;

    // This can cut down on the number of database round trips
    // Marten has to do during CQRS command handler execution
    opts.Events.UseIdentityMapForAggregates = true;

    // Let's leverage PostgreSQL table partitioning
    // to our advantage
    opts.Events.UseArchivedStreamPartitioning = true;
})
    
// Jettison some "Identity Map" weight by going lighter weight    
.UseLightweightSessions()
.IntegrateWithWolverine(opts =>
{
    opts.UseWolverineManagedEventSubscriptionDistribution = true;
});

Is there anything else for the future?

It never ends, and yes, there are still quite a few ideas in our product backlog to potentially improve performance and scalability of Marten’s Event Sourcing. Offhand, that includes looking at alternative, higher performance serializers and more options to parallelize asynchronous projections to squeeze out more throughput by sharing some data access across projections.

Summary

There are quite a few “opt in” features in Marten that will help your system perform better, but these features are “opt in” precisely because they can be harmful if you’re not building around the assumptions they make about how your code works. The good news is that you’ll be better able to utilize these features if you follow the Critter Stack’s recommended practices: strive for shorter call stacks (i.e., fewer jumps between methods and classes when your code handles a system input like a message or HTTP request) so your code is easier to reason about anyway, and avoid mutating projected aggregate data outside of Marten.

Marten 8.0, Wolverine 4.0, and even Lamar 15.0 are out!

It’s a pretty big “Critter Stack” community release day today, as:

  1. Marten has its 8.0 release
  2. Wolverine got a 4.0 release
  3. Lamar, the spiritual successor to StructureMap, had a corresponding 15.0 release
  4. And underneath those tools, the new JasperFx & JasperFx.Events libraries went 1.0, and the supporting Weasel library that provides some low level functionality went 8.0

Before getting into the highlights, let me start by thanking the Critter Stack Core team for all their support, their contributions to both the code and documentation, and for being a constant sounding board and source of ideas and advice for me.

Next, I’d like to thank our Critter Stack community for all the interest and the continuous help we get through suggestions and pull requests that improve the tools, and especially the folks who take the time to create actionable bug reports, because that’s half the battle of getting problems fixed. And while there are plenty of days when I wish there wasn’t a veritable pack of raptors prowling around the projects probing for weaknesses, I cannot overstate how important user and community feedback is to an OSS project.

Alright, on to some highlights.

The big changes are that we consolidated several smaller shared libraries into one bigger shared JasperFx library, and also combined some smaller libraries like Marten.CommandLine, Weasel.CommandLine, and Lamar.Diagnostics into Marten, Weasel, and Lamar respectively. That’s hopefully going to help folks get to the command line utilities quicker and easier, and the Critter Stack tools do get quite a bit of value out of those command line utilities.
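Opting into those command line utilities is now roughly a one-liner at the end of your Program file, something like this sketch:

var app = builder.Build();

// Hand off to the JasperFx command line handling, which adds commands
// for code generation, database management, and diagnostics
return await app.RunJasperFxCommands(args);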

We’ve now got a shared model to configure behavioral differences between “Development” and “Production” time for both Marten and Wolverine in one place, like this:

// These settings would apply to *both* Marten and Wolverine
// if you happen to be using both
builder.Services.CritterStackDefaults(x =>
{
    x.ServiceName = "MyService";
    x.TenantIdStyle = TenantIdStyle.ForceLowerCase;
    
    // You probably won't have to configure this often,
    // but if you do, this applies to both tools
    x.ApplicationAssembly = typeof(Program).Assembly;
    
    x.Production.GeneratedCodeMode = TypeLoadMode.Static;
    x.Production.ResourceAutoCreate = AutoCreate.None;

    // These are defaults, but showing for completeness
    x.Development.GeneratedCodeMode = TypeLoadMode.Dynamic;
    x.Development.ResourceAutoCreate = AutoCreate.CreateOrUpdate;
});

It might be a while before this pays off for us, but everything from the last couple paragraphs is also meant to speed up the development of additional Event Sourcing “Critter” tools that expand beyond PostgreSQL. Not that we’re even slightly backing off our investment in the do-everything PostgreSQL database!

For Marten 8.0, we’ve done a lot to make projections easier to use with explicit code, and added a new Stream Compacting feature for yet more scalability.

For Wolverine 4.0, we’ve improved Wolverine’s ability to support modular monolith architectures that might utilize multiple Marten stores or EF Core DbContext services targeting the same database or even different databases. More on this soon.

Wolverine 4.0 also gets some big improvements for EF Core users with a new Multi-Tenancy with EF Core feature.

Both Wolverine and Marten got some streamlined OpenTelemetry span naming changes that were suggested by Pascal Senn of ChiliCream, who collaborates with JasperFx for a mutual client.

For both Wolverine and Lamar 15, we added fuller support for the [FromKeyedServices] attribute and “keyed services” in the .NET Core DI abstractions, like this for a Wolverine handler:

    // From a test, just showing that you *can* do this
    // *Not* saying you *should* do that very often
    public static void Handle(UseMultipleThings command, 
        [FromKeyedServices("Green")] IThing green,
        [FromKeyedServices("Red")] IThing red)
    {
        green.ShouldBeOfType<GreenThing>();
        red.ShouldBeOfType<RedThing>();
    }

And inside of Lamar itself, any constructor function dependency decorated with the attribute is resolved the same way:

// Lamar will inject the IThing w/ the key "Red" here
public record ThingUser([FromKeyedServices("Red")] IThing Thing);
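
For completeness, the keyed registrations behind both snippets use the standard .NET 8 APIs, with GreenThing and RedThing being hypothetical IThing implementations:

// Standard keyed service registrations from the
// Microsoft.Extensions.DependencyInjection abstractions
builder.Services.AddKeyedSingleton<IThing, GreenThing>("Green");
builder.Services.AddKeyedSingleton<IThing, RedThing>("Red");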

Granted, Lamar already had its own version of keyed services, and even an equivalent to the [FromKeyedServices] attribute, long before these were added to the .NET DI abstractions and the ServiceProvider conforming container, but .NET is Microsoft’s world and lowly OSS projects pretty well have to conform to its abstractions sometimes.

Just for the record, StructureMap had an equivalent to keyed services in its first production release way back in 2004, back when David Fowler was probably in middle school making googly eyes at Rihanna.

What’s Next for the Critter Stack?

Honestly, I had to cut some corners on documentation to get the releases out for a JasperFx Software client, so I’ll be focused on that for most of this week. And of course, plenty of open issues and some outstanding pull requests didn’t make the release, so those hopefully get addressed in the next couple minor releases.

For the bigger picture, I think the rest of this year is:

  1. “CritterWatch”, our long planned (and not moving fast enough for my taste) management and observability console for both Marten and Wolverine.
  2. Improvements to Marten’s performance and scalability for Event Sourcing. We did a lot in that regard last year throughout Marten 7.*, but there’s another series of ideas to increase the throughput even further.
  3. Wolverine is getting a lot of user contributions right now, and I expect that the asynchronous messaging support especially will continue to grow. I would like to see us add CosmosDb support to Wolverine by the end of the year. By and large, I would like to increase Wolverine’s community usage overall by trying to grow the tool beyond just the folks already using Marten, but the Marten + Wolverine combination will hopefully continue to improve.
  4. More Critters? We’re still talking about a SQL Server-backed Event Store, with CosmosDb as a later alternative.

Wrapping Up

As for the wisdom of ever again making a release cycle where the entire Critter Stack gets a major release at the exact same time, let’s just say I have my doubts.

Finally, a lot of things didn’t make the release that folks wanted, heck, that I wanted, but at some point it becomes expensive for a project to maintain a long running “vNext” branch and you just have to make the release. I’m hopeful that even though these major releases didn’t add a ton of new functionality, they set us up with the right foundation for wherever the tools go next.

I also know that folks will have plenty of questions and will probably, even inevitably, run into problems or confusion with the new releases, especially until we can catch up on the documentation. I stole time from the family to get this stuff out over the weekend, so I’ll probably not be able to respond to anyone but JasperFx customers on Monday. After that, as with every big push, I promise to start digging into whatever problems folks run into.