Sneak Peek at the SignalR Integration in Wolverine 5.0

Earlier this week I did a live stream on the upcoming Wolverine 5.0 release where I lightly touched on the concept for our planned SignalR integration. While there wasn’t much to show then, a big pull request just landed, and I think the APIs and the approach have gelled enough that it’s worth a sneak peek.

First though, the new SignalR transport in Wolverine is being built now to support our planned “CritterWatch” tool shown below:

As it’s planned out right now, the “CritterWatch” server application will communicate via SignalR to constantly push updated information about system performance to any open browser dashboards. On the other side of things, CritterWatch users will be able to submit quite a number of commands or queries from the browser to CritterWatch, which will then relay those commands and queries to the various “Critter Stack” applications being monitored through asynchronous messaging. And of course, we expect responses and status updates to be constantly flowing from the monitored services to CritterWatch, which will then relay that information to the browsers, again via SignalR.

Long story short, there’s going to be a lot of asynchronous messaging back and forth between the three logical applications above, and this is where a new SignalR transport for Wolverine comes into play. Having the SignalR transport gives us a standardized way to send a number of different logical messages from the browser to the server and take advantage of everything that the normal Wolverine execution pipeline gives us, including relatively clean handler code compared to other messaging or “mediator” tools, baked in observability and traceability, and Wolverine’s error resiliency. Going back the other way, the SignalR transport gives us a standardized way to publish information right back to the client from our server.

Enough of that, let’s jump into some code. From the integration testing code, let’s say we’ve got a small web app configured like this:

var builder = WebApplication.CreateBuilder();

builder.WebHost.ConfigureKestrel(opts =>
{
    opts.ListenLocalhost(Port);
});

// Note to self: take care of this in the call
// to UseSignalR() below
builder.Services.AddSignalR();
builder.Host.UseWolverine(opts =>
{
    opts.ServiceName = "Server";
    
    // Hooking up the SignalR messaging transport
    // in Wolverine
    opts.UseSignalR();

    // These are just some messages I was using
    // to do end to end testing
    opts.PublishMessage<FromFirst>().ToSignalR();
    opts.PublishMessage<FromSecond>().ToSignalR();
    opts.PublishMessage<Information>().ToSignalR();
});

var app = builder.Build();

// Syntactic sugar, really just doing:
// app.MapHub<WolverineHub>("/messages");
app.MapWolverineSignalRHub();

await app.StartAsync();

// Remember this, because I'm going to use it in test code
// later
theWebApp = app;

With that configuration, when you call IMessageBus.PublishAsync(new Information("here's something you should know")); in your system, Wolverine will route that message through SignalR, where it will be received by a client with the default “ReceiveMessage” operation. The JSON delivered to the client is wrapped in the CloudEvents specification like this:

{ "type": "information", "data": { "message": "here's something you should know" } }

Likewise, Wolverine will expect messages posted to the server from the browser to be embedded in that lightweight CloudEvents compliant wrapper.
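To ground that, here’s a sketch of what a handler for a browser-submitted message could look like. The AskForInformation record and handler are hypothetical (only the Information message appears in the actual tests), but they show the clean handler shape described above: the returned Information message cascades back out to clients through the SignalR transport via the PublishMessage&lt;Information&gt;().ToSignalR() rule.

```csharp
// The Information message as implied by the article's example
public record Information(string Message);

// Hypothetical message posted from the browser over SignalR
public record AskForInformation(string Topic);

public static class AskForInformationHandler
{
    // A plain Wolverine handler. The returned Information message
    // "cascades" and would be routed back to clients over SignalR
    // by the PublishMessage<Information>().ToSignalR() rule
    public static Information Handle(AskForInformation msg)
        => new Information($"Latest update about {msg.Topic}");
}
```

Because the handler is just a pure function, it stays trivially unit-testable even though the message arrived over a websocket.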

Not coincidentally, we are also adding CloudEvents support to Wolverine 5.0 for broader interoperability.

For testing, the new WolverineFx.SignalR NuGet will also include a separate messaging transport built on the SignalR client, just to facilitate testing. You can see its usage in some of the testing code:

// This starts up a new host to act as a client to the SignalR
// server for testing
public async Task<IHost> StartClientHost(string serviceName = "Client")
{
    var host = await Host.CreateDefaultBuilder()
        .UseWolverine(opts =>
        {
            opts.ServiceName = serviceName;
            
            // Just pointing at the port where Kestrel is
            // hosting our server app that is running
            // SignalR
            opts.UseClientToSignalR(Port);
            
            opts.PublishMessage<ToFirst>().ToSignalRWithClient(Port);
            
            opts.PublishMessage<RequiresResponse>().ToSignalRWithClient(Port);
            
            opts.Publish(x =>
            {
                x.MessagesImplementing<WebSocketMessage>();
                x.ToSignalRWithClient(Port);
            });
        }).StartAsync();
    
    _clientHosts.Add(host);

    return host;
}

And now for a little Wolverine-esque spin. Let’s say you have a handler invoked when a browser sends a message through SignalR to a Wolverine server application, and as part of that handler you need to send a response message right back to the originating SignalR connection, meaning the right browser instance.

Conveniently enough, you have this helper to do exactly that in a pretty declarative way:

    public static ResponseToCallingWebSocket<WebSocketResponse> Handle(RequiresResponse msg) 
        => new WebSocketResponse(msg.Name).RespondToCallingWebSocket();

And just for fun, here’s the test that proves the above code works:

[Fact]
public async Task send_to_the_originating_connection()
{
    var green = await StartClientHost("green");
    var red = await StartClientHost("red");
    var blue = await StartClientHost("blue");

    var tracked = await red.TrackActivity()
        .IncludeExternalTransports()
        .AlsoTrack(theWebApp)
        .SendMessageAndWaitAsync(new RequiresResponse("Leo Chenal"));

    var record = tracked.Executed.SingleRecord<WebSocketResponse>();
    
    // Verify that the response went to the original calling client
    record.ServiceName.ShouldBe("red");
    record.Message.ShouldBeOfType<WebSocketResponse>().Name.ShouldBe("Leo Chenal");
}

And for one last trick, let’s say you want to work with grouped connections in SignalR so you can send messages to a subset of connected clients. In this case, I went down the Wolverine “side effect” route, as you can see in these example handlers:

// Declaring that you need the connection that originated
// this message to be added to the named SignalR client group
public static AddConnectionToGroup Handle(EnrollMe msg) 
    => new(msg.GroupName);

// Declaring that you need the connection that originated this
// message to be removed from the named SignalR client group
public static RemoveConnectionToGroup Handle(KickMeOut msg) 
    => new(msg.GroupName);

// The message wrapper here sends the raw message to
// the named SignalR client group
public static object Handle(BroadCastToGroup msg) 
    => new Information(msg.Message)
        .ToWebSocketGroup(msg.GroupName);

I should say that all of the code samples above are taken from our test coverage. At this point the next step is to pull this into our CritterWatch codebase to prove out the functionality, starting with the server side of what will be CritterWatch’s “Dead Letter Queue Console” for viewing, querying, and managing DLQ records for any of the Wolverine applications being monitored by CritterWatch.

For more context, here’s the live stream on Wolverine 5:

Live Stream Previewing Wolverine 5.0 on Thursday

I’ll be doing a live stream tomorrow (Thursday) August 4th to preview some of the new improvements coming soon with Wolverine 5.0. The highlights are:

  • The new “Partitioned Sequential Messaging” feature and why you’re going to love how it helps Wolverine-based systems sidestep concurrency problems
  • Improvements to the code generation and IoC usage within Wolverine.HTTP
  • The new SignalR transport and integration, and how we think this is going to make it easier to build asynchronous workflows between web clients and your backend services
  • More powerful interoperability w/ non-Wolverine services
  • How the Marten integration with Wolverine is going to be more performant by reducing network chattiness
  • Some thoughts about improving the code start times for Wolverine and Marten

And of course anything else folks want to discuss on the live stream as well.

Check it out here, and the recording will be up later tomorrow anyway:

Operations that Span Multiple Event Streams with the Critter Stack

Let’s just say that Marten gains some serious benefits from sitting on top of PostgreSQL and its very strong support for transactional integrity. Some of the high profile commercial Event Sourcing tools are spending a lot of time and energy on their “Dynamic Consistency Boundary” concept precisely because they lack the ACID-compliant transactions that Marten gets for free by riding on top of PostgreSQL.

Marten has long had the ability to support both reading and appending to multiple event streams at one time with guarantees about data consistency, and even the ability to achieve strongly consistent transactional writes across multiple streams at one time. Wolverine just added some syntactic sugar to make cross-stream command handlers more declarative with its “aggregate handler workflow” integration with Marten.

Let’s use the canonical example of moving money from one account to another, where both changes need to be persisted in one atomic transaction. We’ll start with a simplified domain model of events and a “self-aggregating” Account type like this:

public record AccountCreated(double InitialAmount);
public record Debited(double Amount);
public record Withdrawn(double Amount);

public class Account
{
    public Guid Id { get; set; }
    public double Amount { get; set; }

    public static Account Create(IEvent<AccountCreated> e)
        => new Account { Id = e.StreamId, Amount = e.Data.InitialAmount};

    public void Apply(Debited e) => Amount += e.Amount;
    public void Apply(Withdrawn e) => Amount -= e.Amount;
}

Moving on, here’s what a command handler that handles a TransferMoney command impacting two different accounts could look like:

public record TransferMoney(Guid FromId, Guid ToId, double Amount);

public static class TransferMoneyEndpoint
{
    [WolverinePost("/accounts/transfer")]
    public static void Post(
        TransferMoney command,

        [Aggregate(nameof(TransferMoney.FromId))] IEventStream<Account> fromAccount,
        
        [Aggregate(nameof(TransferMoney.ToId))] IEventStream<Account> toAccount)
    {
        // Would already 404 if either referenced account does not exist
        if (fromAccount.Aggregate.Amount >= command.Amount)
        {
            fromAccount.AppendOne(new Withdrawn(command.Amount));
            toAccount.AppendOne(new Debited(command.Amount));
        }
    }
}

The IEventStream<T> abstraction comes from Marten’s FetchForWriting() API, which is our recommended way to interact with Marten streams in typical command handlers. This API is used underneath Wolverine’s “aggregate handler workflow,” but is normally hidden from user-written code if you’re only working with one stream at a time. In this case, though, we need to work with the raw IEventStream<T> objects, which both wrap the projected aggregation of each Account and provide a point where we can explicitly append events to each event stream separately. FetchForWriting() guarantees that you get the most up-to-date information for the Account view of each event stream regardless of how you have configured Marten’s ProjectionLifecycle for Account (kind of an important detail here!).

The typical Marten transactional middleware within Wolverine calls SaveChangesAsync() for us on the Marten IDocumentSession unit of work for the command. If there are enough funds in the “From” account, this command appends a Withdrawn event to the “From” account and a Debited event to the “To” account. If either account has been written to since the original information was fetched, Marten will reject the changes and throw its ConcurrencyException as an optimistic concurrency check.

We could write a unit test for the “happy path,” where there are enough funds to cover the transfer, like this:

public class when_transfering_money
{
    [Fact]
    public void happy_path_have_enough_funds()
    {
        // StubEventStream<T> is a type that was recently added to Marten
        // specifically to facilitate testing logic like this
        var fromAccount = new StubEventStream<Account>(new Account { Amount = 1000 }){Id = Guid.NewGuid()};
        var toAccount = new StubEventStream<Account>(new Account { Amount = 100}){Id = Guid.NewGuid()};
        
        TransferMoneyEndpoint.Post(new TransferMoney(fromAccount.Id, toAccount.Id, 100), fromAccount, toAccount);

        // Now check the events we expected to be appended
        fromAccount.Events.Single().ShouldBeOfType<Withdrawn>().Amount.ShouldBe(100);
        toAccount.Events.Single().ShouldBeOfType<Debited>().Amount.ShouldBe(100);
    }
}
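It could be worth pairing that with a “sad path” test — my own addition here, reusing the same types and Marten’s StubEventStream from the example above — proving the guard clause appends nothing when funds are insufficient:

```csharp
[Fact]
public void sad_path_insufficient_funds()
{
    var fromAccount = new StubEventStream<Account>(new Account { Amount = 50 }){Id = Guid.NewGuid()};
    var toAccount = new StubEventStream<Account>(new Account { Amount = 100 }){Id = Guid.NewGuid()};

    // Try to move more money than the "From" account holds
    TransferMoneyEndpoint.Post(new TransferMoney(fromAccount.Id, toAccount.Id, 500), fromAccount, toAccount);

    // The guard clause should have prevented any events from being appended
    fromAccount.Events.ShouldBeEmpty();
    toAccount.Events.ShouldBeEmpty();
}
```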

Alright, so there are a few remaining items we still need to improve over time:

  1. Today there’s no way to pass in the expected starting version of each individual stream
  2. There’s some ongoing work to allow Wolverine to intelligently parallelize work between business entities or event streams while doing work sequentially within a business entity or event stream to side step concurrency problems
  3. We’re working toward making Wolverine utilize Marten’s batch querying support any time you use Wolverine’s declarative persistence helpers against Marten and request more than one item from Marten. You can use Marten’s batch querying with its FetchForWriting() API today if you just drop down to the lower level and work directly against Marten, but wouldn’t it be nice if Wolverine would just do that automatically for you in cases like the TransferMoney command handler above? We think this will be a significant performance improvement because network round trips are evil.

I covered this example at the end of a live stream we did last week on Event Sourcing with the Critter Stack:

Need Some Feedback on Near Term Wolverine Work

That’s supposed to be a play on a Wolverine as Winnie the Pooh in his “thinking spot”

I’m wrestling a little bit with whether the new features and changes coming into Wolverine very soon are worthy of a 5.0 release even though 4.0 was just a couple months ago. I’d love any and all feedback about this. I’d also like to ask for help from the community to kick the tires on any alpha/beta/RC releases we might make with these changes.

Wolverine development is unusually busy right now as new feature requests are streaming in from JasperFx customers and users as Wolverine usage has increased quite a bit this year. We’re only a couple months out from the Wolverine 4.0 release (and Marten 8.0 that was a lot bigger). I wrote about Critter Stack futures just a month ago, but things have already changed since then, so let’s do this again.

Right now, here are the major initiatives happening or planned for the near future for Wolverine in what I think is probably the priority order:

TPL DataFlow to Channels

I’m actively working on replacing both Marten’s and Wolverine’s dependency on the TPL DataFlow library with System.Threading.Channels. This is something I wanted to do for 4.0, but there wasn’t enough time. Because of some issues with TPL DataFlow that a JasperFx client hit under load, and because of the planned “concurrency resistant parallelism” feature work I’ll discuss next, I wanted to start using Channels now. I suspect this change by itself justifies a Wolverine 5.0 release even though the public APIs aren’t changing. I would expect some improvement in performance from this change, but I don’t have hard numbers yet. What do you think? I’ll have this done in a local branch by the end of the day.

“Concurrency Resistant Parallelism”

For lack of a better name, we’re planning some “concurrency resistant parallelism” features for Wolverine. Roughly, this is teaching Wolverine how to better parallelize *or* order messages in a system so that you can maximize throughput (parallelism) without incurring concurrent writes to resources or entities that are sensitive to concurrent write problems (*cough* Marten event streams *cough*). I’d ask you to just look at the GitHub issue I linked. This is to maximize throughput for an important JasperFx client who frequently gets bursts of messages related to the same event stream, but this has also been a frequent issue for quite a few users, and we hope it will be a hugely strategic addition to Wolverine.

Interoperability

Improving the interoperability options between Wolverine and non-Wolverine applications. There’s already some work underway, but I think this might be a substantial effort out of sheer permutations. At a minimum, I’m hoping we have OOTB compatibility with both NServiceBus and MassTransit for all supported message transports in Wolverine, and not just RabbitMQ like we do today. Largely based on a pull request from the community, we’ll also make it easier to build out custom interoperability with non-Wolverine applications. And lastly, there’s enough interest in CloudEvents to push that through as well.

Integrating with Marten’s Batch Querying / Optimizing Multi-Event Stream Operations

Make the “Critter Stack” the best Event Store / Event Driven Architecture platform on the freaking planet for working with multiple event streams at the same time. Mostly because it would just be flat out sexy, I’m interested in enhancing Wolverine’s integration with Marten to opt into Marten’s batch querying API under the covers when you use the declarative persistence options or the aggregate handler workflow in Wolverine. This would:

  • Improve performance, because network chattiness is very commonly an absolute performance killer in enterprise-y systems, especially for teams that get a little too academic with Clean/Onion Architecture approaches
  • Offer what we hope will be a superior alternative, in terms of usability, testability, and performance, for working with multiple event streams at one time compared to the complex “Dynamic Consistency Boundary” idea coming out of some of the commercial event store companies right now
  • Further Wolverine’s ability to craft much simpler post-Clean Architecture codebases for better productivity and longer term maintenance. Seriously, I really do believe that Clean/Onion Architecture approaches absolutely strangle systems in the longer term because the code easily becomes too difficult to reason about.

IoC Usage

Improve Wolverine’s integration with IoC containers, especially for HTTP usage. I’d like to consider introducing an “opt out” setting where Wolverine asserts and fails at bootstrapping time if any message handler or HTTP endpoint can’t use Wolverine’s inlined code generation and has to revert to service location, unless users explicitly say they will allow it.

Wolverine.HTTP Improvements

Expanded support in Wolverine.HTTP for [AsParameters] usage, probably some rudimentary “content negotiation,” and multi-part uploads. Really just filling some holes in Wolverine.HTTP’s current support as more people use that library.

SignalR

A formal SignalR integration for Wolverine, which will most likely drop out of our ongoing “Critter Watch” development. Think about having a first class transport option for Wolverine that lets you quickly integrate messaging to and from a web application via SignalR.

Cold Start Optimization

Optimizing the Wolverine “Cold Start Time.” I think that’s self explanatory. This work might span into Marten and even Lamar as well. I’m not going to commit to AOT compatibility in the Critter Stack this year because I like actually getting to see my family sometimes, but this work might get us closer to that for next year.

Improved Declarative Persistence in Wolverine

To continue a consistent theme about how Wolverine is becoming the antidote to high ceremony Clean/Onion Architecture approaches, Wolverine 4.8 added some significant improvements to its declarative persistence support (partially after seeing how a recent JasperFx Software client was encountering a little bit of repetitive code).

A pattern I try to encourage — and many Wolverine users do like — is to make the main method of a message handler or an HTTP endpoint be the “happy path” after validation and even data lookups, so that the method can be a pure function that’s mostly concerned with business or workflow logic. Wolverine can do this for you through its “compound handler” support, which gets you a low ceremony flavor of Railway Programming.

With all that out of the way, I saw a client frequently writing code something like this endpoint that would need to process a command that referenced one or more entities or event streams in their system:

public record ApproveIncident(Guid Id);

public class ApproveIncidentEndpoint
{
    // Try to load the referenced incident
    public static async Task<(Incident, ProblemDetails)> LoadAsync(
        
        // Say this is the request body, which we can *also* use in
        // LoadAsync()
        ApproveIncident command, 
        
        // Pulling in Marten
        IDocumentSession session,
        CancellationToken cancellationToken)
    {
        var incident = await session.LoadAsync<Incident>(command.Id, cancellationToken);
        if (incident == null)
        {
            return (null, new ProblemDetails { Detail = $"Incident {command.Id} cannot be found", Status = 400 });
        }

        return (incident, WolverineContinue.NoProblems);
    }

    [WolverinePost("/api/incidents/approve")]
    public static SomeResponse Post(ApproveIncident command, Incident incident)
    {
        // actually do stuff knowing that the Incident is valid
        return new SomeResponse();
    }
}

I’d ask you to mostly pay attention to the LoadAsync() method, and imagine copying and pasting it dozens of times in a system. And sure, you could go back to returning IResult as a continuation from the HTTP endpoint method above, but that moves clutter back into your HTTP method and adds more manual work to mark up the method with attributes for OpenAPI metadata. Or we could improve the OpenAPI metadata generation by returning something like Task<Results<Ok<SomeResponse>, ProblemHttpResult>>, but c’mon, that’s an absolute eyesore that detracts from the readability of the code.

Instead, let’s use the newly enhanced version of Wolverine’s [Entity] attribute to simplify the code above and still get OpenAPI metadata generation that reflects both the 200 SomeResponse happy path and 400 ProblemDetails with the correct content type. That would look like this:

    [WolverinePost("/api/incidents/approve")]
    public static SomeResponse Post(
        // The request body. Wolverine doesn't require [FromBody], but it wouldn't hurt
        ApproveIncident command, 
        
        [Entity(OnMissing = OnMissing.ProblemDetailsWith400, MissingMessage = "Incident {0} cannot be found")]
        Incident incident)
    {
        // actually do stuff knowing that the Incident is valid
        return new SomeResponse();
    }

Behaviorally, at runtime that endpoint will try to load the Incident entity from whatever persistence tooling is configured for the application (Marten in the tests), using the “Id” property of the ApproveIncident object deserialized from the HTTP request body. If the data cannot be found, the HTTP request ends with a 400 status code and a ProblemDetails response with the configured message above. If the Incident can be found, it’s happily passed along to the main endpoint method.

Not every endpoint or message handler is really this simple, but plenty of times you would just be changing a property on the incident and persisting it. The endpoint can *still* be mostly a pure function with the existing persistence helpers in Wolverine, like so:

    [WolverinePost("/api/incidents/approve")]
    public static (SomeResponse, IStorageAction<Incident>) Post(
        // The request body. Wolverine doesn't require [FromBody], but it wouldn't hurt
        ApproveIncident command, 
        
        [Entity(OnMissing = OnMissing.ProblemDetailsWith400, MissingMessage = "Incident {0} cannot be found")]
        Incident incident)
    {
        incident.Approved = true;
        
        // actually do stuff knowing that the Incident is valid
        return (new SomeResponse(), Storage.Update(incident));
    }

Here are some things I’d like you to know about the [Entity] attribute above and how it will work out in real usage:

  • There is some default conventional magic going on to “decide” how to find the identity value for the entity being loaded (“IncidentId” or “Id” on the command type or request body type, then the same names in the route values or declared query string values for HTTP endpoints). This can be explicitly configured on the attribute with something like [Entity(nameof(ApproveIncident.Id))]
  • Every attribute type that I’m mentioning in this post that can be applied to method parameters supports the same identity logic as I explained in the previous bullet
  • Before Wolverine 4.8, the “on missing” behavior was to simply set a 404 status code in HTTP or log that required data was missing in message handlers and quit. Wolverine 4.8 adds the ability to control the “on missing” behavior
  • This new “on missing” behavior is available on the older [Document] attribute in Wolverine.Http.Marten, and [Document] is now a direct subclass of [Entity] that can be used with either message handlers or HTTP endpoints
  • The existing [AggregateHandler] and [Aggregate] attributes that are part of the Wolverine + Marten “aggregate handler workflow” (the “C” in CQRS) now support this “on missing” behavior, but it’s “opt in,” meaning that you have to use [Aggregate(Required = true)] to get the gating logic. We had to make that check opt in to avoid breaking existing behavior when folks upgrade.
  • The lighter weight [ReadAggregate] in the Marten integration also standardizes on this “OnMissing” behavior
  • Because of the confusion I was seeing from some users between [Aggregate], which is meant for writing events and is a little heavier at runtime than [ReadAggregate], there’s a new [WriteAggregate] attribute with identical behavior to [Aggregate] that is now available for message handlers as well. I think [Aggregate] might get deprecated soon-ish to sidestep the potential confusion
  • [Entity] attribute usage is 100% supported for EF Core and RavenDb as well as Marten. Wolverine is even smart enough to select the correct DbContext type for the declared entity
  • If you coded with any of that [Entity] or Storage stuff and switched persistence tooling, your code should not have to change at all
  • There’s no runtime Reflection going on here. The usage of [Entity] is impacting Wolverine’s code generation around your message handler or HTTP endpoint methods.

The options so far for the “OnMissing” behavior are:

public enum OnMissing
{
    /// <summary>
    /// Default behavior. In a message handler, the execution will just stop after logging that the data was missing. In an HTTP
    /// endpoint the request will stop w/ an empty body and 404 status code
    /// </summary>
    Simple404,
    
    /// <summary>
    /// In a message handler, the execution will log that the required data is missing and stop execution. In an HTTP
    /// endpoint the request will stop w/ a 400 response and a ProblemDetails body describing the missing data
    /// </summary>
    ProblemDetailsWith400,
    
    /// <summary>
    /// In a message handler, the execution will log that the required data is missing and stop execution. In an HTTP
    /// endpoint the request will stop w/ a 404 status code response and a ProblemDetails body describing the missing data
    /// </summary>
    ProblemDetailsWith404,
    
    /// <summary>
    /// Throws a RequiredDataMissingException using the MissingMessage
    /// </summary>
    ThrowException
}

The Future

This improvement to declarative data access is meant to be part of a bigger effort to address broader use cases. Not every command or query involves just one single entity lookup or one single Marten event stream, so what do you do when there are multiple declarations for data lookups?

I’m not sure what everyone else’s experience is, but a leading cause of performance problems in the systems I’ve helped with over the past decade has been too much chattiness between the application servers and the database. The next step with the declarative data access is to have at least the Marten integration opt into using Marten’s batch querying mechanism to improve performance by batching up requests in fewer network round trips any time there are multiple data lookups in a single HTTP endpoint or message handler.
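To make that concrete, here’s roughly the kind of call pattern Wolverine would be generating for you, sketched by hand against Marten’s existing batch querying API (the Incident type and the id variables are placeholders from the earlier example, and `session` is assumed to be a Marten IQuerySession):

```csharp
// Two document loads batched into a single network round trip
// using Marten's batch querying API
var batch = session.CreateBatchQuery();
var incident1 = batch.Load<Incident>(firstId);
var incident2 = batch.Load<Incident>(secondId);

// One round trip to PostgreSQL executes every query in the batch
await batch.Execute();

var first = await incident1;
var second = await incident2;
```

The point of the planned work is that you would never write this by hand; two [Entity] parameters on one handler would compile down to something like it.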

The step after that is to also enroll our Marten integration for command handlers so that you can craft message handlers or HTTP endpoints that work against 2 or more event streams with strong consistency and transactional support while also leveraging the Marten batch querying for all the efficiency we can wring out of the tooling. I mostly want to see this behavior because I’ve seen clients who could actually use what I was just describing as a way to make their systems more efficient and remove some repetitive code.

I’ll also admit that I think this capability to have an alternative “aggregate handler workflow” that allows you to work efficiently with more than one event stream and/or projected aggregate at one time would put the Critter Stack ahead of the commercial tools pursuing “Dynamic Consistency Boundaries” with what I’ll be arguing is an easier to use alternative.

It’s already possible to work transactionally with multiple event streams at one time with strong consistency and both optimistic and exclusive version protections, but there’s opportunity for performance optimization here.

Summary

Pride goeth before destruction, and an haughty spirit before a fall.

Proverbs 16:18 in the King James version

With the quote above out of the way, let’s jump into some cocky salesmanship! My hope and vision for the Critter Stack is that it becomes the most effective tooling for building typical server side software systems. My personal vision and philosophy for making software development more productive and effective over time is to ruthlessly reduce repetitive code and eliminate code ceremony wherever possible. Our community’s take is that we can achieve improved results compared to more typical Clean/Onion/Hexagonal Architecture codebases by compressing and compacting code down without ever sacrificing performance, resiliency, or testability.

The declarative persistence helpers in this article are, I believe, a nice example of the evolving “Critter Stack Way.”

Critter Stack Futures for the rest of 2025

It’s the halfway point of 2025 somehow, and we’ve now gotten past the big Marten 8.0 and Wolverine 4.0 releases. Right before I go on vacation next week, I thought it would be a good time to jot down some thoughts about where the Critter Stack might go for the rest of 2025 and probably into 2026.

Critter Watch

The big ticket item is our ongoing work on “Critter Watch,” which will be a commercial management and observability add-on for Wolverine, Marten, and any future Critter tools. The top line pitch for Critter Watch is that it will help you know what your applications are doing, how they interact with each other, and whether they’re healthy in production, and it will provide features to help heal the inevitable production problems when they happen.

The general idea is to have a standalone application deployed that acts as a management console for one or more Wolverine applications in our users’ environments:

Up front for the Critter Watch MVP (and per requests from a client), we’re focused on:

  • Visualizing the systems being monitored, their Wolverine and Marten configuration, and the capabilities of the systems. We’re currently researching AsyncAPI publishing and visualization as well. The whole point of this is to help teams understand how the messages in their systems are handled, published, and routed
  • Event Sourcing management, but this is mostly about managing the execution of asynchronous projections and subscriptions at runtime and being able to understand the ongoing performance or any ongoing problems
  • Dead letter queue management for Wolverine

I have less clarity over development time tooling, but we’re at least interested in having some of Critter Watch usable as an embedded tool during development.

After years of talking about this and quite a bit of envisioning, development started in earnest over the past 6 weeks with a stretch goal of having a pilot usage by the end of July for a JasperFx Software client.

I do not have any hard pricing numbers yet, but we are very interested in talking to anyone who would be interested in Critter Watch.

Concurrency, Concurrency, Concurrency!

I think that systems built with Event Sourcing are a little more sensitive to concurrent data reads and writes, or maybe it’s just that those problems are there all the time but more readily observable with Event Sourcing and Event Driven Architectures. In my work with JasperFx Software clients, concurrency is probably the most common subject of questions.

Today you mostly deal with this either by building in selective retry capabilities based on version conflict detection, or by getting fancier with queueing and message routing to eliminate the concurrent access as much as possible. Or both, of course.
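As a sketch of the first option, the retry can be as simple as catching Marten’s ConcurrencyException and re-running the whole operation a bounded number of times. This is an illustrative helper only, not a prescribed Wolverine recipe (Wolverine’s own error handling policies can express retries declaratively):

```csharp
// Illustrative sketch: re-run an operation when Marten detects a
// version conflict on the event stream. The command delegate should
// reload its state on each attempt so the retry sees fresh data.
public static async Task ExecuteWithRetriesAsync(
    Func<Task> command, int maxAttempts = 3)
{
    for (var attempt = 1; ; attempt++)
    {
        try
        {
            await command();
            return;
        }
        catch (Marten.Exceptions.ConcurrencyException) when (attempt < maxAttempts)
        {
            // Another writer won the race; back off briefly so the
            // competing writers are less likely to collide again
            await Task.Delay(Random.Shared.Next(25, 100));
        }
    }
}
```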

A great way to side step the concurrent access while not sacrificing throughput through parallelization is to use Wolverine’s support for Azure Service Bus Session Identifiers and FIFO Queues.

Which is great, but what if you’re not using Azure Service Bus? What if you’re only using local queueing? And wouldn’t it be nice if the existing Azure Service Bus FIFO support was a little less cumbersome to use in your code?

I don’t have a ton of detail yet, but there’s a range of internal proposals to create new recipes for Wolverine that would let teams more easily “shard” logical work between queues, and within the local workers listening to those queues, to improve Wolverine’s handling of concurrent access without sacrificing parallel work and throughput or requiring repetitive code. Some of this is being done in collaboration with JasperFx clients.

Improving Wolverine’s Declarative Data Access

For lack of a better description, Wolverine has a feature set I’m for now calling “declarative data access” with the [Entity] attribute that triggers code generation within message handlers or HTTP endpoints to load requested data from Marten, EF Core, or RavenDb. And of course, there’s also what we call the “aggregate handler workflow” recipe for using the Decider pattern with Wolverine and Marten, which I think is the simplest way to express business logic when using Event Sourcing in the .NET ecosystem.

To take these productivity features even farther, I think we’ll add:

  1. More control over what action to take if an entity is missing. Today, the HTTP endpoints will just return a 404 status code if required entities can’t be found. In future versions, we’ll let you customize log or ProblemDetails messages and have more control over how Wolverine generates the “if missing” path.
  2. At least for Marten, opt into Marten’s batch querying support if you are using more than one of any combination of the existing [Aggregate], [ReadAggregate], [Entity], or [Document] attributes to load data within a single HTTP endpoint or message handler, as a way of improving performance by reducing network round trips to the database. And don’t sneeze at that: chattiness is a common performance killer in enterprise applications, especially when the code is unnecessarily complicated by typical usages of Clean or Onion Architecture approaches.
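To show what that buys you, Marten’s existing batched query API already lets you combine several loads into a single database round trip; the session, id values, and document types below are assumed for illustration:

```csharp
// One round trip to PostgreSQL for several document loads,
// instead of one trip per Load() call
var batch = session.CreateBatchQuery();

// Each call registers pending work and hands back a Task for the result
var orderTask = batch.Load<Order>(orderId);
var customerTask = batch.Load<Customer>(customerId);

// Executes everything as a single batched command
await batch.Execute();

var order = await orderTask;
var customer = await customerTask;
```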

If you follow Event Sourcing related topics online, you’ll hear quite a bit of buzz from some of the commercial tools about “Dynamic Consistency Boundaries” (DCB). We get asked about this with Marten occasionally, but the Marten core team’s position is that Marten doesn’t require this feature because you can already do “read” and “write” operations across multiple event streams with transactional integrity as is.

What the batch querying I just described will do for Marten though is make the full “Critter Stack” usage be more performant when you need to potentially work with more than one event stream at a time with all the transactional support and strong consistency that Marten (really PostgreSQL) already provides.

For Marten users, this is essentially making Marten’s FetchForWriting() API able to enroll in batch querying for more efficient data querying when working across streams. That work is actually well underway.

But if you prefer to use the fancier and more novel DCB approaches that aren’t even officially released yet, feel free to pay out some big bucks to use one of the commercial tools.

Smaller, But Still Important Work!

  • Partially for Critter Watch, Wolverine should support connecting to multiple brokers in a single application for each transport type. Some of this is already done, with Kafka being next up, but we need to add this to every transport
  • Improved interoperability support for Wolverine talking to non-Wolverine applications. There’s an existing pull request that goes quite a ways for this, but this might end up being more a documentation effort than anything else
  • More options in Wolverine with Marten or just Marten for streaming Marten data as JSON directly to HTTP. We have some support already of course, but there are more opportunities for expanding that
  • Exposing an MCP server off of Marten event data, but I have very little detail about what that would be. I would be very interested in partnering with a company who wanted to do this, and a JasperFx client might be working with us later this year on AI with Marten
  • Improving throughput in Marten’s event projections and subscriptions. We’ve done a lot over the past couple of years, but there are still some ideas in the backlog we haven’t pursued yet
  • Expanding Wolverine support for more database engines, with CosmosDb the most likely contender this year. This might be contingent upon client work of course.

What about the SQL Server backed Event Store?

Yeah, I don’t know. We did a ton of work in Marten 8 to pull what will be common code out in a way that it could be reused in the SQL Server backed event store. I do not know when we might work on this as CritterWatch will take priority for now.

And finally….

And on that note I’m essentially on vacation for a week and I’ll catch up with folks in late July.

Low Ceremony Railway Programming with Wolverine

Railway Programming is an idea that came out of the F# community as a way to handle “sad path” cases without resorting to throwing .NET exceptions for flow control. It works by chaining together functions with a standardized response type, such that it’s relatively easy to abort a workflow when a preliminary step finds invalid input, while still passing the result of each function as the input to the next function.
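The classic C# rendition of that idea is a hand-rolled Result type with a “bind” operation. This is just a minimal sketch to illustrate the chaining; the type and method names are my own, not part of any library:

```csharp
// A minimal Result type: each step either carries a value forward
// or short-circuits the rest of the chain with an error
public record Result<T>(T? Value, string? Error)
{
    public bool IsSuccess => Error is null;

    public static Result<T> Ok(T value) => new(value, null);
    public static Result<T> Fail(string error) => new(default, error);

    // "Then" invokes the next step only when the previous one succeeded,
    // otherwise it just propagates the failure
    public Result<TNext> Then<TNext>(Func<T, Result<TNext>> next)
        => IsSuccess ? next(Value!) : Result<TNext>.Fail(Error!);
}

// Usage: the pipeline aborts at the first failing step
// var outcome = Validate(command).Then(LoadOrder).Then(Ship);
```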

Wolverine has some direct support for a quasi-Railway Programming approach by moving validation or data loading steps prior to the main message handler or HTTP endpoint logic. Let’s jump into a quick sample that works with either message handlers or HTTP endpoints using the built in HandlerContinuation enum:

public static class ShipOrderHandler
{
    // This would be called first
    public static async Task<(HandlerContinuation, Order?, Customer?)> LoadAsync(ShipOrder command, IDocumentSession session)
    {
        var order = await session.LoadAsync<Order>(command.OrderId);
        if (order == null)
        {
            return (HandlerContinuation.Stop, null, null);
        }

        var customer = await session.LoadAsync<Customer>(command.CustomerId);

        // Stop here as well if the customer doesn't exist, since the
        // main Handle() method requires a non-null Customer
        if (customer == null)
        {
            return (HandlerContinuation.Stop, null, null);
        }

        return (HandlerContinuation.Continue, order, customer);
    }

    // The main method becomes the "happy path", which also helps simplify it
    public static IEnumerable<object> Handle(ShipOrder command, Order order, Customer customer)
    {
        // use the command data, plus the related Order & Customer data to
        // "decide" what action to take next

        yield return new MailOvernight(order.Id);
    }
}

By naming convention (though you can override the method naming with attributes as you see fit), Wolverine will generate code that calls methods named Before/Validate/Load(Async) before the main message handler method or the HTTP endpoint method. You can use this compound handler approach to do setup work like loading data required by business logic in the main method, or, as in this case, validation logic that can stop further processing based on failed validation, data requirements, or system state. Some Wolverine users like using these methods to keep the main methods relatively simple and focused on the “happy path”, with business logic in pure functions that are easier to unit test in isolation.

By returning a HandlerContinuation value, either by itself or as part of a tuple, from a Before, Validate, or LoadAsync method, you can direct Wolverine to stop all other processing.

You have more specialized ways of doing that in HTTP endpoints by using the ProblemDetails specification, like this example that uses a Validate() method to potentially stop processing with a descriptive 400 response and error message:

public record CategoriseIncident(
    IncidentCategory Category,
    Guid CategorisedBy,
    int Version
);

public static class CategoriseIncidentEndpoint
{
    // This is Wolverine's form of "Railway Programming"
    // Wolverine will execute this before the main endpoint,
    // and stop all processing if the ProblemDetails is *not*
    // "NoProblems"
    public static ProblemDetails Validate(Incident incident)
    {
        return incident.Status == IncidentStatus.Closed 
            ? new ProblemDetails { Detail = "Incident is already closed" } 
            
            // All good, keep going!
            : WolverineContinue.NoProblems;
    }
    
    // This tells Wolverine that the first "return value" is NOT the response
    // body
    [EmptyResponse]
    [WolverinePost("/api/incidents/{incidentId:guid}/category")]
    public static IncidentCategorised Post(
        // the actual command
        CategoriseIncident command, 
        
        // Wolverine is generating code to look up the Incident aggregate
        // data for the event stream with this id
        [Aggregate("incidentId")] Incident incident)
    {
        // This is a simple case where we're just appending a single event to
        // the stream.
        return new IncidentCategorised(incident.Id, command.Category, command.CategorisedBy);
    }
}

The value WolverineContinue.NoProblems tells Wolverine that everything is good, full speed ahead. Anything else will write the ProblemDetails value out to the response, return a 400 status code (or whatever you decide to use), and stop processing. Returning a ProblemDetails object hopefully makes these filter methods easy to unit test themselves.
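For instance, assuming the Incident type exposes a settable Status property (its full shape isn’t shown above), a unit test for the Validate() method needs no HTTP infrastructure at all:

```csharp
[Fact]
public void stop_with_problem_details_when_incident_is_closed()
{
    // Assumes Incident has a settable Status property
    var incident = new Incident { Status = IncidentStatus.Closed };

    var problems = CategoriseIncidentEndpoint.Validate(incident);

    // Anything other than the NoProblems sentinel halts the request
    Assert.NotSame(WolverineContinue.NoProblems, problems);
    Assert.Equal("Incident is already closed", problems.Detail);
}
```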

You can also use the ASP.NET Core IResult as another formally supported “result” type in these filter methods, as shown below:

public static class ExamineFirstHandler
{
    public static bool DidContinue { get; set; }
    
    public static IResult Before([Entity] Todo2 todo)
    {
        return todo != null ? WolverineContinue.Result() : Results.Empty;
    }

    [WolverinePost("/api/todo/examinefirst")]
    public static void Handle(ExamineFirst command) => DidContinue = true;
}

In this case, the “special” value WolverineContinue.Result() tells Wolverine to keep going; otherwise, Wolverine will execute the IResult returned from one of these filter methods and stop all other processing for the HTTP request.

It’s maybe a shameful approach for folks who are more aligned with a Functional Programming philosophy, but you could also use a signature like:

[WolverineBefore]
public static UnauthorizedHttpResult? Authorize(SomeCommand command, ClaimsPrincipal user)

In the case above, Wolverine will do nothing if the return value is null, but if there is a result, Wolverine will execute the UnauthorizedHttpResult response and stop any further processing. There is *some* minor value to expressing the actual IResult type above because it can be used to help generate OpenAPI metadata.
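A possible body for that signature, with a purely illustrative “admin” role check, might look like:

```csharp
[WolverineBefore]
public static UnauthorizedHttpResult? Authorize(SomeCommand command, ClaimsPrincipal user)
{
    // The "admin" role requirement is an assumption for this sketch
    return user.IsInRole("admin")
        ? null                         // null means "keep going"
        : TypedResults.Unauthorized(); // this result is executed and processing stops
}
```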

Lastly, let’s think about the very common need to write an HTTP endpoint where you want to return a 404 status code if the requested data doesn’t exist. In many cases the API user is supplying the identity value for an entity, and your HTTP endpoint will first query for that data, and if it doesn’t exist, abort the processing with the 404 status code. Wolverine has some built in help for this tedious task through its unique persistence helpers as shown in this sample HTTP endpoint below:

    [WolverineGet("/orders/{id}")]
    public static Order GetOrder([Entity] Order order) => order;

Note the presence of the [Entity] attribute for the Order argument to this HTTP endpoint route. That’s telling Wolverine that that data should be loaded using the “id” route argument as the Order key from whatever persistence mechanism in your application deals with the Order entity, which could be Marten of course, an EF Core DbContext that has a mapping for Order, or Wolverine’s RavenDb integration. Unless we purposely mark [Entity(Required = false)], Wolverine.HTTP will return a 404 status code if the Order entity does not exist. The simplistic sample from Wolverine’s test suite above doesn’t do any kind of mapping from the raw Order to a view model, but the mechanics of the [Entity] loading would work equally well if you also mapped the raw Order to some kind of OrderViewModel.

Last Thoughts

I’m pushing Wolverine users and JasperFx clients to utilize Wolverine’s quasi Railway Programming capabilities as guard clauses, to better separate validation or error condition handling into easily spotted, atomic operations while reducing the core HTTP request or message handler to a “happy path” operation. This is especially true in HTTP services, where the ProblemDetails specification and its integration with Wolverine fit well with this pattern, and where I’d expect many HTTP client tools to already know how to work with problem details responses.

There have been a few attempts to adapt Railway Programming to C# that I’m aware of, inevitably using some kind of custom Result type that denotes success or failure and carries the input for the next function. I’ve also seen some folks and OSS tools try to chain functions together with nested lambdas within a fluent interface. I’m not a fan of either approach, because I think the custom Result types just add code noise and extra mechanical work, and the fluent interface approach can easily be nasty to debug and detracts from readability through the extra code noise. But anyway, read a lot more about this in Andrew Lock’s Series: Working with the result pattern and make up your own mind.

I’ve also seen an approach where folks used MediatR handlers for each individual step in the “railway,” where each handler had to return a custom Result type carrying the inputs for the next handler in the series. I beg you, please don’t do this in your own system. It leads to way too much complexity, code that’s much harder to reason about because of the extra hoops and indirection, and potentially poor system performance: because you can’t easily see what the code is doing, you can end up making unnecessary duplicate database round trips or just being way too “chatty” with the database. And no, replacing MediatR handlers with Wolverine handlers is not going to help, because the pattern was the problem and not MediatR itself.

As always, the Wolverine philosophy is that the path to long term success in enterprise-y software systems is by relentlessly eliminating code ceremony so that developers can better reason about how the system’s logic and behavior works. To a large degree, Wolverine is a reaction to the very high ceremony Clean/Onion Architecture/iDesign architectural approaches of the past 15-20 years and how hard those systems can be to deal with over time.

And as happens with just about any halfway good thing in programming, some folks overused the Railway Programming idea and there’s a little bit of pushback or backlash to the technique. I can’t find the quote to give it the real attribution, but something I’ve heard Martin Fowler say is that “we don’t know how useful an idea really can be until we push it too far, then pull back a little bit.”

Making Event Sourcing with Marten Go Faster

You’re about to start a new system with Event Sourcing using Marten. You’re expecting your system to be hugely successful and to handle a huge amount of data, and you’re already starting with pretty ambitious non-functional requirements for the system to be highly performant and for all the screens or exposed APIs to be snappy.

Basically, what you want to do is go as fast as Marten and PostgreSQL will allow. Fortunately, Marten has a series of switches and dials that can be configured to squeeze out more performance, but which, for a variety of historical reasons and possible drawbacks, are not the defaults for a barebones Marten configuration as shown below:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));
});

Cut me some slack in my car choice for the analogy here. I’m not only an American, but I’m an American from a rural area who grew up dreaming about having my own Mustang or Camaro because that’s as far out as I could possibly imagine back then.

At this point, what we have is the equivalent of a street legal passenger car, maybe an off the shelf Mustang:

That probably goes fast enough for everyday usage for the vast majority of us most of the time. But we really need a fully tricked out Mustang GTD that’s absurdly optimized to just flat out go fast:

Let’s start trimming weight off our street legal Marten setup to go faster with…

Opt into Lightweight Sessions by Default

Since we’re starting from a new system, and so don’t care about breaking existing code by changing behavior, let’s opt for lightweight sessions by default:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));
})
    
// Jettison some "Identity Map" weight by going lighter weight    
.UseLightweightSessions();

By default, the instances of IDocumentSession you get out of an IoC container would utilize the Identity Map feature to track loaded entities by id so that if you happened to try to load the same entity from the same session, you would get the exact same object. As I’m sure you can imagine, that means that every entity fetched by a session is stuffed into a dictionary internally (Marten uses the highly performant ImTools ImHashMap everywhere, but still), and the session also has to bounce through the dictionary before loading data as well. It’s just a little bit of overhead we can omit by opting for “Lightweight Sessions” if we don’t need that behavior by default.

We’ve always been afraid to change the default behavior here to the more efficient approach because it can absolutely lead to breaking existing code that depends on the Identity Map behavior. On the flip side, I think you should not need Identity Map mechanics if you can keep the call stacks within your code short enough that you can actually “see” where you might be trying to load the same data twice or more in the same parent operation.
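To make the difference concrete, this sketch (assuming an existing IDocumentStore named store and a known Order document id) shows the behavior the identity map adds:

```csharp
// Identity map session: the second load of the same id returns
// the exact same object instance from the internal dictionary
using var tracked = store.IdentitySession();
var first = await tracked.LoadAsync<Order>(id);
var second = await tracked.LoadAsync<Order>(id);
// first and second are the same object reference

// Lightweight session: no tracking, so each load materializes
// a fresh object (and skips the dictionary bookkeeping)
using var lightweight = store.LightweightSession();
var a = await lightweight.LoadAsync<Order>(id);
var b = await lightweight.LoadAsync<Order>(id);
// a and b are equivalent but distinct objects
```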

On to the next thing…

Make Writes Faster with Quick Append

Next, since we again don’t have any existing code that can be broken here, let’s opt for “Quick Append” writes like so:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));

    // Make event writing faster, like 2X faster in our testing
    opts.Events.AppendMode = EventAppendMode.Quick;
})
    
// Jettison some "Identity Map" weight by going lighter weight    
.UseLightweightSessions();

This will help the system be able to append new events much faster, but at the cost of not being able to use some event metadata like event versions, sequence, or timestamp information within “Inline” projections.

Again, even though this option has been clocked as being much faster, we have not wanted to make it the default because it could break existing systems that depend on having the rich metadata during the Inline application of projections, which is exactly what forces Marten into a two step append process. This “Quick Append” option also helps reduce concurrent access problems when writing to streams, and generally makes the “Async Daemon” subsystem that processes asynchronous projections and subscriptions run much smoother.

We’re not out of tricks yet by any means, so let’s go on…

Use the Identity Map for Inline Aggregates

Wait, I thought you told me not to cross the streams! Yeah, about the Identity Map thing, there’s one exception where we actually do want that behavior within CQRS command handlers like this one using Wolverine and its “Aggregate Handler Workflow” integration with Marten:

    // This tells Wolverine that the first "return value" is NOT the response
    // body
    [EmptyResponse]
    [WolverinePost("/api/incidents/{incidentId:guid}/category")]
    public static IncidentCategorised Post(
        // the actual command
        CategoriseIncident command, 
        
        // Wolverine is generating code to look up the Incident aggregate
        // data for the event stream with this id
        [Aggregate("incidentId")] Incident incident)
    {
        // This is a simple case where we're just appending a single event to
        // the stream.
        return new IncidentCategorised(incident.Id, command.Category, command.CategorisedBy);
    }

In the case above, the Incident model is a projected document that’s first used by the command handler to “decide” what new events to emit. If we’re also updating the Incident model with an Inline projection that writes to the database at the same time the events are appended, then it’s a performance advantage to “just” reuse the original Incident model we loaded initially, fast forward its state based on the new events, and persist the result right then and there. We can opt into this optimization, even with the lightweight sessions we chose earlier, by adopting one more flag, UseIdentityMapForAggregates:

builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));

    // Make event writing faster, like 2X faster in our testing
    opts.Events.AppendMode = EventAppendMode.Quick;

    // This can cut down on the number of database round trips
    // Marten has to do during CQRS command handler execution
    opts.Events.UseIdentityMapForAggregates = true;
})
    
// Jettison some "Identity Map" weight by going lighter weight    
.UseLightweightSessions();

Note: this optimization can easily break code for folks who use some sort of stateful “Aggregate Root” approach where the state of the projected aggregate object might be mutated during the course of executing the command. As this has traditionally been a popular approach in Event Sourcing circles, we can’t make this a default option. If you instead make the projected aggregates like Incident immutable, or treat them as dumb data inputs to your command handlers with a more Functional Programming “Decider” approach, you can get away with the performance optimization.

And for what it’s worth, I strongly prefer and recommend the FP “Decider” approach to JasperFx Software clients, and I think folks using the older “Aggregate Root” approach tend to have more runtime bugs.

Moving on, let’s keep our database smaller…

Event Stream Archiving

By and large, you can improve system performance in almost any situation by keeping your database from growing too large, archiving or retiring obsolete information. Marten has first class support for “Archiving Event Streams,” where you effectively move event streams that only represent historical information, and are no longer really active, into an archived state.
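For reference, archiving a stream is a one line operation on the session (the streamId value here is assumed to identify an existing stream):

```csharp
// Mark a finished event stream as archived so it drops out of
// the "active" working set
session.Events.ArchiveStream(streamId);
await session.SaveChangesAsync();
```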

Moreover, we can divide our underlying PostgreSQL storage for events into “hot” and “cold” storage by utilizing PostgreSQL’s table partitioning support like this:

builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));

    // Make event writing faster, like 2X faster in our testing
    opts.Events.AppendMode = EventAppendMode.Quick;

    // This can cut down on the number of database round trips
    // Marten has to do during CQRS command handler execution
    opts.Events.UseIdentityMapForAggregates = true;

    // Let's leverage PostgreSQL table partitioning
    // to our advantage
    opts.Events.UseArchivedStreamPartitioning = true;
})
    
// Jettison some "Identity Map" weight by going lighter weight    
.UseLightweightSessions();

If you’re aggressive about marking event streams as archived, the PostgreSQL table partitioning can move archived event streams into a different table partition than our active event data. This essentially keeps the “active” event table storage relatively stable in size, and most operations will execute against this smaller table partition, while the archived data is still accessible if you explicitly opt into including it.

We added this feature in a 7.* minor point release, so it had to be opt in, and I think I was too hesitant to make it a default in 8.0, so it’s still opt in.

Stream Compacting

Beyond archiving event streams, maybe you just want to “compact” a longer event stream so you technically retain all the existing state, but further reduce the size of your active database storage. To that end, Marten 8.0 added Stream Compacting.

Distributing Asynchronous Projections

So far I’ve mostly been talking about projections running Inline, such that the projections are updated at the same time the events are captured. That’s sometimes applicable or desirable, but other times you’ll want to optimize the “write” operations by moving the updating of projected data to an async projection running in the background. But now let’s say that we have quite a few asynchronous projections and several subscriptions as well. In early versions of Marten, everything ran in a “Hot/Cold” mode where every known projection or subscription had to run on one single “leader” node. So even if you were running your application across a dozen or more nodes, only one could be executing all of the asynchronous projections and subscriptions.

That’s obviously a potential bottleneck, so Marten 7.0 by itself introduced some ability to spread projections and subscriptions over multiple nodes. If we introduce Wolverine into the mix, though, we can do quite a bit better by allowing Wolverine to distribute the asynchronous Marten work across our entire cluster, using the UseWolverineManagedEventSubscriptionDistribution option in the WolverineFx.Marten NuGet:

builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));

    // Make event writing faster, like 2X faster in our testing
    opts.Events.AppendMode = EventAppendMode.Quick;

    // This can cut down on the number of database round trips
    // Marten has to do during CQRS command handler execution
    opts.Events.UseIdentityMapForAggregates = true;

    // Let's leverage PostgreSQL table partitioning
    // to our advantage
    opts.Events.UseArchivedStreamPartitioning = true;
})
    
// Jettison some "Identity Map" weight by going lighter weight    
.UseLightweightSessions()

.IntegrateWithWolverine(opts =>
{
    opts.UseWolverineManagedEventSubscriptionDistribution = true;
});

Is there anything else for the future?

It never ends, and yes, there are still quite a few ideas in our product backlog to potentially improve performance and scalability of Marten’s Event Sourcing. Offhand, that includes looking at alternative, higher performance serializers and more options to parallelize asynchronous projections to squeeze out more throughput by sharing some data access across projections.

Summary

There are quite a few “opt in” features in Marten that will help your system perform better, but these features are “opt in” because they can be harmful if you’re not building around the assumptions they make about how your code works. The good news is that you’ll be better able to utilize these features if you follow the Critter Stack’s recommended practices: strive for shorter code stacks (i.e., how many jumps between methods and classes your code makes when receiving a system input like a message or HTTP request) so your code is easier to reason about anyway, and avoid mutating projected aggregate data outside of Marten.

Marten 8.0, Wolverine 4.0, and even Lamar 15.0 are out!

It’s a pretty big “Critter Stack” community release day today, as:

  1. Marten has its 8.0 release
  2. Wolverine got a 4.0 release
  3. Lamar, the spiritual successor to StructureMap, had a corresponding 15.0 release
  4. And underneath those tools, the new JasperFx & JasperFx.Events library went 1.0 and the supporting Weasel library that provides some low level functionality went 8.0

Before getting into the highlights, let me start by thanking the Critter Stack Core team for all their support, contributions to both the code and documentation, and for being a constant sounding board for me and source of ideas and advice:

Next, I’d like to thank our Critter Stack community for all the interest and the continuous help we get with suggestions, pull requests that improve the tools, and especially the folks who take the time to create actionable bug reports, because that’s half the battle of getting problems fixed. And while there are plenty of days when I wish there wasn’t a veritable pack of raptors prowling around the projects probing for weaknesses, I cannot overstate the importance of user and community feedback to an OSS project.

Alright, on to some highlights.

The big changes are that we consolidated several smaller shared libraries into one bigger shared JasperFx library, and also combined some smaller libraries like Marten.CommandLine, Weasel.CommandLine, and Lamar.Diagnostics into Marten, Weasel, and Lamar respectively. That’s hopefully going to help folks get to the command line utilities quicker and easier, and the Critter Stack tools do get some value out of those command line utilities.

We’ve now got a shared model to configure behavioral differences between “Development” and “Production” time for both Marten and Wolverine in one place, like this:

// These settings would apply to *both* Marten and Wolverine
// if you happen to be using both
builder.Services.CritterStackDefaults(x =>
{
    x.ServiceName = "MyService";
    x.TenantIdStyle = TenantIdStyle.ForceLowerCase;
    
    // You probably won't have to configure this often,
    // but if you do, this applies to both tools
    x.ApplicationAssembly = typeof(Program).Assembly;
    
    x.Production.GeneratedCodeMode = TypeLoadMode.Static;
    x.Production.ResourceAutoCreate = AutoCreate.None;

    // These are defaults, but showing for completeness
    x.Development.GeneratedCodeMode = TypeLoadMode.Dynamic;
    x.Development.ResourceAutoCreate = AutoCreate.CreateOrUpdate;
});

It might be a while before this pays off for us, but everything from the last couple paragraphs is also meant to speed up the development of additional Event Sourcing “Critter” tools to expand beyond PostgreSQL — not that we’re even slightly backing off our investment in the do-everything PostgreSQL database!

For Marten 8.0, we’ve done a lot to make projections easier to use with explicit code, and added a new Stream Compacting feature for yet more scalability.
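As a rough illustration only — the event and aggregate types here are invented, and the exact base class signature should be checked against the Marten 8 documentation — a single-stream projection written with explicit, conventional `Apply` methods looks something like this:

```csharp
// Hypothetical events and aggregate, purely for illustration
public record OrderCreated(Guid OrderId, string CustomerName);
public record OrderShipped(DateTimeOffset ShippedAt);

public class Order
{
    public Guid Id { get; set; }
    public string CustomerName { get; set; }
    public bool HasShipped { get; set; }
}

// Explicit Apply methods that mutate the aggregate per event type,
// instead of configuring the projection through lambdas
public class OrderProjection : SingleStreamProjection<Order, Guid>
{
    public void Apply(OrderCreated created, Order order)
        => order.CustomerName = created.CustomerName;

    public void Apply(OrderShipped shipped, Order order)
        => order.HasShipped = true;
}
```
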

For Wolverine 4.0, we’ve improved Wolverine’s ability to support modular monolith architectures that might utilize multiple Marten stores or EF Core DbContext services targeting the same database or even different databases. More on this soon.

Wolverine 4.0 also gets some big improvements for EF Core users with a new Multi-Tenancy with EF Core feature.

Both Wolverine and Marten got some streamlined Open Telemetry span naming changes that were suggested by Pascal Senn of ChiliCream who collaborates with JasperFx for a mutual client.

For both Wolverine and Lamar 15, we added fuller support for [FromKeyedServices] and “keyed services” in the .NET Core DI abstractions, like this for a Wolverine handler:

    // From a test, just showing that you *can* do this
    // *Not* saying you *should* do that very often
    public static void Handle(UseMultipleThings command, 
        [FromKeyedServices("Green")] IThing green,
        [FromKeyedServices("Red")] IThing red)
    {
        green.ShouldBeOfType<GreenThing>();
        red.ShouldBeOfType<RedThing>();
    }
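For context, the keyed services the handler above asks for would be registered through the standard .NET DI abstractions. A minimal sketch, where the `GreenThing` and `RedThing` names are carried over from the test snippet:

```csharp
// Standard Microsoft.Extensions.DependencyInjection keyed registrations
// that would back the [FromKeyedServices] parameters in the handler above
builder.Services.AddKeyedSingleton<IThing, GreenThing>("Green");
builder.Services.AddKeyedSingleton<IThing, RedThing>("Red");
```
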

And inside of Lamar itself, any constructor dependency decorated like this will be resolved by its key:

// Lamar will inject the IThing w/ the key "Red" here
public record ThingUser([FromKeyedServices("Red")] IThing Thing);

Granted, Lamar already had its own version of keyed services and even an equivalent to the [FromKeyedServices] attribute long before keyed services were added to the .NET DI abstractions and the conforming ServiceProvider container, but .NET is Microsoft’s world and lowly OSS projects pretty well have to conform to their abstractions sometimes.
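Putting the two pieces together, a hedged sketch of what resolving that record through a Lamar container might look like — the registrations here are assumptions layered on the earlier `ThingUser` and `RedThing` examples, not verbatim from the Lamar docs:

```csharp
// Lamar's ServiceRegistry implements IServiceCollection, so the standard
// keyed registration extensions should apply here
var container = new Container(services =>
{
    services.AddKeyedSingleton<IThing, RedThing>("Red");
    services.AddSingleton<ThingUser>();
});

// ThingUser's constructor parameter is marked [FromKeyedServices("Red")],
// so Lamar injects the "Red" IThing registration
var user = container.GetInstance<ThingUser>();
```
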

Just for the record, StructureMap had an equivalent to keyed services in its first production release way back in 2004 back when David Fowler was probably in middle school making googly eyes at Rihanna.

What’s Next for the Critter Stack?

Honestly, I had to cut some corners on documentation to get the releases out for a JasperFx Software client, so I’ll be focused on that for most of this week. And of course, plenty of open issues and some outstanding pull requests didn’t make the release, so those hopefully get addressed in the next couple minor releases.

For the bigger picture, I think the rest of this year is:

  1. “CritterWatch”, our long planned, not moving fast enough for my taste, management and observability console for both Marten and Wolverine.
  2. Improvements to Marten’s performance and scalability for Event Sourcing. We did a lot in that regard last year throughout Marten 7.*, but there’s another series of ideas to increase the throughput even further.
  3. Wolverine is getting a lot of user contributions right now, and I expect that especially the asynchronous messaging support will continue to grow. I would like to see us add CosmosDb support to Wolverine by the end of the year. By and large, I would like to increase Wolverine’s community usage overall by trying to grow the tool beyond just folks already using Marten — but the Marten + Wolverine combination will hopefully continue to improve.
  4. More Critters? We’re still talking about a SQL Server backed Event Store, with CosmosDb being a later alternative.

Wrapping Up

As for the wisdom of ever again making a release cycle where the entire Critter Stack has a major release at the exact same time, this:

Finally, a lot of things didn’t make the release that folks wanted, heck that I wanted, but at some point it becomes expensive for a project to have a long running branch for “vNext” and you have to make the release. I’m hopeful that even though these major releases didn’t add a ton of new functionality that they set us up with the right foundation for where the tools go next.

I also know that folks will have plenty of questions and will inevitably run into problems or confusion with the new releases — especially until we can catch up on documentation. I stole time from the family to get this stuff out this weekend, so on Monday I’ll probably only be able to respond to JasperFx customers. As always right after a big push, I promise to start responding to whatever problems folks hit, but:

Symbolically Important Wolverine 3.13.4 Release

We were able to publish the Wolverine 3.13.4 release this morning with a handful of important fixes: error retries in a modular monolith architecture, recovery from RabbitMQ connection interruptions, and fixes for Azure Service Bus, Kafka, and Amazon SQS.

The awesome part of this release was how much of it, including a huge fix from Hamed Sabzian, came from the community (e.g. “not me”). Even one of the issues I addressed only came with some significant help from users building reproduction projects. Another issue was reported by a JasperFx Software customer who we’re working with for some new multi-tenancy functionality.

Beyond just the symbolic show of community engagement and involvement with Wolverine, this release hopefully marks the end of new development with Wolverine 3.*. There’s now a maintenance branch for 3.0, but Wolverine’s main branch is now the forthcoming 4.0 release that should hit by Monday next week.

Thank you to all the contributors to this release and recent releases, and that absolutely includes folks who took the time to open actionable issues and create reproduction steps for those issues.