The “Critter Stack” had a huge 2024, and I listed off some of the highlights of the improvements we made in Critter Stack Year in Review for 2024. For 2025, we’ve reordered our priorities from what I was writing last summer. I think we might genuinely focus more on sample applications, tutorials, and videos early this year than on coding new features.
There’s also a separate post on JasperFx Software in 2025. Please do remember that JasperFx Software is available for ongoing support contracts for Marten and/or Wolverine and for consulting engagements to help you wring the most possible value out of the tools — or just to help you with any old server side .NET architecture you have.
Marten
At this point, I believe that Marten is far and away the most robust and most productive tooling for Event Sourcing in the .NET ecosystem. Moreover, if you believe NuGet download numbers, it’s also the most heavily used Event Sourcing tooling in .NET. I think most of the potential growth for Marten this year will simply be a result of developers hopefully being more open to using Event Sourcing as that technique becomes better known. I don’t have hard numbers to back this up, but my feeling is that Marten’s main competitor is shops choosing to roll their own Event Sourcing frameworks in house rather than any other specific tool.
I think we’re putting off the planned Marten 8.0 release for now. Instead, we’ll mostly be focused on dealing with whatever issues come up from our users and JasperFx clients with Marten 7 for the time being.
More sample applications and matching tutorials for Marten
Possibly adding a “Marten Events to EF Core” projection model?
Formal support for PostgreSQL PostGIS spatial data? I don’t know what that means yet though
When we’re able to reconsider Marten 8 this year, that will include:
A reorganization of the JasperFx building blocks to remove duplication between Marten, Wolverine, and other tools
Streamlining the Projection API
Yet more scalability and performance improvements to the async daemon. There are some potential features that we’re discussing with JasperFx clients that might drive this work
After the insane pace of Marten changes we made last year, I see Marten development and the torrid pace of releases (hopefully) slowing quite a bit in 2025.
Wolverine
Wolverine doesn’t yet have anywhere near the usage of Marten and exists in a much more crowded tooling space to boot. I’m hopeful that we can greatly increase Wolverine usage in 2025 by further differentiating it from its competitor tools by focusing on how Wolverine allows teams to write backend systems with much lower ceremony code without sacrificing testability, robustness, or maintainability.
We’re shelving any thoughts about a Wolverine 4.0 release early this year, but that’s opened the flood gates for planned enhancements to Wolverine 3.*:
Wolverine 3.6 is heavily in flight for release this month, and will be a pretty large release bringing some needed improvements for Wolverine within “Modular Monolith” usage, yet more special sauce for low ceremony “Vertical Slice Architecture” usage, enhancements to the “aggregate handler workflow” integration with Marten, and improved EF Core integration
Multi-Tenancy support for EF Core in line with what Wolverine can already do with its Marten integration
CosmosDb integration for Transactional Inbox/Outbox support, saga storage, transactional middleware
More options for runtime message routing
Authoring more sample applications to show off how Wolverine allows for a different coding model than other messaging or mediator or HTTP endpoint tools
I think there’s a lot of untapped potential for Wolverine, and I’ll personally be focused on growing its usage in the community this year. I’m hoping the better EF Core integration, having more database options, and maybe even yet more messaging options can help us grow.
I honestly don’t know what is going to happen with Wolverine & Aspire. Aspire doesn’t really play nicely with frameworks like Wolverine right now, and I think it would take custom Wolverine/Aspire adapter libraries to get a truly good experience. My strong preference right now is to just use Docker Compose for local development, but it’s Microsoft’s world and folks like me building OSS tools just have to live in it.
Ermine & Other New Critters
Sigh, “Ermine” is the code name for a long planned port of Marten’s event sourcing functionality to Sql Server. I would still love to see this happen in 2025, but it’s going to be pushed off for a little bit. With plenty of input from other Marten contributors, I’ve done some preliminary work to centralize much of Marten’s event sourcing internals into a potentially shared assembly.
We’ve also at least considered extending Marten’s style of event sourcing to other databases, with CosmosDb, RavenDb, DynamoDb, SQLite, and Oracle (people still use it apparently) being kicked around as options.
“Critter Watch”
This is really a JasperFx Software initiative to create a commercial tool that will be a dedicated management portal and performance monitoring tool (meant to be used in conjunction with Grafana/Prometheus/et al) for the “Critter Stack”. I’ll share concrete details of this when there are some, but Babu & I plan to be working in earnest on “Critter Watch” in the 1st quarter.
Note about Blogging
I’m planning to blog much less in the coming year and focus more on writing robust tutorials and samples within the technical documentation sites, and on finally joining the modern world by moving to YouTube or Twitch video content creation.
While there’s still just a handful of technical deliverables I’m trying to get out in this calendar year, I’m admittedly running on mental fumes rolling into the holiday season. Thinking back about how much was delivered for the “Critter Stack” (Marten, Weasel, and Wolverine) this year is making me feel a lot better about giving myself some mental recharge time during the holidays. Happily for me, most of the advances in the Critter Stack this year were either from the community (i.e., not me) or done in collaboration and with the sponsorship of JasperFx Software customers for their systems.
Marten 7.0 brought a new “partial update” model based on native PostgreSQL functions that no longer required the PLv8 add on. Hat tip to Babu Annamalai for that feature!
The basic database execution pipeline underneath Marten was largely rewritten to be far more parsimonious with how it uses database connections and to take advantage of more efficient Npgsql usage. That included using the very latest improvements to Npgsql for batching queries and moving to positional parameters instead of named parameters. Small ball optimizations for sure, but being more careful with connections has been advantageous.
Marten’s “quick append” model sacrifices a little bit of metadata tracking for a whole lot of throughput improvement (we’ve measured a 50% improvement) when appending events. This mode will be the default in Marten 8. This also helps stabilize “event skipping” in the async daemon under heavy loads. I think this was a big win that we need to broadcast more.
Random optimizations in the “inline projection” model in Marten to reduce database round trips
Performance optimizations for CQRS command handlers where you want to fetch the final state of a projected aggregate that has been “advanced” as part of the command handler. Mostly in Marten, but there’s a helper in Wolverine too.
Marten’s async daemon feature for running asynchronous projections was rewritten in Marten 7.0 with some throughput improvements and a little better ability to spread work across a clustered application
Wolverine 3.0 brought a full rewrite of its leader election system that seems to have made a huge improvement in its ability to deal with stale nodes and failover. Much to my relief.
Marten got a big feature to allow for dynamic addition of tenant databases as part of its multi-tenancy through separate databases model. Wolverine follows suit, and is able to spin up a transactional inbox/outbox for dynamically registered tenant databases at runtime with no downtime.
The PostgreSQL backed messaging transport can be “per tenant” for multi-tenancy
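For anyone wanting to try the “quick append” mode mentioned above before it becomes the default, it’s a one-line opt-in. Here’s a sketch assuming the Marten 7 configuration API (the connection string name is a placeholder; check the Marten event store docs for the exact option in your version):

```csharp
using Marten;
using Marten.Events;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder();

builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("postgres"));

    // Trade a little bit of event metadata tracking for much higher
    // append throughput ("quick append"), slated to be the default
    // behavior in Marten 8
    opts.Events.AppendMode = EventAppendMode.Quick;
});
```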
Complex Workflows
I’m probably way too sloppy or at least not being precise about the differences between stateful sagas and process managers and tend to call any stateful, long lived workflow a “saga”. I’m not losing any sleep over that.
Marten 7.0 brought a near rewrite of Marten’s LINQ subsystem that closed a lot of gaps in functionality that we previously had. It also spawned plenty of regression bugs that we’ve had to address in the meantime, but the frequency of LINQ related issues has dramatically fallen.
“Sticky” message listeners so that only one node in a cluster listens to a certain messaging endpoint. This is super helpful for processes that are stateful. This also helps for multi-tenancy.
Wolverine got a GCP Pubsub transport
And we finally released the Pulsar transport
Way more options for Rabbit MQ conventional message routing
Rabbit MQ header exchange support
Test Automation Support
Hey, the “Critter Stack” community takes testability, test automation, and TDD very seriously. To that end, we’ve invested a lot into test automation helpers this year.
Quite a few random little extension methods on IHost here and there for test automation
Strong Typed Identifiers
Despite all my griping along the way and frankly threatening bodily harm to the authors of some of the most popular libraries for strong typed identifiers, Marten has gotten a lot of first class support for strong typed identifiers in both the document database and event store features. There will surely be more to come because it’s a permutation hell problem where people stumble into yet more scenarios with these damn things.
But whatever, we finally have it. And quite a bit of the most time consuming parts of that work has been de facto paid for by JasperFx clients, which takes a lot of the salt out of the wound for me!
Modular Monolith Usage
This is going to be a major area of improvement for Wolverine here at the tail end of the year because suddenly everybody and their little brother wants to use this architectural pattern in ways that aren’t yet great with Wolverine.
There were actually quite a few more refinements made to both tools, but I’ve exhausted the time I allotted myself to write this, so let’s wrap up.
Summary
Last January I wrote that an aspiration for 2024 was to:
Continue to push Marten & Wolverine to be the best possible technical platform for building event driven architectures
At this point I believe that the “Critter Stack” is already the best set of technical tooling in the .NET ecosystem for building a system using an Event Driven Architecture, especially if Event Sourcing is a significant part of your persistence strategy. There are other messaging frameworks that have more messaging options, but Wolverine already does vastly more to help you productively write code that’s testable, resilient, easier to reason about, and well instrumented than older messaging tools in the .NET space. Likewise, Wolverine.HTTP is the lowest ceremony coding model for ASP.NET Core web service development, and the only one that has a first class transactional outbox integration. In terms of just Event Sourcing, I do not believe that Marten has any technical peer in the .NET ecosystem.
But of course there are plenty of things we can do better, and we’re not standing still in 2025 by any means. After some rest, I’ll pop back in January with some aspirations and theoretical roadmap for the “Critter Stack” in 2025. Details then, but expect that to include more database options and yes, long simmering plans for commercialization. And the overarching technical goal in 2025 for the “Critter Stack” is to be the best technical platform on the planet for Event Driven Architecture development.
JasperFx Software is completely open for business to help you get the best possible results with the “Critter Stack” tools or really any type of server side .NET development efforts. A lot of what I’m writing about is inspired by work we’ve done with our ongoing clients.
I think I’m at the point where I believe and will say that leaning on asynchronous messaging is the best way to create truly resilient back end systems. And by “resilient” here, I mean the system is best able to recover from runtime errors, performance degradation, or even subsystems being down, and still function without human intervention. A system incorporating asynchronous messaging and at least some communication through queues can apply retry policies for errors and utilize patterns like circuit breakers or dead letter queues to avoid losing in flight work.
There’s more to this of course, like:
Being able to make finer grained error handling policies around individual steps
Dead letter queues and replay of messages
Not having “temporal coupling” between systems or subsystems
Back pressure mechanics
Maybe even being able to better reason about the logical processing steps in an asynchronous model with formal messaging, as opposed to just really deep call stacks in purely synchronous code
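To make the retry and dead letter ideas above a little more concrete, here’s a rough sketch of Wolverine’s error handling policies. The particular exception types and policy chain are just an illustration; see the Wolverine error handling documentation for the full menu of options:

```csharp
using System;
using Microsoft.Extensions.Hosting;
using Wolverine;

var builder = Host.CreateApplicationBuilder();

builder.UseWolverine(opts =>
{
    // Retry failures that are probably transient a few times...
    opts.OnException<TimeoutException>().RetryTimes(3);

    // ...but move clearly poisonous messages straight to the dead
    // letter queue where they can be inspected and replayed later
    opts.OnException<InvalidOperationException>().MoveToErrorQueue();
});
```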
Wolverine certainly comes with a full range of messaging options and error handling options for resiliency, but a key feature that does lead to Wolverine adoption is its support for the transactional outbox (and inbox) pattern.
What’s the Transactional Outbox all about?
The transactional outbox pattern is an important part of your design pattern toolkit for almost any type of backend system that involves both database persistence and asynchronous work or asynchronous messaging. If you’re not already familiar with the pattern, just consider this message handler (using Wolverine) from a banking system that uses both Wolverine’s transactional middleware and transactional outbox integration (with Marten and PostgreSQL):
public static Task<Account> LoadAsync(IDocumentSession session, DebitAccount command)
=> session.LoadAsync<Account>(command.AccountId);
[Transactional]
public static async Task Handle(
DebitAccount command,
Account account,
IDocumentSession session,
IMessageContext messaging)
{
account.Balance -= command.Amount;
// This just marks the account as changed, but
// doesn't actually commit changes to the database
// yet. That actually matters as I hopefully explain
session.Store(account);
// Conditionally trigger other, cascading messages
if (account.Balance > 0 && account.Balance < account.MinimumThreshold)
{
await messaging.SendAsync(new LowBalanceDetected(account.Id));
}
else if (account.Balance < 0)
{
await messaging.SendAsync(new AccountOverdrawn(account.Id), new DeliveryOptions{DeliverWithin = 1.Hours()});
// Give the customer 10 days to deal with the overdrawn account
await messaging.ScheduleAsync(new EnforceAccountOverdrawnDeadline(account.Id), 10.Days());
}
// "messaging" is a Wolverine IMessageContext or IMessageBus service
// Apply the "deliver within" rule to individual messages
await messaging.SendAsync(new AccountUpdated(account.Id, account.Balance),
new DeliveryOptions { DeliverWithin = 5.Seconds() });
}
You’ll notice up above that the handler both:
Modifies a banking account based on the command and persists those changes to the database
Potentially sends out messages in regard to that account
What the “outbox” is doing for us around this message handler is guaranteeing that:
The outgoing messages I registered with the IMessageBus service above are only actually sent to messaging brokers or local queues after the database transaction succeeds. Think of the messaging outbox as queueing the outgoing messages as part of your unit of work (which is really implemented by the Marten IDocumentSession up above).
The outgoing messages are actually persisted to the same database as the account data as part of a native database transaction
As part of a background process, the Wolverine outbox subsystem will make sure the message gets recovered and sent even if — and I hate to tell you, but this absolutely does happen in the real world — the running process somehow shuts down unexpectedly between the database transaction succeeding and the messages actually getting successfully sent through local Wolverine queues or remotely sent through messaging brokers like Rabbit MQ or Azure Service Bus.
Also as part of the background processing, Wolverine’s outbox makes sure that persisted, outgoing messages really do get sent out eventually in the case of the messaging broker being temporarily unavailable or network issues — and this is 100% something that actually happens in production, so the ability to recover messages is an awfully important feature for building robust systems.
To sum things up, a good implementation of the transactional outbox pattern can be a great way to make your system more resilient and “self heal” in the face of inevitable problems in production. As important, the usage of a transactional outbox can do a lot to prevent subtle race condition bugs at runtime from messages getting processed against inconsistent database state before database transactions have completed — and folks, this also absolutely happens in real systems. Ask me how I know :-)
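Those guarantees are easy to see in miniature. Here’s a deliberately naive, in-memory sketch of my own (not Wolverine’s actual implementation — the real outbox persists messages to the application database) showing messages being held back until the unit of work commits:

```csharp
using System;
using System.Collections.Generic;

// A tiny illustration of the outbox ordering guarantee: outgoing
// messages are buffered alongside the unit of work and only released
// to the sender after the "transaction" commits successfully
public class ToyOutboxSession
{
    private readonly List<object> _pendingWrites = new();
    private readonly List<object> _pendingMessages = new();
    private readonly Action<object> _send;

    // Stand-in for committed database state
    public List<object> Database { get; } = new();

    public ToyOutboxSession(Action<object> send) => _send = send;

    // Queue an entity write -- nothing hits the "database" yet
    public void Store(object entity) => _pendingWrites.Add(entity);

    // Queue an outgoing message -- nothing is sent yet
    public void Send(object message) => _pendingMessages.Add(message);

    public void SaveChanges(bool simulateFailure = false)
    {
        if (simulateFailure)
            throw new InvalidOperationException("transaction rolled back");

        // "Commit" entity writes and messages together, and only then
        // release the messages to the sending agent
        Database.AddRange(_pendingWrites);
        _pendingWrites.Clear();

        foreach (var message in _pendingMessages) _send(message);
        _pendingMessages.Clear();
    }
}
```

If SaveChanges throws, nothing was ever handed to the sender, so there is no window where a message describes database state that never got committed.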
Alright, now that we’ve established what it is, let’s look at some ways in which Wolverine makes its transactional outbox easy to adopt and use. We’ll show a simpler version of the message handler above, but first we have to introduce a few more Wolverine concepts.
Setting up the Outbox in Wolverine
If you are using the full “Critter Stack” combination of Marten + Wolverine, you just add both Marten & Wolverine to your application and tie them together with the IntegrateWithWolverine() call from the WolverineFx.Marten NuGet package as shown below:
var builder = WebApplication.CreateBuilder(args);
// Adds in some command line diagnostics
builder.Host.ApplyOaktonExtensions();
builder.Services.AddAuthentication("Test");
builder.Services.AddAuthorization();
builder.Services.AddMarten(opts =>
{
// You always have to tell Marten what the connection string to the underlying
// PostgreSQL database is, but this is the only mandatory piece of
// configuration
var connectionString = builder.Configuration.GetConnectionString("postgres");
opts.Connection(connectionString);
})
// This adds middleware support for Marten as well as the
// transactional middleware support we'll introduce in a little bit...
.IntegrateWithWolverine();
builder.Host.UseWolverine();
That does of course require some PostgreSQL tables for the Wolverine outbox storage to function, but Wolverine in this case is able to pull the connection and schema information (the schema can be overridden if you choose) from its Marten integration. In normal development mode, Wolverine — like Marten — is able to apply database migrations itself on the fly so you can just work. If you’re using EF Core with SQL Server instead, you wire up Wolverine’s message storage and the EF Core integration explicitly:
var builder = WebApplication.CreateBuilder(args);
// Just the normal work to get the connection string out of
// application configuration
var connectionString = builder.Configuration.GetConnectionString("sqlserver");
// If you're okay with this, this will register the DbContext as normally,
// but make some Wolverine specific optimizations at the same time
builder.Services.AddDbContextWithWolverineIntegration<ItemsDbContext>(
x => x.UseSqlServer(connectionString), "wolverine");
// Add DbContext that is not integrated with outbox
builder.Services.AddDbContext<ItemsDbContextWithoutOutbox>(
x => x.UseSqlServer(connectionString));
builder.Host.UseWolverine(opts =>
{
// Setting up Sql Server-backed message storage
// This requires a reference to Wolverine.SqlServer
opts.PersistMessagesWithSqlServer(connectionString, "wolverine");
// Set up Entity Framework Core as the support
// for Wolverine's transactional middleware
opts.UseEntityFrameworkCoreTransactions();
// Enrolling all local queues into the
// durable inbox/outbox processing
opts.Policies.UseDurableLocalQueues();
});
Likewise, Wolverine is able to build the necessary schema objects for SQL Server on application startup so that the outbox integration “just works” in local development or testing environments. I should note that in all cases, Wolverine provides command line tools to export SQL scripts for these schema objects that you could use within database migration tools like Grate.
Outbox Usage within Message Handlers
Honestly, just to show a lower ceremony version of a Wolverine handler, let’s take the message handler from up above and use Wolverine’s “cascading message” capability to express the same logic for choosing which messages to send out, as well as expressing the database operation.
Before I show the handler, let me call out a couple things first:
Wolverine has an “auto transaction” middleware policy you can opt into to apply transaction handling for Marten, EF Core, or RavenDb around your handler code. This is helpful to keep your handler code simpler and often to allow you to write synchronous code
The “outbox” sending kicks in with any messages sent to an endpoint (local queue, Rabbit MQ exchange, AWS SQS queue, Kafka topic) that is configured as “durable” in Wolverine. You can read more about the Wolverine routing here. Do know though that within any application or even within a single handler, you can mix and match durable routes with “fire and forget” endpoints as desired.
There’s another concept in Wolverine called “side effects” that I’m going to use just to say “I want this document stored as part of this logical transaction.” It’s yet another thing in Wolverine’s bag of tricks to help you write pure functions for message handlers as a way to maximize the testability of your application code.
public static class DebitAccountHandler
{
public static Task<Account> LoadAsync(IDocumentSession session, DebitAccount command)
=> session.LoadAsync<Account>(command.AccountId);
public static (IMartenOp, OutgoingMessages) Handle(
DebitAccount command,
Account account)
{
account.Balance -= command.Amount;
// This just tracks outgoing, or "cascading" messages
var messages = new OutgoingMessages();
// Conditionally trigger other, cascading messages
if (account.Balance > 0 && account.Balance < account.MinimumThreshold)
{
messages.Add(new LowBalanceDetected(account.Id));
}
else if (account.Balance < 0)
{
messages.Add(new AccountOverdrawn(account.Id), new DeliveryOptions{DeliverWithin = 1.Hours()});
// Give the customer 10 days to deal with the overdrawn account
messages.Delay(new EnforceAccountOverdrawnDeadline(account.Id), 10.Days());
}
// Apply the "deliver within" rule to individual messages
messages.Add(new AccountUpdated(account.Id, account.Balance),
new DeliveryOptions { DeliverWithin = 5.Seconds() });
return (MartenOps.Store(account), messages);
}
}
When Wolverine executes the DebitAccount command, it’s trying to commit a single database transaction with the contents of the Account entity being persisted and any outgoing messages in that OutgoingMessages collection that are routed to a durable Wolverine endpoint. When the transaction succeeds, Wolverine “releases” the outgoing messages to the sending agents within the application, and the persisted message data gets deleted from the database when Wolverine is able to successfully send each message.
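The “durable endpoint” opt-in mentioned above is per endpoint configuration. Here’s a sketch assuming the Rabbit MQ transport; the queue names and the AccountUpdated message type are just placeholders for the example:

```csharp
using System;
using Microsoft.Extensions.Hosting;
using Wolverine;
using Wolverine.RabbitMQ;

var builder = Host.CreateApplicationBuilder();

builder.UseWolverine(opts =>
{
    opts.UseRabbitMq();

    // Messages routed to this endpoint flow through the durable outbox
    opts.PublishMessage<AccountUpdated>()
        .ToRabbitQueue("account-updates")
        .UseDurableOutbox();

    // Messages received here are persisted on receipt ("durable inbox")
    opts.ListenToRabbitQueue("debits")
        .UseDurableInbox();

    // Or make every local queue durable in one shot
    opts.Policies.UseDurableLocalQueues();
});

// Placeholder message type for the sample
public record AccountUpdated(Guid AccountId, decimal Balance);
```

Within the same application you can freely mix durable endpoints like these with “fire and forget” endpoints that skip the outbox entirely.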
Outbox Usage within MVC Core Controllers
Like all messaging frameworks in the .NET space that I’m aware of, the transactional outbox mechanics are pretty well transparent to message handler code. More recently though, the .NET ecosystem has (finally) caught up with the need to expose transactional outbox mechanics outside of a message handler.
A very common use case is needing to both make database writes and trigger asynchronous work through messages from HTTP web services. For this example, let’s assume the usage of MVC Core Controller classes, but the mechanics I’m showing are similar for Minimal API or other alternative endpoint models in the ASP.NET Core ecosystem.
Assuming the usage of Marten + Wolverine, you can send messages with an outbox through the IMartenOutbox service that somewhat wraps the two tools together like this:
[HttpPost("/orders/itemready")]
public async Task Post(
[FromBody] MarkItemReady command,
[FromServices] IDocumentSession session,
[FromServices] IMartenOutbox outbox
)
{
// This is important!
outbox.Enroll(session);
// Fetch the current value of the Order aggregate
var stream = await session
.Events
// We're also opting into Marten optimistic concurrency checks here
.FetchForWriting<Order>(command.OrderId, command.Version);
var order = stream.Aggregate;
if (order.Items.TryGetValue(command.ItemName, out var item))
{
item.Ready = true;
// Mark that this item is ready
stream.AppendOne(new ItemReady(command.ItemName));
}
else
{
// Some crude validation
throw new InvalidOperationException($"Item {command.ItemName} does not exist in this order");
}
// If the order is ready to ship, also emit an OrderReady event
if (order.IsReadyToShip())
{
// Publish a cascading command to do whatever it takes
// to actually ship the order
// Note that because the context here is enrolled in a Wolverine
// outbox, the message is registered, but not "released" to
// be sent out until SaveChangesAsync() is called down below
await outbox.PublishAsync(new ShipOrder(command.OrderId));
stream.AppendOne(new OrderReady());
}
// This will also persist and flush out any outgoing messages
// registered into the context outbox
await session.SaveChangesAsync();
}
With EF Core + Wolverine, it’s similar, but just a touch more ceremony using IDbContextOutbox<T> as a convenience wrapper around an EF Core DbContext:
[HttpPost("/items/create2")]
public async Task Post(
[FromBody] CreateItemCommand command,
[FromServices] IDbContextOutbox<ItemsDbContext> outbox)
{
// Create a new Item entity
var item = new Item
{
Name = command.Name
};
// Add the item to the current
// DbContext unit of work
outbox.DbContext.Items.Add(item);
// Publish a message to take action on the new item
// in a background thread
await outbox.PublishAsync(new ItemCreated
{
Id = item.Id
});
// Commit all changes and flush persisted messages
// to the persistent outbox
// in the correct order
await outbox.SaveChangesAndFlushMessagesAsync();
}
I personally think the usage of the outbox outside of Wolverine message handlers is a little bit more awkward than I’d ideally prefer (I also feel this way about the NServiceBus or MassTransit equivalents of this usage, but it’s nice that both of those tools do have this important functionality too), so let’s introduce Wolverine’s HTTP endpoint model to write lower ceremony code while still opting into outbox mechanics from web services.
Outbox Usage within Wolverine HTTP
This is beyond annoying, but the libraries and namespaces in Wolverine are all named “Wolverine.*”, while the NuGet packages are named “WolverineFx.*” because some clown is squatting on the “Wolverine” name in NuGet and we didn’t realize that until it was too late and we’d committed to the project name. Grr.
Wolverine also has an add on model in the WolverineFx.Http NuGet package that allows you to use the basics of the Wolverine runtime execution model for HTTP services. One of the advantages of Wolverine.HTTP endpoints is the same kind of pure function model as the message handlers, which I believe to be a much lower ceremony programming model than MVC Core or even Minimal API.
Maybe more valuable though, Wolverine.HTTP endpoints support the exact same transactional middleware and outbox integration as the message handlers. That also allows us to use “cascading messages” to publish messages out of our HTTP endpoint handlers without having to deal with asynchronous code or injecting IoC services. Just plain old pure functions in many cases like so:
public static class TodoCreationEndpoint
{
[WolverinePost("/todoitems")]
public static (TodoCreationResponse, TodoCreated) Post(CreateTodo command, IDocumentSession session)
{
var todo = new Todo { Name = command.Name };
// Just telling Marten that there's a new entity to persist,
// but I'm assuming that the transactional middleware in Wolverine is
// handling the asynchronous persistence outside of this handler
session.Store(todo);
// By Wolverine.Http conventions, the first "return value" is always
// assumed to be the Http response, and any subsequent values are
// handled independently
return (
new TodoCreationResponse(todo.Id),
new TodoCreated(todo.Id),
);
}
}
The Wolverine.HTTP model gives us a way to build HTTP endpoints with Wolverine’s typical, low ceremony coding model (most of the OpenAPI metadata can be gleaned from the method signatures of the endpoints, further obviating the need for repetitive ceremony code that so frequently litters MVC Core code) with easy usage of Wolverine’s transactional outbox.
I should also point out that even if you aren’t using any kind of message storage or durable endpoints, Wolverine will not actually send messages until any database transaction has completed successfully. Think of this as a non-durable, in memory outbox built into your HTTP endpoints.
Summary
The transactional outbox pattern is a valuable tool for helping create resilient systems, and Wolverine makes it easy to use within your system code. I’m frequently working with clients who aren’t utilizing a transactional outbox even when they’re using asynchronous work or trying to cascade work as “domain events” published from other transactions. It’s something I always call out when I see it, but it’s frequently hard to introduce all new infrastructure in existing projects or within tight timelines — and let’s be honest, timelines are always tight.
I think my advice is to be aware of this need upfront when you are picking out the technologies you’re going to use as the foundation for your architecture. To be blunt, I think a lot of shops are naively opting into MediatR as a core tool without realizing the important functionality it is completely missing for building a resilient system — like a transactional outbox. You can, and many people do, complement MediatR with a real messaging tool like MassTransit.
Instead, you could just use Wolverine, which does both the “mediator” role and asynchronous messaging with one programming model for handlers, and does so with a potentially lower ceremony and higher productivity coding model than any of those other tools in .NET.
The new feature shown in this post was built by JasperFx Software as part of a client engagement. This is exactly the kind of novel or challenging issue we frequently help our clients solve. If there’s something in your shop’s ongoing efforts where you could use some extra technical help, reach out to sales@jasperfx.net and we’ll be happy to talk with you.
Wolverine 3.4 was released today with a large new feature for multi-tenancy through asynchronous messaging. This feature set was envisioned for usage in an IoT system using the full “Critter Stack” (Marten and Wolverine) where “our system” is centralized in the cloud, but has to communicate asynchronously with physical devices deployed at different client sites:
The system in question already uses Marten’s support for separating per tenant information into separate PostgreSQL databases. Wolverine itself works with Marten’s multi-tenancy to make that a seamless process within Wolverine messaging workflows. All of that already quite robust support was envisioned to run within either HTTP web services or asynchronous messaging workflows completely controlled by the deployed application and its peer services. What’s new in Wolverine 3.4 is the ability to isolate the communication between remote client (tenant) devices and the centralized, cloud deployed “our system.”
We can isolate the traffic between each client site and our system first by using a separate Rabbit MQ broker or at least a separate virtual host per tenant as implied in the code sample from the docs below:
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
// At this point, you still have to have a *default* broker connection to be used for
// messaging.
opts.UseRabbitMq(new Uri(builder.Configuration.GetConnectionString("main")))
// This will be respected across *all* the tenant specific
// virtual hosts and separate broker connections
.AutoProvision()
// This is the default, if there is no tenant id on an outgoing message,
// use the default broker
.TenantIdBehavior(TenantedIdBehavior.FallbackToDefault)
// Or tell Wolverine instead to just quietly ignore messages sent
// to unrecognized tenant ids
.TenantIdBehavior(TenantedIdBehavior.IgnoreUnknownTenants)
// Or be draconian and make Wolverine assert and throw an exception
// if an outgoing message does not have a tenant id
.TenantIdBehavior(TenantedIdBehavior.TenantIdRequired)
// Add specific tenants for separate virtual host names
// on the same broker as the default connection
.AddTenant("one", "vh1")
.AddTenant("two", "vh2")
.AddTenant("three", "vh3")
// Or, you can add a broker connection to something completely
// different for a tenant
.AddTenant("four", new Uri(builder.Configuration.GetConnectionString("rabbit_four")));
// This Wolverine application would be listening to a queue
// named "incoming" on all virtual hosts and/or tenant specific message
// brokers
opts.ListenToRabbitQueue("incoming");
opts.ListenToRabbitQueue("incoming_global")
// This opts this queue out from being per-tenant, such that
// there will only be the single "incoming_global" queue for the default
// broker connection
.GlobalListener();
// More on this in the docs....
opts.PublishMessage<Message1>()
.ToRabbitQueue("outgoing").GlobalSender();
});
With this solution, we now have a “global” Rabbit MQ broker we can use for all internal communication or queueing within “our system”, and a separate Rabbit MQ virtual host for each tenant. At runtime, when a message tagged with a tenant id is published out of “our system” to a “per tenant” queue or exchange, Wolverine is able to route it to the correct virtual host for that tenant id. Likewise, Wolverine is listening to the queue named “incoming” on each virtual host (plus the global one), and automatically tags messages coming from the per tenant virtual host queues with the correct tenant id to facilitate the full Marten/Wolverine workflow downstream as the incoming messages are handled.
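As a concrete illustration of that tagging, here’s a minimal, hedged sketch of publishing a tenant-tagged message. The DeviceCommand message type and sender class are made up for illustration; the DeliveryOptions.TenantId usage follows Wolverine’s documented API:

```csharp
using Wolverine;

// Hypothetical message type for illustration only
public record DeviceCommand(Guid DeviceId);

public static class DeviceCommandSender
{
    public static async Task SendToTenantAsync(IMessageBus bus)
    {
        // Explicitly tag the outgoing message with a tenant id so that
        // Wolverine routes it to tenant "one"'s virtual host or broker
        // connection per the registrations shown above
        await bus.PublishAsync(
            new DeviceCommand(Guid.NewGuid()),
            new DeliveryOptions { TenantId = "one" });
    }
}
```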
Now, let’s switch it up and use Azure Service Bus instead to basically do the same thing. This time though, we can register additional tenants to use a separate Azure Service Bus fully qualified namespace or connection string:
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
// One way or another, you're probably pulling the Azure Service Bus
// connection string out of configuration
var azureServiceBusConnectionString = builder
.Configuration
.GetConnectionString("azure-service-bus");
// Connect to the broker in the simplest possible way
opts.UseAzureServiceBus(azureServiceBusConnectionString)
// This is the default, if there is no tenant id on an outgoing message,
// use the default broker
.TenantIdBehavior(TenantedIdBehavior.FallbackToDefault)
// Or tell Wolverine instead to just quietly ignore messages sent
// to unrecognized tenant ids
.TenantIdBehavior(TenantedIdBehavior.IgnoreUnknownTenants)
// Or be draconian and make Wolverine assert and throw an exception
// if an outgoing message does not have a tenant id
.TenantIdBehavior(TenantedIdBehavior.TenantIdRequired)
// Add new tenants by registering the tenant id and a separate fully qualified namespace
// to a different Azure Service Bus connection
.AddTenantByNamespace("one", builder.Configuration.GetValue<string>("asb_ns_one"))
.AddTenantByNamespace("two", builder.Configuration.GetValue<string>("asb_ns_two"))
.AddTenantByNamespace("three", builder.Configuration.GetValue<string>("asb_ns_three"))
// OR, instead, add tenants by registering the tenant id and a separate connection string
// to a different Azure Service Bus connection
.AddTenantByConnectionString("four", builder.Configuration.GetConnectionString("asb_four"))
.AddTenantByConnectionString("five", builder.Configuration.GetConnectionString("asb_five"))
.AddTenantByConnectionString("six", builder.Configuration.GetConnectionString("asb_six"));
// This Wolverine application would be listening to a queue
// named "incoming" on all Azure Service Bus connections, including the default
opts.ListenToAzureServiceBusQueue("incoming");
// This Wolverine application would listen to a single queue
// at the default connection regardless of tenant
opts.ListenToAzureServiceBusQueue("incoming_global")
.GlobalListener();
// Likewise, you can override the queue, subscription, and topic behavior
// to be "global" for all tenants with this syntax:
opts.PublishMessage<Message1>()
.ToAzureServiceBusQueue("message1")
.GlobalSender();
opts.PublishMessage<Message2>()
.ToAzureServiceBusTopic("message2")
.GlobalSender();
});
This is a lot to take in, but the major point is to keep client messages completely separate from each other while also enabling the seamless usage of multi-tenanted workflows all the way through the Wolverine & Marten pipeline. As we deal with the inevitable teething pains, the hope is that the behavioral code within the Wolverine message handlers never has to be concerned with any kind of per-tenant bookkeeping. For more information, see:
And as I typed all of that out, I do fully realize that there would be some value in having a comprehensive “Multi-Tenancy with the Critter Stack” guide in one place.
Summary
I honestly don’t know if this feature set will get a lot of usage, but it came out of what’s been a very productive collaboration with JasperFx’s original customer as we’ve worked together on their IoT system. Quite a few improvements to Wolverine have come about as a direct reaction to friction or opportunities that we’ve spotted during our collaboration.
As far as multi-tenancy goes, I think the challenge for the Critter Stack toolset has been to give our users all the power they need to keep data, and now messaging, completely separate across tenants while relentlessly removing repetitive code ceremony and usability issues. My personal philosophy is that lower ceremony code is an important enabler of successful software development efforts over time.
“Lightweight” just meaning “it doesn’t have a lot of features yet”
To get started, first add this Nuget to your system:
dotnet add package WolverineFx.Pulsar
And just like that, you’re ready to start adding publishing rules and subscriptions to Pulsar topics in a very idiomatic Wolverine way:
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
opts.UsePulsar(c =>
{
var pulsarUri = builder.Configuration.GetValue<Uri>("pulsar");
c.ServiceUrl(pulsarUri);
// Any other configuration you want to apply to your
// Pulsar client
});
// Publish messages to a particular Pulsar topic
opts.PublishMessage<Message1>()
.ToPulsarTopic("persistent://public/default/one")
// And all the normal Wolverine options...
.SendInline();
// Listen for incoming messages from a Pulsar topic
opts.ListenToPulsarTopic("persistent://public/default/two")
// And all the normal Wolverine options...
.Sequential();
});
It’s a minimal implementation for right now (no conventional routing topology, for example), but we’ll happily enhance this transport option if there’s interest. To be honest, the Pulsar transport has been hanging out inside the Wolverine codebase for years, but never got released for whatever reason. Someone asked about it a while back, so here we go!
Assuming that the US still exists tomorrow and I’m not trying to move my family to Canada, I’ll follow up with Wolverine’s new, fully robust transport option for Google Pub/Sub.
JasperFx Software frequently helps our customers wring better performance or scalability out of their systems. A somewhat frequent opportunity for improving the responsiveness and throughput of a system is merely identifying ways to batch up requests from middle tier, server side code to the backing database or databases. There’s a certain amount of overhead in making any network round trip between processes, and it often pays off in terms of performance to batch up queries or commands to reduce the number of network round trips.
Today I’m merely going to focus on Marten as a persistence tool and a bit on Wolverine as “Mediator” and show some ways that Marten reduces network round trips. Just know though that this general idea of reducing network round trips by batching up database queries or commands is certainly going to apply to improving performance with any other persistence tooling.
Batching Writes
First off, let’s just look at doing a mixed bag of “writes” with a Marten session to add, delete, or modify user data:
public static async Task modify_some_users(IDocumentSession session)
{
// Mixed bag of document operations
session.Insert(new User{FirstName = "Hans", LastName = "Gruber"});
session.Store(new User{FirstName = "John", LastName = "McClane"});
session.DeleteWhere<User>(x => x.LastName == "Miller");
session.Patch<User>(x => x.LastName == "May").Set(x => x.Nickname, "Mayday");
// Let's append some events too just for fun!
session.Events.StartStream<User>(new UserCreated("Harry", "Ellis"));
// Commit all the changes
await session.SaveChangesAsync();
}
What’s important to note in the code up above is that all the logical operations to insert, “upsert”, delete, patch, or start event streams are batched up into a single database round trip when session.SaveChangesAsync() is called. In the early days of Marten we tried a lot of different things to improve throughput, including alternative serializers, reducing string concatenation, code generation techniques, and alternative data structures internally. Our consistent finding was that the single biggest improvements always came from reducing network round trips, with alternative JSON serializers being a distant second, and every other factor far behind that.
If you’re curious about the technical underpinnings, Marten 7+ is creating a single NpgsqlBatch for all the commands and even using positional parameters because that’s a touch more efficient for the interaction with PostgreSQL.
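If you want to see the shape of that mechanism in raw Npgsql, here’s a rough illustration. This is emphatically not Marten’s actual generated code, and the table name, column names, and method are made up; it just shows a single NpgsqlBatch carrying multiple commands with positional parameters:

```csharp
using Npgsql;

public static class BatchingIllustration
{
    // Hypothetical method showing the general NpgsqlBatch mechanism
    public static async Task IllustrateAsync(
        NpgsqlConnection connection, Guid id, string json)
    {
        await using var batch = new NpgsqlBatch(connection);

        // Positional parameters ($1, $2) rather than named parameters
        batch.BatchCommands.Add(new NpgsqlBatchCommand(
            "insert into users (id, data) values ($1, $2)")
        {
            Parameters = { new() { Value = id }, new() { Value = json } }
        });

        batch.BatchCommands.Add(new NpgsqlBatchCommand(
            "delete from users where data ->> 'LastName' = $1")
        {
            Parameters = { new() { Value = "Miller" } }
        });

        // One network round trip for every command in the batch
        await batch.ExecuteNonQueryAsync();
    }
}
```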
Moving to another example, let’s say that you have workflow where you need to apply logical changes to a batch of Item entities using a mix of Marten and Wolverine. Here’s a first, naive cut at this handler:
public static class ApproveItemsHandler
{
// I'm passing in CancellationToken because:
// a. It's probably a good idea anyway
// b. That's how Wolverine "enforces" message timeouts
public static async Task HandleAsync(
ApproveItems message,
IDocumentSession session,
CancellationToken token)
{
foreach (var id in message.Ids)
{
var existing = await session.LoadAsync<Item>(id, token);
if (existing != null)
{
existing.Approved = true;
session.Store(existing);
}
}
await session.SaveChangesAsync(token);
}
}
Now, let’s assume that we could easily be getting 100-1000 different ids of Item entities to approve at any one time, which would make this operation chatty and potentially slow. Let’s make it a little worse though and add in Wolverine as a “mediator” to handle each individual Item inline:
public static class ApproveItemHandler
{
public static async Task HandleAsync(
ApproveItem message,
IDocumentSession session,
CancellationToken token)
{
var existing = await session.LoadAsync<Item>(message.Id, token);
if (existing == null) return;
existing.Approved = true;
await session.SaveChangesAsync(token);
}
}
public static class ApproveItemsHandler
{
// I'm passing in CancellationToken because:
// a. It's probably a good idea anyway
// b. That's how Wolverine "enforces" message timeouts
public static async Task HandleAsync(
ApproveItems message,
IMessageBus bus,
CancellationToken token)
{
foreach (var id in message.Ids)
{
await bus.InvokeAsync(new ApproveItem(id), token);
}
}
}
In terms of performance, the second version is even worse. We compounded the existing chattiness of looking up each Item individually by splitting the database “writes” into separate database calls and separate transactions through the “Wolverine as mediator” InvokeAsync() usage. Be aware that with any in process “mediator” tool like Wolverine, MediatR, Brighter, or MassTransit’s in process mediator functionality, each call to InvokeAsync() involves a certain amount of overhead and very likely means a nested transaction that gets committed independently from the parent message handling or HTTP request that triggered it. I might go so far as to say that calling IMessageBus.InvokeAsync() from another message handler is a “guilty until proven innocent” kind of approach.
I’d of course argue here that the performance may or may not end up being a big deal, but not having a transactional boundary around the original message processing can easily lead to inconsistent state in our system if any of the individual Item updates fail.
Let’s make one last version of this batch approve item handler with an eye toward reducing network round trips and keeping a strongly consistent transaction boundary around all the approvals (meaning they all succeed or all fail, no in between “who knows what really happened” state):
public static class ApproveItemsHandler
{
// I'm passing in CancellationToken because:
// a. It's probably a good idea anyway
// b. That's how Wolverine "enforces" message timeouts
public static async Task HandleAsync(
ApproveItems message,
IDocumentSession session,
CancellationToken token)
{
// Find all the related items in *one* network round trip
var items = await session.LoadManyAsync<Item>(token, message.Ids);
foreach (var item in items)
{
item.Approved = true;
session.Store(item);
}
await session.SaveChangesAsync(token);
}
}
In the usage above, we’re making one database call to fetch the matching Item entities, and updating all of the impacted Item entities in a single batched database command within the IDocumentSession.SaveChangesAsync(). This version should almost always be much faster than the earlier versions where we issued individual queries for each Item, plus we have better transactional consistency in the case of system errors.
Lastly of course for the sake of completeness, we could just do this with one network round trip:
public static class ApproveItemsHandler
{
// Assuming here that Wolverine "auto-transaction"
// middleware is in place
public static void Handle(
ApproveItems message,
IDocumentSession session)
{
session
.Patch<Item>(x => x.Id.IsOneOf(message.Ids))
.Set(x => x.Approved, true);
}
}
That last version eliminates the usage of current state to validate the operation first or give us any indication of what exactly was changed, but hey, that’s the fastest possible way to code this with Marten and it might be suitable sometimes in your own system.
Batch Querying
Marten has strong support for batch querying where you can combine any number of disparate queries in a batch to the database, and read the results one at a time afterward. Here’s an example from the Marten documentation, but just know that session in this case is a Marten IQuerySession:
// Start a new IBatchQuery from an active session
var batch = session.CreateBatchQuery();
// Fetch a single document by its Id
var user1 = batch.Load<User>("username");
// Fetch multiple documents by their id's
var admins = batch.LoadMany<User>().ById("user2", "user3");
// User-supplied sql
var toms = batch.Query<User>("where data ->> 'FirstName' = ?", "Tom");
// Where with Linq
var jills = batch.Query<User>().Where(x => x.FirstName == "Jill").ToList();
// Any() queries
var anyBills = batch.Query<User>().Any(x => x.FirstName == "Bill");
// Count() queries
var countJims = batch.Query<User>().Count(x => x.FirstName == "Jim");
// The Batch querying supports First/FirstOrDefault/Single/SingleOrDefault() selectors:
var firstInternal = batch.Query<User>().OrderBy(x => x.LastName).First(x => x.Internal);
// Kick off the batch query
await batch.Execute();
// All of the query mechanisms of the BatchQuery return
// Task's that are completed by the Execute() method above
var internalUser = await firstInternal;
Debug.WriteLine($"The first internal user is {internalUser.FirstName} {internalUser.LastName}");
That’s a little more code and complexity than you might have otherwise if you just make the queries independently, but there’s some significant performance gains to be made from batching queries.
This is a much, much longer discussion than I have ambition for today, but the rampant usage of repository abstractions around raw persistence tooling like Marten has a tendency to knock out more powerful functionality like query batching. That’s especially compounded by “noun-centric” code organization, where you may have IOrderRepository and IInvoiceRepository wrapping your raw persistence tooling, yet frequently have logical operations that deal with both Order and Invoice data at the same time. With Wolverine especially, I’m pushing JasperFx clients and our users to try eschewing these kinds of abstractions and leaning hard into Wolverine’s “A-Frame Architecture” approach so you can utilize the full power of Marten (or EF Core or RavenDb or whatever else you actually use).
What I can tell you is that for a current JasperFx client, we’re looking in the long run to collapse and simplify and inline their current usage of Railway Programming and MediatR-calling-other-MediatR handlers as a way to enable us to utilize query batching to optimize some of their very complicated operations that today end up being very chatty between the server and database.
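To make the cost of those wrapping abstractions concrete, here’s a hedged sketch of the difference. The IOrderRepository/IInvoiceRepository interfaces and the Order and Invoice documents are hypothetical; the batch query API is Marten’s as shown earlier:

```csharp
// Hypothetical noun-centric repositories: each awaited call below is its
// own network round trip to the database
var order = await orderRepository.FindAsync(orderId);
var invoice = await invoiceRepository.FindAsync(invoiceId);

// With direct access to the Marten session, the same two reads can be
// combined into a single round trip with batch querying
var batch = session.CreateBatchQuery();
var orderTask = batch.Load<Order>(orderId);
var invoiceTask = batch.Load<Invoice>(invoiceId);
await batch.Execute();

// Both tasks were completed by the single Execute() call above
var sameOrder = await orderTask;
var sameInvoice = await invoiceTask;
```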
Including Related Entities when Querying
There are plenty of times you’ll have an operation in your system that needs information from multiple, related entity types. Marten provides its own version of Include() in its LINQ provider as a way to batch query related documents in fewer network round trips, and hence get better performance, as in this example from the tests:
[Fact]
public async Task simple_include_for_a_single_document()
{
var user = new User();
var issue = new Issue { AssigneeId = user.Id, Title = "Garage Door is busted" };
using var session = theStore.IdentitySession();
session.Store<object>(user, issue);
await session.SaveChangesAsync();
using var query = theStore.QuerySession();
// The following query will fetch both the Issue document
// and the related User document for the Issue in one
// network round trip
User included = null;
var issue2 = query
.Query<Issue>()
.Include<User>(x => included = x).On(x => x.AssigneeId)
.Single(x => x.Title == issue.Title);
included.ShouldNotBeNull();
included.Id.ShouldBe(user.Id);
issue2.ShouldNotBeNull();
}
I’ll refer you to the documentation for more alternative usages, but just know that Marten has this capability and it’s a valuable way to improve performance in your system by reducing the number of network roundtrips between your code and the backend.
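As one hedged example of an alternative usage, the same Include() callback style from the test above can collect multiple related documents into a list while querying for many Issue documents at once (assuming a Marten IQuerySession named query, as in the test):

```csharp
// Collect every related User document into a list while fetching the
// matching Issue documents, still in a single network round trip
var users = new List<User>();

var issues = await query
    .Query<Issue>()
    .Include<User>(x => users.Add(x)).On(x => x.AssigneeId)
    .Where(x => x.Title.Contains("busted"))
    .ToListAsync();

// 'issues' holds the matching Issue documents and 'users' holds their
// related User documents after the query runs
```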
Marten’s Include() functionality was originally inspired by/copied from RavenDb. We’ve unfortunately had some confusion in the past from folks coming over from EF Core, where Include() means something very different. Oh, and just to pull aside the curtain, Marten isn’t doing any kind of JOIN behind the scenes, but rather a temporary table plus multiple SELECT statements.
Summary
I just wanted to get a handful of things across in this post:
Network round trips can easily be expensive and a contributing factor in poor system performance. Reducing the number of network round trips by batching queries can sometimes pay off overall even if that sometimes means more complex code
Marten has several features specifically meant to improve system performance by batching database queries that you can utilize. Both Marten and Wolverine are absolutely built with this philosophy of reducing network round trips as much as possible
Any coding or architectural strategy that results in excessive layering, long call stacks (A calls B that calls C that calls D that finally calls the database), or that just obfuscates your understanding of how system operations lead to network round trips can easily be harmful to your system’s performance, because you can’t easily “see” what your system is really doing
Yesterday I blogged about a small convenience feature we snuck into the release of Wolverine 3.0 last week for a JasperFx Software customer, which I wrote about in Combo HTTP Endpoint and Message Handler with Wolverine 3.0. Today I’d like to show some additions to Wolverine 3.0 that improve its ability to send responses back to the original sending application or raise other messages in response to problems.
One of Wolverine’s main functions is to be an asynchronous messaging framework where we expect messages to come into our Wolverine systems through messaging brokers like Azure Service Bus or Rabbit MQ or AWS SQS from another system (or you can message to yourself too of course). A frequent question from users is what if there’s a message that can’t be processed for some reason and there’s a need to send a message back to the originating system or to create some kind of alert message to a support person to intervene?
Let’s start with the assumption that at least some problems can be found with validation rules early in message processing such that you can determine early that a message is not able to be processed — and if this happens, send a message back to the original sender telling it (or a person) so. In the Wolverine documentation, we have this middleware for looking up account information for any message that implements an IAccountCommand interface:
// This is *a* way to build middleware in Wolverine by basically just
// writing functions/methods. There's a naming convention that
// looks for Before/BeforeAsync or After/AfterAsync
public static class AccountLookupMiddleware
{
// The message *has* to be first in the parameter list
// Before or BeforeAsync tells Wolverine this method should be called before the actual action
public static async Task<(HandlerContinuation, Account?)> LoadAsync(
IAccountCommand command,
ILogger logger,
// This app is using Marten for persistence
IDocumentSession session,
CancellationToken cancellation)
{
var account = await session.LoadAsync<Account>(command.AccountId, cancellation);
if (account == null)
{
logger.LogInformation("Unable to find an account for {AccountId}, aborting the requested operation", command.AccountId);
}
return (account == null ? HandlerContinuation.Stop : HandlerContinuation.Continue, account);
}
}
Now, let’s change the middleware up above to send a notification message back to whatever the original sender is if the referenced account cannot be found. For the first attempt, let’s do it by directly injecting IMessageContext (IMessageBus, but with some specific API additions we need in this case) from Wolverine like so:
public static class AccountLookupMiddleware
{
// The message *has* to be first in the parameter list
// Before or BeforeAsync tells Wolverine this method should be called before the actual action
public static async Task<(HandlerContinuation, Account?)> LoadAsync(
IAccountCommand command,
ILogger logger,
// This app is using Marten for persistence
IDocumentSession session,
IMessageContext bus,
CancellationToken cancellation)
{
var account = await session.LoadAsync<Account>(command.AccountId, cancellation);
if (account == null)
{
logger.LogInformation("Unable to find an account for {AccountId}, aborting the requested operation", command.AccountId);
// Send a message back to the original sender, whatever that happens to be
await bus.RespondToSenderAsync(new InvalidAccount(command.AccountId));
return (HandlerContinuation.Stop, null);
}
return (HandlerContinuation.Continue, account);
}
}
Okay, hopefully not that bad. Now though, let’s utilize Wolverine’s OutgoingMessages type to relay that message with this functionally equivalent code:
public static class AccountLookupMiddleware
{
// The message *has* to be first in the parameter list
// Before or BeforeAsync tells Wolverine this method should be called before the actual action
public static async Task<(HandlerContinuation, Account?, OutgoingMessages)> LoadAsync(
IAccountCommand command,
ILogger logger,
// This app is using Marten for persistence
IDocumentSession session,
CancellationToken cancellation)
{
var messages = new OutgoingMessages();
var account = await session.LoadAsync<Account>(command.AccountId, cancellation);
if (account == null)
{
logger.LogInformation("Unable to find an account for {AccountId}, aborting the requested operation", command.AccountId);
messages.RespondToSender(new InvalidAccount(command.AccountId));
return (HandlerContinuation.Stop, null, messages);
}
// messages would be empty here
return (HandlerContinuation.Continue, account, messages);
}
}
As of Wolverine 3.0, you’re now able to send messages from “before / validate” middleware by using either IMessageBus/IMessageContext or OutgoingMessages. This is in addition to the older ability to send messages on certain message failures through custom error handling policies; see the Wolverine documentation for a sample.
You’ve got options! Wolverine does have a concept of “respond to sender” if you’re sending messages between Wolverine applications that will let you easily send a new message inside a message handler or message handler exception handling policy back to the original sender. This functionality also works, admittedly in a limited capacity, with interoperability between MassTransit and Wolverine through Rabbit MQ.
With the release of Wolverine 3.0 last week, we snuck in a small feature at the last minute that was a request from a JasperFx Software customer. Specifically, they had a couple instances of a logical message type that needed to be handled both from Wolverine’s Rabbit MQ message transport, and also from the request body of an HTTP endpoint inside their BFF application.
You can certainly attack this problem a couple of different ways:
Use the Wolverine message handler as a mediator from within an HTTP endpoint. I’m not a fan of this approach because of the complexity, but it’s very common in .NET world of course.
Just delegate from an HTTP endpoint in Wolverine directly to the (in this case) static method message handler. Simpler mechanically, and we’ve done that a few times, but there’s a wrinkle coming of course.
One of the things that Wolverine’s HTTP endpoint model does is allow you to quickly make little one off validation rules using the ProblemDetails specification that’s great for one off validations that don’t fit cleanly into Fluent Validation usage (which is also supported by Wolverine for both message handlers and HTTP endpoints). Our client was using that pattern on HTTP endpoints, but wanted to expose the same logic — and validation logic — as a message handler while still retaining the validation rules and ProblemDetails response for HTTP.
As of the Wolverine 3.0 release last week, you can now use the ProblemDetails logic with message handlers as a one off validation test if you are using Wolverine.Http as well as Wolverine core. Let’s jump right to an example of a class to both handle a message as a message handler in Wolverine and handle the same message body as an HTTP web service with a custom validation rule using ProblemDetails for the results:
public record NumberMessage(int Number);
public static class NumberMessageHandler
{
// More likely, these one off validation rules do some kind of database
// lookup or use other services, otherwise you'd just use Fluent Validation
public static ProblemDetails Validate(NumberMessage message)
{
// Hey, this is contrived, but this is directly from
// Wolverine.Http test suite code:)
if (message.Number > 5)
{
return new ProblemDetails
{
Detail = "Number is bigger than 5",
Status = 400
};
}
// All good, keep on going!
return WolverineContinue.NoProblems;
}
// Look at this! You can use this as an HTTP endpoint too!
[WolverinePost("/problems2")]
public static void Handle(NumberMessage message)
{
Debug.WriteLine("Handled " + message);
Handled = true;
}
public static bool Handled { get; set; }
}
What’s significant about this class is that it’s a perfectly valid message handler that will be discovered by Wolverine as a message handler. Because of the presence of the [WolverinePost] attribute, Wolverine.HTTP will discover this as well and independently create an AspNetCore Endpoint route for this method.
If the Validate method returns a non-“No problems” response:
As a message handler, Wolverine will log a JSON serialized value of the ProblemDetails and stop all further processing
As an HTTP endpoint, Wolverine.HTTP will write the ProblemDetails out to the HTTP response, set the status code and content-type headers appropriately, and stop all further processing
Arguably, Wolverine’s entire schtick and raison d’être is to provide a much lower code ceremony development experience than other .NET server side development tools, and I think the code above is a great example of that. Wolverine.Http is able to glean and enhance the OpenAPI metadata created for the endpoint above to reflect the possible 400 status code and application/problem+json content type response. Compare that to a more typical .NET “vertical slice architecture” approach that is probably using MVC Core controllers or Minimal API registrations with plenty of OpenAPI-related code noise to delegate to MediatR message handlers with all of their attendant code ceremony.
Besides code ceremony, I’d also point out that the functions you write for Wolverine up above are much more often going to be pure functions and/or synchronous, which makes for much easier unit testing than with other tools. Lastly, and I’ll try to show this in a follow up blog post about Wolverine’s middleware strategy, Wolverine’s execution pipeline results in fewer object allocations at runtime than IoC-centric tools like MediatR, MassTransit, or MVC Core / Minimal API.
Just as the title says, Wolverine 3.0 is live and published to Nuget! I believe that this release addresses some of Wolverine’s prior weaknesses and adds some powerful new features requested by our users. The journey for Wolverine right now is to be the singular most effective set of tooling for building robust, maintainable, and testable server side code in the .NET ecosystem. If you’re wondering about the value proposition of Wolverine as any combination of mediator, in process message bus, asynchronous messaging framework, or alternative HTTP web service framework, it’s that Wolverine will help you be successful with substantially less code because Wolverine helps you much more to simplify the code inside of message handlers or HTTP endpoint methods than other comparable .NET tooling.
Enough of the salesmanship, before I go any farther, let me thank quite a few folks for their contributions to Wolverine:
Babu Annamalai
JT for all his work on Rabbit MQ for this release and a whole host of other contributions to the “Critter Stack” including leveling us up on Discord usage
Jesse for making quite a few suggestions that wound up being usability improvements
Haefele for his contributions
Erik Shafer for helping with project communications
JasperFx Software‘s clients across the globe for making it possible for me to work on the “Critter Stack” and push it forward (a lot of features and functionality in this release were built at the behest of JasperFx clients)
And finally, even though this doesn’t show up in GitHub contributor numbers sometimes, everyone who has taken the time to write up actionable bug reports or feature requests. That is an absolutely invaluable element of successful OSS community projects
The major new features or changes in this release are:
Wolverine is no longer directly coupled to Lamar and can now be used with at least ServiceProvider and, in theory, any other IoC tool that conforms to the .NET DI standards, but I’d highly recommend that you stick to the well lit paths of ServiceProvider or Lamar. Not that many people cared about this, but the ones who did cared a lot
You can now bootstrap Wolverine with HostApplicationBuilder or any .NET bootstrapper that supports IServiceCollection in one way or another. Wolverine is no longer limited to only IHostBuilder
Wolverine’s leadership election and node assignment subsystem got a pretty substantial overhaul. The result is much simpler code and far, far better behavior and reliability. This was arguably the biggest weakness of Wolverine < 3.0
“Sticky” message handling, for when you need to handle a single message type in multiple handlers with assignments to particular queues or listeners
An option for RavenDb persistence, including the transactional inbox/outbox, scheduled messaging, and saga persistence
Additions to the Rabbit MQ support including the ability to use header exchanges
Lightweight saga storage for either PostgreSQL or SQL Server that works without either Marten or EF Core
And plenty of small “reduce paper cuts and repetitive code” changes here and there. The documentation website also got some review and refinement as well.
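As an example of the new bootstrapping flexibility, you can now hang Wolverine off of the modern HostApplicationBuilder model. A minimal sketch, assuming Wolverine 3’s UseWolverine() extension works against HostApplicationBuilder the same way it always has against IHostBuilder:

```csharp
using Microsoft.Extensions.Hosting;
using Wolverine;

var builder = Host.CreateApplicationBuilder(args);

// Same options DSL as before, just hanging off
// HostApplicationBuilder instead of IHostBuilder
builder.UseWolverine(opts =>
{
    // Wolverine configuration...
});

using var host = builder.Build();
await host.RunAsync();
```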
What’s next, because there’s always a next…
There will be bug reports, and we’ll try to deal with them as quickly as possible. There’s a GCP PubSub transport option brewing in the community that may hit soon. It’s somewhat likely there will be a CosmosDb integration for Wolverine message storage, sagas, and scheduled messages this year. There were some last minute scope cuts for productivity features that may be addressed in follow up releases to Wolverine 3.0, but more likely in 4.0.
Mostly though, Wolverine 3.0 might be somewhat short lived as Wolverine 4.0 work (and Marten 8) will hopefully start as early as next week as the “Critter Stack” community and JasperFx Software try to implement what I’ve been calling the “Critter Stack 2025” goals heading into the 1st quarter of 2025.
I’m logging off for the rest of the night (at least from work), and I know there’ll be a list of questions or problems in the morning (the joy of being 5-7 hours behind most of your users and clients), but for now:
I’m working with a JasperFx Software client who is in the beginning stages of building a pretty complex, multi-step file import process that is going to involve several different services. For the sake of example code in this post, let’s say that we have the (much simplified from my client’s actual logical workflow) workflow from the diagram above:
External partners (or customers) are sending us an Excel sheet with records that our system will need to process and utilize within our downstream systems (invoices? payments? people? transactions?)
For the sake of improved throughput, the incoming file is broken into batches of records so the smaller batches can be processed in parallel
Each batch needs to be validated by the “Validation Service”
When each batch has been completely validated:
If there are any errors, send a rejection summary about the entire file to the original external partner
If there are no errors, try to send each record batch to “Downstream System #1”
When each batch has been completely accepted or rejected by “Downstream System #1”:
If there are any rejections, send a rejection summary about the entire file to the original external partner
If all batches are accepted by “Downstream System #1”, try to send each record batch to “Downstream System #2”
When each batch has been completely accepted or rejected by “Downstream System #2”:
If there are any rejections, send a rejection summary about the entire file to the original external partner and a message to “Downstream System #1” to reverse each previously accepted record in the file
If all batches are accepted by “Downstream System #2”, send a successful receipt message to the original external partner and archive the intermediate state
Right off the bat, I think we can identify a couple needs and challenges:
We need some way to track the current, in process state of an individual file and where all the various batches are in that process
At every point, make decisions about what to do next in the workflow based on the current state of the file as it incrementally progresses. And to make this as clear as possible, I think it’s extremely valuable to be able to clearly write, read, unit test, and reason about this workflow code without any significant coupling to the surrounding infrastructure.
The whole system should be resilient in the face of expected transient hiccups like a database getting overwhelmed or a downstream system being temporarily down, and “work” should never get lost or, hopefully, even require human intervention at runtime
Especially for large files, we absolutely better be prepared for some challenging concurrency issues when lots of incoming messages attempt to update that central file import processing state
Make it all performant too, of course!
Alright, so we’re definitely using both Marten for persistence and Wolverine for the workflow and messaging between services for all of this. The first basic approach for the state management is to use Wolverine’s stateful saga support with Marten. In that case we might have a saga type in Marten something like this:
// Again, express the stages in terms of your
// business domain instead of technical terms,
// but you'll do better than me on this front!
public enum FileImportStage
{
Validating,
Downstream1,
Downstream2,
Completed
}
// As long as it's JSON serialization friendly, you can happily
// tighten up the access here all you want, but I went for quick and simple
public class FileImportSaga :
// Only necessary marker type for Wolverine here
Saga,
// Opts into tracked version concurrency for Marten
// that we probably want in this case
IRevisioned
{
// Identity for this saga within our system
public Guid Id { get; set; }
public string FileName { get; set; }
public string PartnerTrackingNumber { get; set; }
public DateTimeOffset Created { get; set; } = DateTimeOffset.UtcNow;
public List<RecordBatchTracker> RecordBatches { get; set; } = new();
public FileImportStage Stage { get; set; } = FileImportStage.Validating;
// Much more in just a bit...
}
Inside our system, we can start a new FileImportSaga and launch the first set of messages to validate each batch of records with this handler that reacts to a request to import a new file:
public record ImportFile(string fileName);
// This could have been done inside the FileImportSaga as well,
// but I think I'd rather keep that focused on the state machine
// and workflow logic
public static class FileImportHandler
{
public static async Task<(FileImportSaga, OutgoingMessages)> Handle(
ImportFile command,
IFileImporter importer,
CancellationToken token)
{
var saga = await importer.ReadAsync(command.fileName, token);
var messages = new OutgoingMessages();
messages.AddRange(saga.CreateValidationMessages());
return (saga, messages);
}
}
public interface IFileImporter
{
Task<FileImportSaga> ReadAsync(string fileName, CancellationToken token);
}
Let’s say that we’re receiving messages back from the “Validation Service” like this:
public record ValidationResult(Guid Id, Guid BatchId, ValidationMessage[] Messages);
public record ValidationMessage(int RecordNumber, string Message);
Quick note: if Wolverine is also handling the messaging in the downstream systems, it makes this easier by tracking the saga id in message metadata from upstream to downstream and back to the upstream through response messages. Otherwise you’d have to track the saga id on the incoming messages yourself.
We could process the validation results in our saga one at a time like so:
// Use Wolverine's cascading message feature here for the next steps
public IEnumerable<object> Handle(ValidationResult validationResult)
{
var currentBatch = RecordBatches
.FirstOrDefault(x => x.Id == validationResult.BatchId);
// We'd probably rig up Wolverine error handling so that it either discards
// a message in this case or immediately moves it to the dead letter queue
// because there's no sense in trying to retry a message that can never be
// processed successfully
if (currentBatch == null) throw new UnknownBatchException(Id, validationResult.BatchId);
currentBatch.ReadValidationResult(validationResult);
var currentValidationStatus = determineValidationStatus();
switch (currentValidationStatus)
{
case RecordStatus.Pending:
yield break;
case RecordStatus.Accepted:
Stage = FileImportStage.Downstream1;
foreach (var batch in RecordBatches)
{
yield return new RequestDownstream1Processing(Id, batch.Id, batch.Records);
}
break;
case RecordStatus.Rejected:
// This saga is complete
MarkCompleted();
// Tell the original sender that this file is rejected
// I'm assuming that Wolverine will get the right information
// back to the original sender somehow
yield return BuildRejectionMessage();
break;
}
}
private RecordStatus determineValidationStatus()
{
if (RecordBatches.Any(x => x.ValidationStatus == RecordStatus.Pending))
{
return RecordStatus.Pending;
}
if (RecordBatches.Any(x => x.ValidationStatus == RecordStatus.Rejected))
{
return RecordStatus.Rejected;
}
return RecordStatus.Accepted;
}
First off, I’m going to argue that the way that Wolverine supports its stateful sagas and its cascading message feature make the workflow logic pretty easy to unit test in isolation from all the infrastructure. That part is good, right? But what’s maybe not great is that we could easily be getting a bunch of those ValidationResult messages back for the same file at the same time because they’re handled in parallel, so we really need to be prepared for that.
We could rely on the Wolverine/Marten combination’s support for optimistic concurrency and just retry ValidationResult messages that fail with a ConcurrencyException, but that’s potentially thrashing the database and the application pretty hard. We could also solve this problem in a “sledgehammer to crack a nut” kind of way by using Wolverine’s strictly ordered listener approach that would force the file import status messages to be processed in order on a single running node:
builder.Host.UseWolverine(opts =>
{
opts.UseRabbitMq(builder.Configuration.GetConnectionString("rabbitmq"));
opts.ListenToRabbitQueue("file-import-updates")
// Single file, serialized access across the
// entire running application cluster!
.ListenWithStrictOrdering();
});
That solves the concurrency issue in a pretty hardcore way, but it’s not going to be terribly performant because you’ve eliminated all concurrency between different files and you’re making the system constantly load, then save, the FileImportSaga data for intermediate steps. Let’s adjust this and incorporate Wolverine’s new message batching feature.
First off, let’s add a new validation batch message like so:
public record ValidationResultBatch(Guid Id, ValidationResult[] Results);
And a new message handler on our saga type for that new message type:
public IEnumerable<object> Handle(ValidationResultBatch batch)
{
var groups = batch.Results.GroupBy(x => x.BatchId);
foreach (var group in groups)
{
var currentBatch = RecordBatches
.FirstOrDefault(x => x.Id == group.Key);
// Same error handling posture as the one-at-a-time handler above
if (currentBatch == null) throw new UnknownBatchException(Id, group.Key);
foreach (var result in group)
{
currentBatch.ReadValidationResult(result);
}
}
return DetermineNextStepsAfterValidation();
}
// I pulled this out as a helper, but also, it's something
// you probably want to unit test in isolation on just the FileImportSaga
// class to nail down the workflow logic w/o having to do an integration
// test
public IEnumerable<object> DetermineNextStepsAfterValidation()
{
var currentValidationStatus = determineValidationStatus();
switch (currentValidationStatus)
{
case RecordStatus.Pending:
yield break;
case RecordStatus.Accepted:
Stage = FileImportStage.Downstream1;
foreach (var batch in RecordBatches)
{
yield return new RequestDownstream1Processing(Id, batch.Id, batch.Records);
}
break;
case RecordStatus.Rejected:
// This saga is complete
MarkCompleted();
// Tell the original sender that this file is rejected
// I'm assuming that Wolverine will get the right information
// back to the original sender somehow
yield return BuildRejectionMessage();
break;
}
}
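As the comment above suggests, DetermineNextStepsAfterValidation() lends itself to pure state-based tests. A sketch, assuming xUnit, that RecordBatchTracker exposes a settable ValidationStatus, and that Wolverine’s Saga base class exposes an IsCompleted() accessor (adjust to the real shape of your types):

```csharp
using System.Linq;
using Xunit;

public class FileImportSagaTests
{
    [Fact]
    public void any_rejected_batch_completes_the_saga_with_a_rejection_message()
    {
        var saga = new FileImportSaga();
        saga.RecordBatches.Add(new RecordBatchTracker { ValidationStatus = RecordStatus.Accepted });
        saga.RecordBatches.Add(new RecordBatchTracker { ValidationStatus = RecordStatus.Rejected });

        var messages = saga.DetermineNextStepsAfterValidation().ToList();

        // MarkCompleted() tells Wolverine to delete the saga document
        Assert.True(saga.IsCompleted());

        // Only the rejection summary goes out
        Assert.Single(messages);
    }

    [Fact]
    public void still_pending_batches_mean_no_cascading_messages_yet()
    {
        var saga = new FileImportSaga();
        saga.RecordBatches.Add(new RecordBatchTracker { ValidationStatus = RecordStatus.Pending });

        Assert.Empty(saga.DetermineNextStepsAfterValidation());
        Assert.Equal(FileImportStage.Validating, saga.Stage);
    }
}
```

No database or message broker in sight, which is exactly the point of keeping the workflow logic on the saga itself.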
And lastly, we need to tell Wolverine how to do the message batching, which I’ll do with this custom IMessageBatcher implementation:
public class ValidationResultBatcher : IMessageBatcher
{
public IEnumerable<Envelope> Group(IReadOnlyList<Envelope> envelopes)
{
var groups = envelopes
.GroupBy(x => x.Message.As<ValidationResult>().Id)
.ToArray();
foreach (var group in groups)
{
var message = new ValidationResultBatch(group.Key, group.Select(x => x.Message).OfType<ValidationResult>().ToArray());
// It's important here to pass along the group of envelopes that make up
// this batched message for Wolverine's transactional inbox/outbox
// tracking
yield return new Envelope(message, group);
}
}
public Type BatchMessageType => typeof(ValidationResultBatch);
}
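The heart of that batcher is just a LINQ GroupBy on the saga id. Setting the Wolverine envelope types aside, here’s the same grouping idea as a standalone sketch, with hypothetical tuples standing in for envelopes:

```csharp
using System;
using System.Linq;

var fileA = Guid.NewGuid();
var fileB = Guid.NewGuid();

// Three incoming "results" spread across two files
var results = new[]
{
    (FileId: fileA, BatchId: Guid.NewGuid()),
    (FileId: fileA, BatchId: Guid.NewGuid()),
    (FileId: fileB, BatchId: Guid.NewGuid())
};

// One batch per file id, just like ValidationResultBatcher produces
// one ValidationResultBatch envelope per saga id
var batches = results
    .GroupBy(x => x.FileId)
    .Select(g => new { FileId = g.Key, Count = g.Count() })
    .ToArray();

Console.WriteLine(batches.Length); // prints 2
```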
Then, in your Wolverine configuration in your Program file (or a helper method that’s called from Program), you’d tell Wolverine about the batching strategy like so:
builder.Host.UseWolverine(opts =>
{
// Other Wolverine configuration...
opts.BatchMessagesOf<ValidationResult>(x =>
{
x.Batcher = new ValidationResultBatcher();
x.BatchSize = 100;
});
});
With the message batching, you’re potentially putting less load on the database and improving performance by simply making fewer reads and writes overall. You might still have some concurrency concerns, so you have more options to control the parallelization of the ValidationResultBatch messages running locally like this in your UseWolverine() configuration:
opts.LocalQueueFor<ValidationResultBatch>()
// You *could* do this to completely prevent
// concurrency issues
.Sequential();

// Or instead, depend on some level of retries on concurrency
// exceptions and let it parallelize work by file
opts.LocalQueueFor<ValidationResultBatch>()
.MaximumParallelMessages(5);
We could choose to accept some risk of concurrent access to an individual FileImportSaga (unlikely after the batching, but still), so let’s add some better optimistic concurrency checking with our friend Marten. For any given Saga type that’s persisted with Marten, just implement the IRevisioned interface to let Wolverine know to opt into Marten’s concurrency protection like so:
public class FileImportSaga :
// Only necessary marker type for Wolverine here
Saga,
// Opts into tracked version concurrency for Marten
// that we probably want in this case
IRevisioned
That’s it, that’s all you need to do. What this gives you is a check, performed by Wolverine and Marten together, that during the processing of any message on a FileImportSaga no other message was successfully processed against that same FileImportSaga between the initial load of the saga and the commit of the transaction. If Marten detects a concurrency violation upon the commit, it rejects the transaction and throws a ConcurrencyException. We can handle that with a series of retries that just have Wolverine replay the message against the new state, using this error handling policy that I’m going to make specific to our FileImportSaga like so:
public class FileImportSaga :
// Only necessary marker type for Wolverine here
Saga,
// Opts into tracked version concurrency for Marten
// that we probably want in this case
IRevisioned
{
public static void Configure(HandlerChain chain)
{
// Retry the message up to 3 times
// with the specified wait times between attempts
chain.OnException<ConcurrencyException>()
.RetryWithCooldown(100.Milliseconds(), 250.Milliseconds(), 250.Milliseconds());
}
// ... the rest of FileImportSaga
}
So now we’ve got the beginnings of a multi-step process using Wolverine’s stateful saga support. We’ve also taken some care to protect our file import process against concurrency concerns. And we’ve done all of this in a way where we can quite handily test the workflow logic by just doing state-based tests against the FileImportSaga with no database or message broker infrastructure in sight before we waste any time trying to debug the whole shebang.
Summary
The key takeaway I hope you get from this is that the full Critter Stack has some significant tooling to help you build complex, multi-step workflows. Pair that with the easy getting started stories that both tools have, and I think you have a toolset that allows you to quickly start while also scaling up to more complex needs when you need that.
As so very often happens, this blog post was bigger than I thought it would be, and I’m breaking it up into a series of follow ups. In the next post, we’ll take the same logical FileImportSaga and do the logical workflow tracking with Marten event sourcing to track the state, and use some cool new Marten functionality for the workflow logic inside of Marten projections.
This might take a bit to get to, but I’ll also revisit this original implementation and talk about some extra Marten functionality to further optimize performance by baking in archiving through Marten soft-deletes and its support for PostgreSQL table partitioning.
So historically I’m actually pretty persnickety about being precise about technical terms and design pattern names, but I’m admittedly sloppy about calling something a “Saga” when maybe it’s technically a “Process Manager” and I got jumped online about that by a celebrity programmer. Sorry, not sorry?