Like Vertical Slice Architecture? Meet Wolverine.Http!

Before you read any of this, just know that it’s perfectly possible to mix and match Wolverine.HTTP, MVC Core controllers, and Minimal API endpoints in the same application.

Edit: The documentation links were all wrong when I pushed this late at night, of course, so:

If you’ve built ASP.NET Core applications of any size, you’ve probably run into the same friction: MVC controllers that balloon with constructor-injected dependencies, or Minimal API handlers that accumulate scattered app.MapGet(...) calls across multiple files. And if you’ve reached for a Mediator library to impose some structure, you’ve added a layer of abstraction that — while familiar — brings its own ceremony and a seam that can make unit testing harder than it should be.

Wolverine.HTTP is a different model. It’s a first-class HTTP framework built on top of ASP.NET Core that’s designed from the ground up for vertical slice architecture, has built-in transactional outbox support, and delivers a middleware story that is arguably more powerful than IEndpointFilter. And it doesn’t need a separate “Mediator” library, because Wolverine HTTP endpoints naturally support a “Vertical Slice” style with fewer moving parts than the average “check out my vertical slice architecture template!” approach online.

Moreover, Wolverine.HTTP has first class support for resilient messaging through Wolverine’s transactional outbox and asynchronous messaging. No other HTTP endpoint library in .NET has any such smooth integration.

What Is Vertical Slice Architecture?

The core idea is organizing code by feature rather than by technical layer. Instead of a Controllers/ folder, a Services/ folder, and a Repositories/ folder that all have to be navigated to understand one feature, you co-locate everything that belongs to a single use case: the request type, the handler, and any supporting types.

The payoff is locality. When a bug is filed against “create order”, you open one file. When a feature is deleted, you delete one file. There’s no hunting across layers.

Wolverine.HTTP is a natural fit for this style. A Wolverine HTTP endpoint is just a static class — no base class, no constructor injection, no framework coupling. The framework discovers it by scanning for [WolverineGet], [WolverinePost], [WolverinePut], [WolverineDelete], and [WolverinePatch] attributes.

And because of the world we live in now, I have to mention that there is already plenty of anecdotal evidence that AI-assisted coding works better with the “vertical slice” approach than it does against heavily layered approaches.

Getting Started

Install the NuGet package:

dotnet add package WolverineFx.Http

Wire it up in Program.cs:

var builder = WebApplication.CreateBuilder(args);
builder.Host.UseWolverine();
builder.Services.AddWolverineHttp();
var app = builder.Build();
app.MapWolverineEndpoints();
return await app.RunJasperFxCommands(args);

A Complete Vertical Slice

Here’s what a full feature slice looks like with Wolverine.HTTP. Request type, response type, and handler all in one place:

// The request
public record CreateTodo(string Name);

// The response
public record TodoCreated(int Id);

// The handler — a plain static class, no base class required
public static class CreateTodoEndpoint
{
    [WolverinePost("/todoitems")]
    public static IResult Post(
        CreateTodo command,
        IDocumentSession session) // injected by Wolverine from the IoC container
    {
        var todo = new Todo { Name = command.Name };
        session.Store(todo);
        return Results.Created($"/todoitems/{todo.Id}", todo);
    }
}

Compare that to what this would look like in MVC Core with a service layer and constructor injection. The Wolverine version is shorter, has no framework coupling in the handler method itself, and every dependency is explicit in the method signature. There’s no hidden state, and the method is trivially unit-testable in isolation.
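That testability claim is worth making concrete. Below is a minimal sketch with hand-rolled stand-in types — the CreateTodo, Todo, ITodoStore, and FakeTodoStore types here are simplifications invented for this example (the real handler would take Marten’s IDocumentSession) — showing that “testing” a static, Wolverine-style endpoint is just a direct method call:

```csharp
using System.Collections.Generic;

// Hypothetical stand-ins so this sketch is self-contained;
// a real Wolverine endpoint would take Marten's IDocumentSession.
public record CreateTodo(string Name);

public class Todo
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

public interface ITodoStore
{
    void Store(Todo todo);
}

public class FakeTodoStore : ITodoStore
{
    public List<Todo> Stored { get; } = new();
    public void Store(Todo todo) => Stored.Add(todo);
}

public static class CreateTodoEndpoint
{
    // Same shape as a Wolverine endpoint: a static method whose
    // dependencies are all explicit arguments
    public static Todo Post(CreateTodo command, ITodoStore store)
    {
        var todo = new Todo { Name = command.Name };
        store.Store(todo);
        return todo;
    }
}
```

Because every dependency is a method argument, the test is a plain call with a fake — no WebApplicationFactory, no mocking framework, no host bootstrapping.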

For reading data, it’s even cleaner:

public static class TodoEndpoints
{
    [WolverineGet("/todoitems")]
    public static Task<IReadOnlyList<Todo>> Get(IQuerySession session)
        => session.Query<Todo>().ToListAsync();

    [WolverineGet("/todoitems/{id}")]
    public static Task<Todo?> GetTodo(int id, IQuerySession session, CancellationToken cancellation)
        => session.LoadAsync<Todo>(id, cancellation);

    [WolverineDelete("/todoitems/{id}")]
    public static void Delete(int id, IDocumentSession session)
        => session.Delete<Todo>(id);
}

No controller. No service interface. No repository abstraction. Just the feature.

No Separate Mediator Needed

One of the most common patterns in .NET vertical slice architecture is using a Mediator library like MediatR to dispatch commands from controllers to handlers. Wolverine makes this unnecessary — it handles both HTTP routing and in-process message dispatch with the same execution pipeline.

If you’re coming from MediatR, the key difference is that there’s no IRequest<T> base type to implement, no IRequestHandler<TRequest, TResponse> to wire up, and no _mediator.Send(command) call to thread through your controllers. The HTTP endpoint is the handler. When you also want to dispatch a message for async processing, you just return it from the method (more on that below).

See our converting from MediatR guide for a detailed side-by-side comparison.

If you’re coming from MVC Core controllers or Minimal API, we have migration guides for both:

The Outbox: The Feature That Changes Everything

Here is where Wolverine.HTTP really pulls ahead. In any event-driven architecture, HTTP endpoints frequently need to do two things atomically: save data to the database and publish a message or event. If you do these as two separate operations and something crashes between them, you’ve lost a message — or worse, written corrupted state.

The standard solution is a transactional outbox: write the message to the same database transaction as the data change, then have a background process deliver it reliably.

With plain IMessageBus in a Minimal API handler, you’re responsible for the outbox mechanics yourself. With Wolverine.HTTP, the outbox is automatic. Any message returned from an endpoint method is enrolled in the same transaction as the handler’s database work.

The simplest pattern uses tuple return values. Wolverine recognizes any message types in the return tuple and routes them through the outbox:

public static class CreateTodoEndpoint
{
    [WolverinePost("/todoitems")]
    public static (Todo todo, TodoCreated created) Post(
        CreateTodo command,
        IDocumentSession session)
    {
        var todo = new Todo { Name = command.Name };
        session.Store(todo);

        // Both the HTTP response (Todo) and the outbox message (TodoCreated)
        // are committed in the same transaction. No message is lost.
        return (todo, new TodoCreated(todo.Id));
    }
}

The Todo becomes the HTTP response body. The TodoCreated message goes into the outbox and is delivered durably after the transaction commits. The database write and the message write are atomic — no coordinator needed.

If you need to publish multiple messages, use OutgoingMessages:

[WolverinePost("/orders")]
public static (OrderCreated, OutgoingMessages) Post(CreateOrder command, IDocumentSession session)
{
    var order = new Order(command);
    session.Store(order);

    var messages = new OutgoingMessages
    {
        new OrderConfirmationEmail(order.CustomerId),
        new ReserveInventory(order.Items),
        new NotifyWarehouse(order.Id)
    };

    return (new OrderCreated(order.Id), messages);
}

All four database and message operations commit together. This is the kind of correctness that is genuinely difficult to achieve with raw IMessageBus calls in Minimal API, and it comes for free in Wolverine.HTTP.

Middleware: Better Than IEndpointFilter

ASP.NET Core Minimal API introduced IEndpointFilter as its extensibility hook — a way to run logic before and after an endpoint handler. It works, but it has a few rough edges: you write a class that implements an interface with a single InvokeAsync method that receives an EndpointFilterInvocationContext, and you have to dig values out by index or type from the context object. It’s not especially readable, and composing multiple filters is verbose.

Wolverine.HTTP’s middleware model is different. Middleware is just a class with Before, After, and Finally methods that can take any of the same parameters the endpoint handler can take — including the request body, IoC services, HttpContext, and even values produced by earlier middleware. Wolverine generates and compiles the glue code for you (and it can be pre-generated ahead of time), so there’s no runtime reflection and no boxing.

Here’s a stopwatch middleware that times every request:

public class StopwatchMiddleware
{
    private readonly Stopwatch _stopwatch = new();

    public void Before() => _stopwatch.Start();

    public void Finally(ILogger logger, HttpContext context)
    {
        _stopwatch.Stop();
        logger.LogDebug(
            "Request for route {Route} ran in {Duration}ms",
            context.Request.Path,
            _stopwatch.ElapsedMilliseconds);
    }
}

A middleware method can also return IResult to conditionally stop the request. If the returned IResult is WolverineContinue.Result(), processing continues. Anything else — Results.Unauthorized(), Results.NotFound(), Results.Problem(...) — short-circuits the handler and writes the response immediately:

public class FakeAuthenticationMiddleware
{
    public static IResult Before(IAmAuthenticated message)
    {
        return message.Authenticated
            ? WolverineContinue.Result() // keep going
            : Results.Unauthorized();    // stop here
    }
}

This same pattern powers Wolverine’s built-in FluentValidation middleware — every validation failure becomes a ProblemDetails response with no boilerplate in the handler itself.
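As a sketch of how that looks in practice (this assumes the separate WolverineFx.Http.FluentValidation add-on package, and the registration method name below is taken from the Wolverine docs — worth confirming against your version):

```csharp
using FluentValidation;

// The request type from earlier, repeated so this sketch stands alone
public record CreateTodo(string Name);

// A validator for the request type. Wolverine's FluentValidation middleware
// discovers validators registered in the IoC container and runs them before
// the endpoint method executes; failures become ProblemDetails responses.
public class CreateTodoValidator : AbstractValidator<CreateTodo>
{
    public CreateTodoValidator()
    {
        RuleFor(x => x.Name).NotEmpty().MaximumLength(200);
    }
}
```

Registration happens at bootstrapping, e.g. opts.UseFluentValidationProblemDetailMiddleware() inside the MapWolverineEndpoints(opts => ...) call — the handler itself stays free of validation plumbing.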

The IHttpPolicy interface lets you apply middleware conventions across many endpoints at once:

public class RequireApiKeyPolicy : IHttpPolicy
{
    public void Apply(IReadOnlyList<HttpChain> chains, GenerationRules rules, IServiceContainer container)
    {
        foreach (var chain in chains.Where(c => c.Method.Tags.Contains("api")))
        {
            chain.Middleware.Insert(0, new MethodCall(typeof(ApiKeyMiddleware), nameof(ApiKeyMiddleware.Before)));
        }
    }
}

Policies are registered during bootstrapping:

app.MapWolverineEndpoints(opts =>
{
    opts.AddPolicy<RequireApiKeyPolicy>();
});

ASP.NET Core Middleware: Everything Still Works

Wolverine.HTTP is built on top of ASP.NET Core, not around it. Every piece of standard ASP.NET Core middleware works exactly as you’d expect — Wolverine endpoints are just routes in the middleware pipeline.

Authentication and Authorization work via the standard [Authorize] and [AllowAnonymous] attributes:

public static class OrderEndpoints
{
    [WolverineGet("/orders")]
    [Authorize]
    public static Task<IReadOnlyList<Order>> GetAll(IQuerySession session)
        => session.Query<Order>().ToListAsync();

    [WolverinePost("/orders")]
    [Authorize(Roles = "admin")]
    public static (Order, OrderCreated) Post(CreateOrder command, IDocumentSession session)
    {
        // ...
    }
}

You can also require authorization on a set of routes at bootstrapping time:

app.MapWolverineEndpoints(opts =>
{
    opts.ConfigureEndpoints(chain =>
    {
        chain.Metadata.RequireAuthorization();
    });
});

Output caching via [OutputCache]:

[WolverineGet("/products/{id}")]
[OutputCache(Duration = 60)]
public static Task<Product?> Get(int id, IQuerySession session)
    => session.LoadAsync<Product>(id);

Rate limiting via [EnableRateLimiting]:

builder.Services.AddRateLimiter(options =>
{
    options.AddFixedWindowLimiter("per-user", opt =>
    {
        opt.PermitLimit = 100;
        opt.Window = TimeSpan.FromMinutes(1);
    });
    options.RejectionStatusCode = 429;
});

app.UseRateLimiter();

// In your endpoint class:
[WolverinePost("/api/orders")]
[EnableRateLimiting("per-user")]
public static (Order, OrderCreated) Post(CreateOrder command, IDocumentSession session)
{
    // ...
}

The UseRateLimiter() call in the pipeline hooks standard ASP.NET Core rate limiting middleware, and the [EnableRateLimiting] attribute wires up the policy exactly as it does for Minimal API or MVC — no Wolverine-specific configuration required.

OpenAPI / Swagger Support

Wolverine.HTTP integrates with Swashbuckle and the newer Microsoft.AspNetCore.OpenApi package. Endpoints are discovered as standard ASP.NET Core route metadata, so Swagger UI works out of the box. You can use [Tags], [ProducesResponseType], and [EndpointSummary] to enrich the generated spec:

[Tags("Orders")]
[WolverinePost("/api/orders")]
[ProducesResponseType<Order>(201)]
[ProducesResponseType(400)]
public static (CreationResponse<Guid>, OrderStarted) Post(CreateOrder command, IDocumentSession session)
{
    // ...
}

Summary

Wolverine.HTTP gives you a cleaner foundation for vertical slice architecture in .NET:

  • No Mediator library needed — Wolverine handles both HTTP routing and in-process dispatch in the same pipeline
  • Discoverability built in for vertical slices — which is an advantage over Minimal API + Mediator style “vertical slices”
  • Lower ceremony than MVC controllers — static classes, method injection, no base types
  • Built-in outbox — messages returned from endpoints commit atomically with the database transaction
  • Better middleware than IEndpointFilter — Before/After methods with full dependency injection and IResult for conditional short-circuiting
  • Full ASP.NET Core compatibility — authentication, authorization, rate limiting, output caching, and all other middleware work without changes

If you’re starting a new project or looking to reduce complexity in an existing one, Wolverine.HTTP is worth a close look.

EF Core is Better with Wolverine

TL;DR: Wolverine has a pretty good development and production time story for developers using EF Core and that is constantly being improved.

Wolverine was explicitly restarted 3-4 years back specifically to combine with Marten as a complete end to end solution for Event Sourcing and CQRS with asynchronous messaging support. While that “Critter Stack” strategy has definitely paid off, vastly more .NET developers and systems are using EF Core as their primary persistence mechanism. And since I’d personally like to see Wolverine get much more usage and see JasperFx Software continue to grow, we’ve made a serious effort to improve the development time experience with EF Core and Wolverine.

To get started using EF Core with Wolverine, install this Nuget:

dotnet add package WolverineFx.EntityFrameworkCore

I should say, that’s not expressly necessary, but all of the development time accelerators, middleware, and transactional inbox/outbox integration we’re about to utilize require that library.

Let’s just get started with a simple Wolverine bootstrapping configuration that uses a single EF Core DbContext (for now; Wolverine happily supports multiple DbContext types in a single application) and SQL Server for the Wolverine message persistence we’ll need for transactional outbox support later:

var builder = Host.CreateApplicationBuilder();
var connectionString = builder.Configuration.GetConnectionString("sqlserver")!;

// Register a DbContext or multiple DbContext types as normal
builder.Services.AddDbContext<ItemsDbContext>(
    x => x.UseSqlServer(connectionString),

    // This is actually a significant performance gain
    // for Wolverine's sake
    optionsLifetime: ServiceLifetime.Singleton);

// Register Wolverine
builder.UseWolverine(opts =>
{
    // You'll need to independently tell Wolverine where and how to
    // store messages as part of the transactional inbox/outbox
    opts.PersistMessagesWithSqlServer(connectionString);

    // Adding EF Core transactional middleware, saga support,
    // and EF Core support for Wolverine storage operations
    opts.UseEntityFrameworkCoreTransactions();
});

// Rest of your bootstrapping...

With that in place, let’s look at a simple message handler that uses our ItemsDbContext:

public static class CreateItemCommandHandler
{
    public static ItemCreated Handle(
        // This would be the message
        CreateItemCommand command,

        // Any other arguments are assumed
        // to be service dependencies
        ItemsDbContext db)
    {
        // Create a new Item entity
        var item = new Item
        {
            Name = command.Name
        };

        // Add the item to the current
        // DbContext unit of work
        db.Items.Add(item);

        // This event being returned
        // by the handler will be automatically sent
        // out as a "cascading" message
        return new ItemCreated
        {
            Id = item.Id
        };
    }
}

In the handler above, you’ll notice there are no asynchronous calls at all, and that’s because we’ve turned on Wolverine’s transactional middleware for EF Core, which handles the actual transaction management. You’ll also notice that we’re using Wolverine’s cascading messages syntax to kick out an ItemCreated domain event upon the successful completion of this handler. The EF Core transactional middleware also handles the integration with Wolverine’s transactional outbox for reliable messaging. There’s absolutely nothing else for you to do in that handler to enable any of that behavior, and we can shove the typically ugly async/await mechanics off into Wolverine itself while keeping our actual application behavior cleaner.

Now let’s go a little farther and utilize some Wolverine optimizations for our EF Core usage and change the service registration up above to this:

// If you're okay with this, this will register the DbContext as normal,
// but make some Wolverine specific optimizations at the same time
builder.Services.AddDbContextWithWolverineIntegration<ItemsDbContext>(
    x => x.UseSqlServer(connectionString), "wolverine");

That version of the integration optimizes application performance by fine tuning the service lifetimes in a way that improves Wolverine’s internal usage of the DbContext type, and adds direct mappings for Wolverine’s internal inbox and outbox storage. By using a “Wolverine optimized DbContext” like this, Wolverine is able to improve your system’s performance by allowing EF Core to batch the SQL commands for your application code and Wolverine’s transactional outbox storage in a single database round trip — and that’s important because the single most common killer of performance in enterprise applications is database chattiness!

So that’s the bare bones basics, now let’s look at some recent improvements in Wolverine for…

Development Time Usage with EF Core

We’ve invested a lot of time recently in trying to make EF Core easier to work with at development time with Wolverine. This comes from Marten, where our database migrations have an “it should just work” model that quietly configures the database to match your application configuration at runtime for quick iteration at development time.

With the Wolverine.EntityFrameworkCore library, you can get that same behavior with EF Core through this option:

builder.UseWolverine(opts =>
{
    opts.Services.AddDbContextWithWolverineIntegration<ItemsDbContext>(
        x => x.UseSqlServer(connectionString));

    // Diff the DbContext against the live DB at startup and apply missing DDL.
    opts.UseEntityFrameworkCoreWolverineManagedMigrations();

    // This will make any necessary database migration
    // work happen at application startup
    opts.Services.AddResourceSetupOnStartup();
});

To be clear, with this setup, you can change your EF Core mappings, then restart the application or an IHost in testing and your application will automatically detect any database differences from the configuration and quietly apply a patch for you on application startup. This enables a much faster iteration cycle than EF Core Migrations do in my opinion.

The Weasel docs go deeper on the diff engine, opt-outs, and how it handles schemas.

Another feature in Marten that our community utilizes very heavily is the ability to quickly reset the state of a database in tests. I’ve also occasionally used the Respawn library for the same kind of capability when developing closer to the metal of a relational database. In a recent version of Wolverine, we’ve added similar abilities to our EF Core support, including a version of Marten’s IInitialData concept to help you reset data in tests:

public class SeedItems : IInitialData<ItemsDbContext>
{
    public async Task Populate(ItemsDbContext context, CancellationToken cancellation)
    {
        context.Items.Add(new Item { Name = "Seed" });
        await context.SaveChangesAsync(cancellation);
    }
}

builder.Services.AddInitialData<ItemsDbContext, SeedItems>();

And to see that in usage:

[Fact]
public async Task ordering_flow()
{
    await _host.ResetAllDataAsync<ItemsDbContext>();

    // arrange ... act ... assert
}

The ResetAllDataAsync<T>() method will look through a DbContext object to see all the tables it maps to, and delete all the data in those tables. It does take into account foreign key relationships to order its operations. After the data is wiped out, each IInitialData<T> registered in your system will be applied to lay down baseline data.
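The ordering concern is essentially a dependency sort: tables that reference other tables through foreign keys have to be wiped before the tables they point at. As a purely illustrative, self-contained sketch (this is my own stand-in code, not Wolverine’s actual implementation), that ordering could look like:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class DeleteOrder
{
    // edges[child] = the parent tables that child references via foreign keys.
    // Returns tables in a safe DELETE order: every child before its parents.
    public static List<string> Compute(
        IReadOnlyCollection<string> tables,
        IReadOnlyDictionary<string, string[]> edges)
    {
        var ordered = new List<string>();
        var visited = new HashSet<string>();

        void Visit(string table)
        {
            if (!visited.Add(table)) return;

            // First clear out every table that references this one
            foreach (var child in tables.Where(t =>
                         edges.TryGetValue(t, out var parents) && parents.Contains(table)))
            {
                Visit(child);
            }

            ordered.Add(table);
        }

        foreach (var table in tables) Visit(table);
        return ordered;
    }
}
```

The `visited` set keeps the walk from looping, and the depth-first traversal guarantees that OrderLines is emptied before Orders, and Orders before Customers, when those foreign keys exist.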

While this feature will surely have to be enhanced if many people start using it, this is already helping us make the Wolverine internal EF Core testing a lot more reliable and easier to use.

Declarative Persistence with EF Core

The next usage is special to Wolverine. A lot of times in simpler HTTP endpoints or command handlers, you simply need to load an entity by its identity or primary key, and frequently you’ll need to apply some repetitive validation that the entity exists in the first place. For that common need, Wolverine has declarative persistence helpers like the [Entity] attribute shown below, which can automatically load an entity through EF Core using an identity found on the incoming command type via naming conventions, as in this sample:

The identity can be explicitly mapped as well, of course, and the pre-generated code always reveals Wolverine’s behavior around handlers or HTTP endpoint methods.

public class ItemsDbContext : DbContext
{
    public DbSet<BacklogItem> BacklogItems { get; set; }
    public DbSet<Sprint> Sprints { get; set; }
}

public record CommitToSprint(Guid BacklogItemId, Guid SprintId);

public static class CommitToSprintHandler
{
    public static object[] Handle(
        CommitToSprint command,

        // There's a naming convention here about how
        // Wolverine "knows" the id for the BacklogItem
        // from the incoming command
        [Entity(Required = true)] BacklogItem item,
        [Entity(Required = true)] Sprint sprint)
    {
        return item.CommitTo(sprint);
    }
}

In the code above, Wolverine “knows” that the ItemsDbContext persists both the BacklogItem and Sprint entities, so it generates code around your handler to load those entities through ItemsDbContext. We can also tell Wolverine to automatically stop handling, or in HTTP usage return a 400 ProblemDetails response, if either of the requested entities is missing from the database. This helps keep Wolverine handler or HTTP endpoint code simpler by eliminating asynchronous code and letting you write more and more business or workflow logic in pure functions that are easy to test.

In the code above, the EF Core transactional middleware is calling ItemsDbContext.SaveChangesAsync() for you, and the automatic EF Core change tracking will catch the change to the BacklogItem.
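Conceptually, the pipeline Wolverine generates around a handler like CommitToSprintHandler has the shape sketched below. Everything here — FakeDb, GeneratedHandlerSketch, and the redefined entity types — is a hand-written stand-in to illustrate that shape, not Wolverine’s actual generated source:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Simplified stand-ins so the sketch compiles on its own
public record CommitToSprint(Guid BacklogItemId, Guid SprintId);

public class Sprint
{
    public Guid Id;
}

public class BacklogItem
{
    public Guid Id;
    public Guid? SprintId;

    // The pure, easily tested business logic
    public object[] CommitTo(Sprint sprint)
    {
        SprintId = sprint.Id;
        return Array.Empty<object>();
    }
}

public class FakeDb
{
    public Dictionary<Guid, BacklogItem> Items = new();
    public Dictionary<Guid, Sprint> Sprints = new();
    public int SaveCount;

    public Task SaveChangesAsync() { SaveCount++; return Task.CompletedTask; }
}

public static class GeneratedHandlerSketch
{
    // Roughly what the generated pipeline does: load entities by the
    // convention-derived ids, stop if a required entity is missing,
    // invoke the pure handler, then commit the unit of work.
    public static async Task<bool> Handle(CommitToSprint command, FakeDb db)
    {
        if (!db.Items.TryGetValue(command.BacklogItemId, out var item)) return false;
        if (!db.Sprints.TryGetValue(command.SprintId, out var sprint)) return false;

        var outgoing = item.CommitTo(sprint);

        await db.SaveChangesAsync(); // the transactional middleware's job
        // outgoing messages would be handed to the outbox here
        return true;
    }
}
```

The point of the sketch is the division of labor: all the asynchronous loading, existence checking, and committing lives in generated infrastructure, while the handler itself stays a pure function.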

And, I think this is cool: Wolverine has its own new mechanism to batch up the two queries above through a custom EF Core futures query mechanism, so the handler above can fetch both the BacklogItem and the Sprint entity in one database round trip.

But wait, there’s more!

At the risk of making this blog post way too long, here are more ways that Wolverine can make EF Core usage more successful:

Critter Stack Roadmap for 2026

I normally write this out in January, but I’m feeling like now is a good time to get this out as some of it is in flight. So with plenty of feedback from the other Critter Stack Core team members and a lot of experience seeing where JasperFx Software clients have hit friction in the past couple years, here’s my current thinking about where the Critter Stack development goes for 2026.

As I’m sure you can guess, every time I’ve written this yearly post, it’s been absurdly off the mark of what actually gets done through the year.

Critter Watch

For the love of all that’s good in this world, JasperFx Software needs to get an MVP out the door that’s usable for early adopters who are already clamoring for it. The “Critter Watch” tool, in a nutshell, should be able to tell you everything you need to know about how or why a Critter Stack application is unhealthy and then also give you the tools you need to heal your systems when anything does go wrong.

The MVP is still shaping up as:

  • A visualization and explanation of the configuration of your Critter Stack application
  • Performance metrics integration from both Marten and Wolverine
  • Event Store monitoring and management of projections and subscriptions
  • Wolverine node visualization and monitoring
  • Dead Letter Queue querying and management
  • Alerting – but I don’t have a huge amount of detail yet. I’m paying close attention to the issues JasperFx clients see in production applications though, and using that to inform what information Critter Watch will surface through its user interface and push notifications

This work is heavily in flight, and will hopefully accelerate over the holidays and January as JasperFx Software clients tend to be much quieter. I will be publishing a separate vision document soon for users to review.

The Entire “Critter Stack”

  • We’re standing up the new docs.jasperfx.net (Babu is already working on this) to hold documentation on supporting libraries and more tutorials and sample projects that cross Marten & Wolverine. This will finally add some documentation for Weasel (database utilities and migration support), our command line support, the stateful resource model, the code generation model, and everything to do with DevOps recipes.
  • Play the “Cold Start Optimization” epic across both Marten and Wolverine (and possibly Lamar). I don’t think that true AOT support is feasible, but maybe we can get a lot closer. Have an optimized start mode of some sort that eliminates all or at least most of:
    • Reflection usage in bootstrapping
    • Reflection usage at runtime, which today is really just occasional calls to object.GetType()
    • Assembly scanning of any kind, which we know can be very expensive for some systems with very large dependency trees.
  • Increased and improved integration with EF Core across the stack

Marten

The biggest set of complaints I’m hearing lately is all around views between multiple entity types or projections involving multiple stream types or multiple entity types. I also got some feedback from multiple past clients about the limitation of Marten as a data source underneath UI grids, which isn’t particularly a new bit of feedback. In general, there also appears to be a massive opportunity to improve Marten’s usability for many users by having more robust support in the box for projecting event data to flat, denormalized tables.

I think I’d like to prioritize a series of work in 2026 to alleviate the complicated view problem:

  • The “Composite Projections” Epic where you might use the build products of upstream projections to create multi-stream projection views. This is also an opportunity to ratchet up even more scalability and throughput in the daemon. I’ve gotten positive feedback from a couple JasperFx clients about this. It’s also a big opportunity to increase the throughput and scalability of the Async Daemon by making fewer database requests
  • Revisit GroupJoin in the LINQ support, even though that’s going to be absolutely miserable to build. GroupJoin() might end up being a much easier usage than all our Include() functionality.
  • A first class model to project Marten event data with EF Core. In this proposed model, you’d use an EF Core DbContext to do all the actual writes to a database. 

Other than that, some other ideas that have kicked around for awhile are:

  • Improve the documentation and sample projects, especially around the usage of projections
  • Take a better look at the full text search features in Marten
  • Finally support the PostGIS extension in Marten. I think that could be something flashy and quick to build, but I’d strongly prefer to do this in the context of an actual client use case.
  • Continue to improve our story around multi-stream operations. I’m not enthusiastic about “Dynamic Consistency Boundaries” (DCB) in regards to Marten though, so I’m not sure what this actually means yet. This might end up centering much more on the integration with Wolverine’s “aggregate handler workflow”, which is already perfectly happy to support strong consistency models even with operations that touch more than one event stream.

Wolverine

Wolverine is by far and away the busiest part of the Critter Stack in terms of active development right now, but I think that slows down soon. To be honest, most work at this point is us reacting tactically to JasperFx client or user needs. In terms of general, strategic themes, I think that 2026 will involve:

  • In conjunction with “CritterWatch”, improving Wolverine’s management story around dead letter queueing
  • I would love to expand Wolverine’s database support beyond “just” SQL Server and PostgreSQL
  • Improving the Kafka integration. That’s not our most widely used messaging broker, but that seems to be the leading source of enhancement requests right now

New Critters?

We’ve done a lot of preliminary work to potentially build new Critter Stack event store alternatives based on different database engines. I’ve always believed that SQL Server would be the logical next database engine, but we’ve gotten fewer and fewer requests for this as PostgreSQL has become a much more popular database choice in the .NET ecosystem.

I’m not sure this will be a high priority in 2026, but you never know…

Leader Election and Virtual Actors in Wolverine

A JasperFx Software client was asking recently about the features for software-controlled load balancing and “sticky” agents I’m describing in this post. Since these features are both critical to Wolverine functionality and maybe not perfectly documented, it’s a great topic for a new blog post: both because it’s helpful to understand what’s going on under the covers if you’re running Wolverine in production, and in case you want to build your own software-managed load distribution for your own virtual agents.

Wolverine was rebooted around 2022 as a complement to Marten to extend the newly named “Critter Stack” into a full Event Driven Architecture platform, and arguably the only single “batteries included” technical stack for Event Sourcing on the .NET platform.

One of the things that Wolverine does for Marten is to provide a first class event subscription function where Wolverine can either asynchronously process events captured by Marten in strict order or forward those events to external messaging brokers. Those first class event subscriptions and the existing asynchronous projection support from Marten can both be executed in only one process at a time because the processing is stateful. As you can probably imagine, it would be very helpful for your system’s scalability and performance if those asynchronous projections and subscriptions could be spread out over an executing cluster of system nodes.

Fortunately enough, Wolverine works with Marten to distribute subscriptions and projections, assigning different asynchronous projections and event subscriptions to run on different nodes so that you get a more even spread of work throughout your running application cluster, as in this illustration:

To support that capability, Wolverine uses a combination of its leader election, which allows Wolverine to designate one — and only one — node within an application cluster as the “leader”, and its “agent family” feature, which allows for assigning stateful agents across a running cluster of nodes. In the case above, there’s a single agent for every configured projection or subscription in the application, and Wolverine will try to spread those agents out over the application cluster.

Just for the sake of completeness, if you have configured Marten for multi-tenancy through separate databases, Wolverine’s projection/subscription distribution will distribute by database rather than by individual projection or subscription + database.

Alright, so here are the things you might want to know about the subsystem above:

  1. You need to have some sort of Wolverine message persistence configured for your application. You might already be familiar with that for the transactional inbox or outbox storage, but there’s also storage to persist information about the running nodes and agents within your system that’s important for both the leader election and agent assignments.
  2. There has to be some sort of “control endpoint” configured for Wolverine to be able to communicate between specific nodes. There is a built in “database control” transport that can act as a fallback mechanism, but all of this back and forth communication works better with transports like Wolverine’s Rabbit MQ integration that can quietly use non-durable queues per node for this intra-node communication.
  3. Wolverine’s leader election process tries to make sure that there is always a single node running the “leader agent” that monitors the status of the other running nodes and all the known agents.
  4. Wolverine’s agent subsystem (some other frameworks call these “virtual actors”) consists of the IAgentFamily and IAgent interfaces.

Building Your Own Agents

Let’s say you have some kind of stateful process in your system that you always want to be running, maybe something that polls against an external system. And because this is a somewhat common scenario, let’s also say that you need a completely separate polling mechanism for each different outside entity or tenant.

First, we need to implement this Wolverine interface to be able to start and stop agents in your application:

/// <summary>
///     Models a constantly running background process within a Wolverine
///     node cluster
/// </summary>
public interface IAgent : IHostedService
{
    /// <summary>
    ///     Unique identification for this agent within the Wolverine system
    /// </summary>
    Uri Uri { get; }
    
    /// <summary>
    /// Is the agent running, stopped, or paused? Not really used
    /// by Wolverine *yet* 
    /// </summary>
    AgentStatus Status { get; }
}

IHostedService up above is the same old interface from .NET for long running processes, and Wolverine just adds a Uri and a currently unused Status property (that hopefully gets used by “CritterWatch” someday soon for health checks). You could even use BackgroundService from .NET itself as a base class.
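To make that concrete, here’s a rough sketch of what a custom agent could look like for the tenant polling scenario. To be clear, everything about TenantPollingAgent (the name, the poller:// scheme, the polling loop) is invented for illustration; only IAgent and AgentStatus are real Wolverine types, and the exact namespace may vary by version:

```csharp
using Wolverine.Runtime.Agents;

// Hypothetical agent that polls an external system on behalf of one tenant
public class TenantPollingAgent : IAgent
{
    private readonly string _tenantId;
    private Timer? _timer;

    public TenantPollingAgent(string tenantId)
    {
        _tenantId = tenantId;

        // The Uri uniquely identifies this agent across the whole cluster
        Uri = new Uri($"poller://{tenantId}");
    }

    public Uri Uri { get; }

    // Not actually used by Wolverine yet, as noted above
    public AgentStatus Status { get; private set; }

    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Kick off a simple polling loop when Wolverine assigns
        // this agent to the current node
        _timer = new Timer(_ => PollExternalSystem(), null,
            TimeSpan.Zero, TimeSpan.FromSeconds(30));
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        // Wolverine calls this when the agent is reassigned elsewhere
        _timer?.Dispose();
        return Task.CompletedTask;
    }

    private void PollExternalSystem()
    {
        // Hypothetical: check the external system for new work for _tenantId
    }
}
```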

Next, you need a way to tell Wolverine what agents exist and a strategy for distributing the agents across a running application cluster by implementing this interface:

/// <summary>
///     Pluggable model for managing the assignment and execution of stateful, "sticky"
///     background agents on the various nodes of a running Wolverine cluster
/// </summary>
public interface IAgentFamily
{
    /// <summary>
    ///     Uri scheme for this family of agents
    /// </summary>
    string Scheme { get; }

    /// <summary>
    ///     List of all the possible agents by their identity for this family of agents
    /// </summary>
    /// <returns></returns>
    ValueTask<IReadOnlyList<Uri>> AllKnownAgentsAsync();

    /// <summary>
    ///     Create or resolve the agent for this family
    /// </summary>
    /// <param name="uri"></param>
    /// <param name="wolverineRuntime"></param>
    /// <returns></returns>
    ValueTask<IAgent> BuildAgentAsync(Uri uri, IWolverineRuntime wolverineRuntime);

    /// <summary>
    ///     All supported agent uris by this node instance
    /// </summary>
    /// <returns></returns>
    ValueTask<IReadOnlyList<Uri>> SupportedAgentsAsync();

    /// <summary>
    ///     Assign agents to the currently running nodes when new nodes are detected or existing
    ///     nodes are deactivated
    /// </summary>
    /// <param name="assignments"></param>
    /// <returns></returns>
    ValueTask EvaluateAssignmentsAsync(AssignmentGrid assignments);
}

In this case, you can plug custom IAgentFamily strategies into Wolverine by just registering a concrete service in your DI container against that IAgentFamily interface. Wolverine does a simple IServiceProvider.GetServices<IAgentFamily>() during its bootstrapping to find them.

As you can probably guess, the Scheme should be unique, and the Uri structure needs to be unique across all of your agents. EvaluateAssignmentsAsync() is your hook to create distribution strategies, with a simple “just distribute these things evenly across my cluster” strategy possible like this example from Wolverine itself:

    public ValueTask EvaluateAssignmentsAsync(AssignmentGrid assignments)
    {
        assignments.DistributeEvenly(Scheme);
        return ValueTask.CompletedTask;
    }

If you go looking for it, the equivalent in Wolverine’s distribution of Marten projections and subscriptions is a tiny bit more complicated in that it uses knowledge of node capabilities to support blue/green semantics, only distributing work to the servers that “know” how to use particular agents (like version 3 of a projection that doesn’t exist on “blue” nodes):

    public ValueTask EvaluateAssignmentsAsync(AssignmentGrid assignments)
    {
        assignments.DistributeEvenlyWithBlueGreenSemantics(SchemeName);
        return new ValueTask();
    }

The AssignmentGrid tells you the current state of your application in terms of which node is the leader, what all the currently running nodes are, and which agents are running on which nodes. Beyond the even distribution, the AssignmentGrid has fine grained API methods to start, stop, or reassign agents.
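Pulling the earlier polling scenario together, a complete (if simplified) agent family might look something like the sketch below. The tenant list, the poller:// scheme, and the TenantPollingAgent type are all hypothetical stand-ins; IAgentFamily, AssignmentGrid, and DistributeEvenly() are the real Wolverine pieces:

```csharp
public class TenantPollingAgentFamily : IAgentFamily
{
    // Uri scheme that uniquely identifies this family of agents
    public string Scheme => "poller";

    public ValueTask<IReadOnlyList<Uri>> AllKnownAgentsAsync()
    {
        // Hypothetical: one agent per known tenant. In real code this
        // would likely come from configuration or a database lookup
        IReadOnlyList<Uri> uris = new List<Uri>
        {
            new("poller://tenant1"),
            new("poller://tenant2")
        };
        return ValueTask.FromResult(uris);
    }

    public ValueTask<IAgent> BuildAgentAsync(Uri uri, IWolverineRuntime wolverineRuntime)
    {
        // TenantPollingAgent is a hypothetical IAgent implementation
        return ValueTask.FromResult<IAgent>(new TenantPollingAgent(uri.Host));
    }

    // In this sketch, every node is capable of running any poller
    public ValueTask<IReadOnlyList<Uri>> SupportedAgentsAsync() => AllKnownAgentsAsync();

    public ValueTask EvaluateAssignmentsAsync(AssignmentGrid assignments)
    {
        // Just spread the pollers evenly across the running nodes
        assignments.DistributeEvenly(Scheme);
        return ValueTask.CompletedTask;
    }
}

// Registered against IAgentFamily in the DI container so that
// Wolverine's GetServices<IAgentFamily>() discovery can find it
builder.Services.AddSingleton<IAgentFamily, TenantPollingAgentFamily>();
```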

To wrap this up, I’m trying to guess at the questions you might have and see if I can cover all the bases:

  • Is some kind of persistence necessary? Yes, absolutely. Wolverine has to have some way to “know” what nodes are running and which agents are really running on each node.
  • How does Wolverine do health checks for each node? If you look in the wolverine_nodes table when using PostgreSQL or Sql Server, you’ll see a heartbeat column with a timestamp. Each Wolverine application is running a polling operation that updates its heartbeat timestamp and also checks that there is a known leader node. In normal shutdown, Wolverine tries to gracefully mark the current node as offline and send a message to the current leader node if there is one telling the leader that the node is shutting down. In real world usage though, Kubernetes or who knows what is frequently killing processes without a clean shutdown. In that case, the leader node will be able to detect stale nodes that are offline, eject them from the node persistence, and redistribute agents.
  • Can Wolverine switch over the leadership role? Yes, and that should be relatively quick. Plus Wolverine will keep trying to start a leader election if no leader is found. And yet, it’s an imperfect world where things can go wrong, so there will 100% be the ability to either kickstart or assign the leader role from the forthcoming CritterWatch user interface.
  • How does the leadership election work? Crudely and relatively effectively. All of the storage mechanics today have some kind of sequential node number assignment for all newly persisted nodes. In a kind of simplified “Bully Algorithm,” Wolverine will always try to send “try assume leadership” messages to the node with the lowest sequential node number, which will always be the longest running node. When a node does try to take leadership, it uses whatever kind of global, advisory lock function the current persistence supports to get sole access to write the leader node assignment to itself, but it will back out if it detects from storage that leadership is already running on another active node.
  • Can I extract the Wolverine leadership election for my own usage? Not easily at all, sorry. I don’t have the link handy, but there are, I believe, a couple of OSS libraries in .NET that implement the Raft consensus algorithm for leader election. I honestly don’t remember why I didn’t think that was suitable for Wolverine though. Leadership election is most certainly not for the faint of heart.
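If it helps to visualize the election bullet above, the described flow could be sketched roughly like this. This is emphatically not Wolverine’s actual code; INodeStorage and all of the method names here are invented purely to illustrate the advisory-lock-then-verify sequence:

```csharp
// Illustrative pseudocode of the described election step, not Wolverine internals
public static async Task TryAssumeLeadershipAsync(INodeStorage storage, Guid myNodeId)
{
    // Use the persistence's global advisory lock so that only one node
    // at a time can attempt to write the leader assignment
    await using var globalLock = await storage.TryGetGlobalLockAsync("wolverine-leadership");
    if (globalLock == null) return; // another node is electing right now

    // Back out if leadership is already running on another active node
    var leader = await storage.FindCurrentLeaderAsync();
    if (leader != null && leader.IsActive) return;

    // Safe to claim leadership for this node
    await storage.WriteLeaderAssignmentAsync(myNodeId);
}
```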

Summary

I’m not sure how useful this post was for most users, but hopefully it’s helpful to some. I’m sure I didn’t hit every possible question or concern you might have, so feel free to reach out in Discord or comments here with any questions.

Improved Declarative Persistence in Wolverine

To continue a consistent theme about how Wolverine is becoming the antidote to high ceremony Clean/Onion Architecture approaches, Wolverine 4.8 added some significant improvements to its declarative persistence support (partially after seeing how a recent JasperFx Software client was encountering a little bit of repetitive code).

A pattern I try to encourage — and many Wolverine users do like — is to make the main method of a message handler or an HTTP endpoint be the “happy path” after validation and even data lookups so that that method can be a pure method that’s mostly concerned with business or workflow logic. Wolverine can do this for you through its “compound handler” support that gets you to a low ceremony flavor of Railway Programming.

With all that out of the way, I saw a client frequently writing code something like this endpoint that would need to process a command that referenced one or more entities or event streams in their system:

public record ApproveIncident(Guid Id);

public class ApproveIncidentEndpoint
{
    // Try to load the referenced incident
    public static async Task<(Incident, ProblemDetails)> LoadAsync(
        
        // Say this is the request body, which we can *also* use in
        // LoadAsync()
        ApproveIncident command, 
        
        // Pulling in Marten
        IDocumentSession session,
        CancellationToken cancellationToken)
    {
        var incident = await session.LoadAsync<Incident>(command.Id, cancellationToken);
        if (incident == null)
        {
            return (null, new ProblemDetails { Detail = $"Incident {command.Id} cannot be found", Status = 400 });
        }

        return (incident, WolverineContinue.NoProblems);
    }

    [WolverinePost("/api/incidents/approve")]
    public SomeResponse Post(ApproveIncident command, Incident incident)
    {
        // actually do stuff knowing that the Incident is valid
        return new SomeResponse();
    }
}

I’d ask you to mostly pay attention to the LoadAsync() method, and imagine copy & pasting that dozens of times in a system. And sure, you could go back to returning IResult as a continuation from the HTTP endpoint method above, but that moves clutter back into your HTTP method and would add more manual work to mark up the method with attributes for OpenAPI metadata. Or we could improve the OpenAPI metadata generation by returning something like Task<Results<Ok<SomeResponse>, ProblemHttpResult>>, but c’mon, that’s an absolute eyesore that detracts from the readability of the code.

Instead, let’s use the newly enhanced version of Wolverine’s [Entity] attribute to simplify the code above and still get OpenAPI metadata generation that reflects both the 200 SomeResponse happy path and 400 ProblemDetails with the correct content type. That would look like this:

    [WolverinePost("/api/incidents/approve")]
    public static SomeResponse Post(
        // The request body. Wolverine doesn't require [FromBody], but it wouldn't hurt
        ApproveIncident command, 
        
        [Entity(OnMissing = OnMissing.ProblemDetailsWith400, MissingMessage = "Incident {0} cannot be found")]
        Incident incident)
    {
        // actually do stuff knowing that the Incident is valid
        return new SomeResponse();
    }

Behaviorally, at runtime that endpoint will try to load the Incident entity from whatever persistence tooling is configured for the application (Marten in the tests) using the “Id” property of the ApproveIncident object deserialized from the HTTP request body. If the data cannot be found, the HTTP request ends with a 400 status code and a ProblemDetails response with the configured message up above. If the Incident can be found, it’s happily passed along to the main endpoint method.

Not that every endpoint or message handler is really this simple, but plenty of times you would just be changing a property on the incident and persisting it. The endpoint can *still* be mostly a pure function with the existing persistence helpers in Wolverine like so:

    [WolverinePost("/api/incidents/approve")]
    public static (SomeResponse, IStorageAction<Incident>) Post(
        // The request body. Wolverine doesn't require [FromBody], but it wouldn't hurt
        ApproveIncident command, 
        
        [Entity(OnMissing = OnMissing.ProblemDetailsWith400, MissingMessage = "Incident {0} cannot be found")]
        Incident incident)
    {
        incident.Approved = true;
        
        // actually do stuff knowing that the Incident is valid
        return (new SomeResponse(), Storage.Update(incident));
    }

Here are some things I’d like you to know about that [Entity] attribute up above and how it will work out in real usage:

  • There is some default conventional magic going on to “decide” how to get the identity value for the entity being loaded (“IncidentId” or “Id” on the command type or request body type, then the same value in routing values for HTTP endpoints or declared query string values). This can be explicitly configured on the attribute with something like [Entity(nameof(ApproveIncident.Id))]
  • Every attribute type that I’m mentioning in this post that can be applied to method parameters supports the same identity logic as I explained in the previous bullet
  • Before Wolverine 4.8, the “on missing” behavior was to simply set a 404 status code in HTTP or log that required data was missing in message handlers and quit. Wolverine 4.8 adds the ability to control the “on missing” behavior
  • This new “on missing” behavior is available on the older [Document] attribute in Wolverine.Http.Marten, and [Document] is now a direct subclass of [Entity] that can be used with either message handlers or HTTP endpoints
  • The existing [AggregateHandler] and [Aggregate] attributes that are part of the Wolverine + Marten “aggregate handler workflow” (the “C” in CQRS) now support this “on missing” behavior, but it’s “opt in,” meaning that you would have to use [Aggregate(Required = true)] to get the gating logic. We had to make that required check opt in to avoid breaking existing behavior when folks upgraded.
  • The lighter weight [ReadAggregate] in the Marten integration also standardizes on this “OnMissing” behavior
  • Because of the confusion I was seeing from some users between [Aggregate], which is meant for writing events and is a little heavier runtime-wise than [ReadAggregate], there’s a new [WriteAggregate] attribute with identical behavior to [Aggregate] that is now available for message handlers as well. I think that [Aggregate] might get deprecated soon-ish to sidestep the potential confusion
  • [Entity] attribute usage is 100% supported for EF Core and RavenDb as well as Marten. Wolverine is even smart enough to select the correct DbContext type for the declared entity
  • If you coded with any of that [Entity] or Storage stuff and switched persistence tooling, your code should not have to change at all
  • There’s no runtime Reflection going on here. The usage of [Entity] is impacting Wolverine’s code generation around your message handler or HTTP endpoint methods.
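Since [Entity] now works in message handlers as well as HTTP endpoints, a handler-side equivalent of the earlier endpoint might look like the sketch below, reusing the hypothetical ApproveIncident and Incident types from the samples above:

```csharp
public static class ApproveIncidentHandler
{
    public static IStorageAction<Incident> Handle(
        // Wolverine finds the identity from the "Id" property
        // on the message by convention
        ApproveIncident command,

        // In a message handler, ThrowException aborts with a
        // RequiredDataMissingException instead of writing an HTTP response
        [Entity(OnMissing = OnMissing.ThrowException, MissingMessage = "Incident {0} cannot be found")]
        Incident incident)
    {
        // Pure happy path logic, with the Incident guaranteed to exist
        incident.Approved = true;
        return Storage.Update(incident);
    }
}
```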

The options so far for “OnMissing” behavior are these:

public enum OnMissing
{
    /// <summary>
    /// Default behavior. In a message handler, the execution will just stop after logging that the data was missing. In an HTTP
    /// endpoint the request will stop w/ an empty body and 404 status code
    /// </summary>
    Simple404,
    
    /// <summary>
    /// In a message handler, the execution will log that the required data is missing and stop execution. In an HTTP
    /// endpoint the request will stop w/ a 400 response and a ProblemDetails body describing the missing data
    /// </summary>
    ProblemDetailsWith400,
    
    /// <summary>
    /// In a message handler, the execution will log that the required data is missing and stop execution. In an HTTP
    /// endpoint the request will stop w/ a 404 status code response and a ProblemDetails body describing the missing data
    /// </summary>
    ProblemDetailsWith404,
    
    /// <summary>
    /// Throws a RequiredDataMissingException using the MissingMessage
    /// </summary>
    ThrowException
}

The Future

This new improvement to declarative data access is meant to be part of a bigger effort to address some larger use cases. Not every command or query involves just one single entity lookup or one single Marten event stream, so what do you do when there are multiple declarations for data lookups?

I’m not sure what everyone else’s experience is, but a leading cause of performance problems in the systems I’ve helped with over the past decade has been too much chattiness between the application servers and the database. The next step with the declarative data access is to have at least the Marten integration opt into using Marten’s batch querying mechanism to improve performance by batching up requests in fewer network round trips any time there are multiple data lookups in a single HTTP endpoint or message handler.

The step after that is to also enroll our Marten integration for command handlers so that you can craft message handlers or HTTP endpoints that work against 2 or more event streams with strong consistency and transactional support while also leveraging the Marten batch querying for all the efficiency we can wring out of the tooling. I mostly want to see this behavior because I’ve seen clients who could actually use what I was just describing as a way to make their systems more efficient and remove some repetitive code.

I’ll also admit that I think this capability to have an alternative “aggregate handler workflow” that allows you to work efficiently with more than one event stream and/or projected aggregate at one time would put the Critter Stack ahead of the commercial tools pursuing “Dynamic Consistency Boundaries” with what I’ll be arguing is an easier to use alternative.

It’s already possible to work transactionally with multiple event streams at one time with strong consistency and both optimistic and exclusive version protections, but there’s opportunity for performance optimization here.

Summary

Pride goeth before destruction, and an haughty spirit before a fall.

Proverbs 16:18 in the King James version

With the quote above out of the way, let’s jump into some cocky salesmanship! My hope and vision for the Critter Stack is that it becomes the most effective tooling for building typical server side software systems. My personal vision and philosophy for making software development more productive and effective over time is to ruthlessly reduce repetitive code and eliminate code ceremony wherever possible. Our community’s take is that we can achieve improved results compared to more typical Clean/Onion/Hexagonal Architecture codebases by compressing and compacting code down without ever sacrificing performance, resiliency, or testability.

The declarative persistence helpers in this article are, I believe, a nice example of the evolving “Critter Stack Way.”

Low Ceremony Railway Programming with Wolverine

Railway Programming is an idea that came out of the F# community as a way to handle “sad path” cases without having to resort to throwing .NET Exceptions as a way of doing flow control. It works by chaining together functions with a standardized response type in such a way that it’s relatively easy to abort workflows as preliminary steps are found to be invalid, while still passing the results of the preceding function as the input into the next function.

Wolverine has some direct support for a quasi-Railway Programming approach by moving validation or data loading steps prior to the main message handler or HTTP endpoint logic. Let’s jump into a quick sample that works with either message handlers or HTTP endpoints using the built in HandlerContinuation enum:

public static class ShipOrderHandler
{
    // This would be called first
    public static async Task<(HandlerContinuation, Order?, Customer?)> LoadAsync(ShipOrder command, IDocumentSession session)
    {
        var order = await session.LoadAsync<Order>(command.OrderId);
        if (order == null)
        {
            return (HandlerContinuation.Stop, null, null);
        }

        var customer = await session.LoadAsync<Customer>(command.CustomerId);

        return (HandlerContinuation.Continue, order, customer);
    }

    // The main method becomes the "happy path", which also helps simplify it
    public static IEnumerable<object> Handle(ShipOrder command, Order order, Customer customer)
    {
        // use the command data, plus the related Order & Customer data to
        // "decide" what action to take next

        yield return new MailOvernight(order.Id);
    }
}

By naming convention (though you can override the method naming with attributes as you see fit), Wolverine will try to generate code that calls methods named Before/Validate/Load(Async) before the main message handler method or the HTTP endpoint method. You can use this compound handler approach to do setup work like loading data required by business logic in the main method, or, as in this case, validation logic that can stop further processing based on failed validation, data requirements, or system state. Some Wolverine users like using these methods to keep the main methods relatively simple and focused on the “happy path” and business logic in pure functions that are easier to unit test in isolation.
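That testability claim is easy to demonstrate. A unit test for the Handle() method above needs no mocks or infrastructure at all. Note that the exact shapes of ShipOrder, Order, Customer, and MailOvernight are assumptions here, since they aren’t shown in the sample:

```csharp
public class ShipOrderHandlerTests
{
    [Fact] // xUnit
    public void happy_path_decides_to_mail_overnight()
    {
        // Plain objects in, messages out -- no database, no container
        var order = new Order { Id = Guid.NewGuid() };
        var customer = new Customer { Id = Guid.NewGuid() };
        var command = new ShipOrder(order.Id, customer.Id);

        var messages = ShipOrderHandler.Handle(command, order, customer).ToList();

        // The pure function "decided" to send exactly one overnight mail message
        Assert.Single(messages.OfType<MailOvernight>());
    }
}
```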

By returning a HandlerContinuation value either by itself or as part of a tuple returned by a Before, Validate, or LoadAsync method, you can direct Wolverine to stop all other processing.

You have more specialized ways of doing that in HTTP endpoints by using the ProblemDetails specification, like this example where a Validate() method can stop processing with a descriptive 400 and error message:

public record CategoriseIncident(
    IncidentCategory Category,
    Guid CategorisedBy,
    int Version
);

public static class CategoriseIncidentEndpoint
{
    // This is Wolverine's form of "Railway Programming"
    // Wolverine will execute this before the main endpoint,
    // and stop all processing if the ProblemDetails is *not*
    // "NoProblems"
    public static ProblemDetails Validate(Incident incident)
    {
        return incident.Status == IncidentStatus.Closed 
            ? new ProblemDetails { Detail = "Incident is already closed" } 
            
            // All good, keep going!
            : WolverineContinue.NoProblems;
    }
    
    // This tells Wolverine that the first "return value" is NOT the response
    // body
    [EmptyResponse]
    [WolverinePost("/api/incidents/{incidentId:guid}/category")]
    public static IncidentCategorised Post(
        // the actual command
        CategoriseIncident command, 
        
        // Wolverine is generating code to look up the Incident aggregate
        // data for the event stream with this id
        [Aggregate("incidentId")] Incident incident)
    {
        // This is a simple case where we're just appending a single event to
        // the stream.
        return new IncidentCategorised(incident.Id, command.Category, command.CategorisedBy);
    }
}

The value WolverineContinue.NoProblems tells Wolverine that everything is good, full speed ahead. Anything else will write the ProblemDetails value out to the response, return a 400 status code (or whatever you decide to use), and stop processing. Returning a ProblemDetails object hopefully makes these filter methods easy to unit test themselves.
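For instance, a couple of tests for the Validate() method above could be as simple as the following sketch. This assumes xUnit, that the sample’s Incident type has a settable Status plus an Open status value that isn’t shown, and that WolverineContinue.NoProblems is a singleton instance that can be compared by reference:

```csharp
public class CategoriseIncidentEndpointTests
{
    [Fact]
    public void closed_incidents_are_rejected_with_a_problem()
    {
        var incident = new Incident { Status = IncidentStatus.Closed };

        var problems = CategoriseIncidentEndpoint.Validate(incident);

        // Anything other than WolverineContinue.NoProblems halts the request
        Assert.NotSame(WolverineContinue.NoProblems, problems);
        Assert.Equal("Incident is already closed", problems.Detail);
    }

    [Fact]
    public void open_incidents_are_allowed_through()
    {
        var incident = new Incident { Status = IncidentStatus.Open };

        Assert.Same(WolverineContinue.NoProblems, CategoriseIncidentEndpoint.Validate(incident));
    }
}
```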

You can also use the ASP.NET Core IResult as another formally supported “result” type in these filter methods, as shown below:

public static class ExamineFirstHandler
{
    public static bool DidContinue { get; set; }
    
    public static IResult Before([Entity] Todo2 todo)
    {
        return todo != null ? WolverineContinue.Result() : Results.Empty;
    }

    [WolverinePost("/api/todo/examinefirst")]
    public static void Handle(ExamineFirst command) => DidContinue = true;
}

In this case, the “special” value WolverineContinue.Result() tells Wolverine to keep going, otherwise, Wolverine will execute the IResult returned from one of these filter methods and stop all other processing for the HTTP request.

It’s maybe a shameful approach for folks who are more in line with a Functional Programming philosophy, but you could also use a signature like:

[WolverineBefore]
public static UnauthorizedHttpResult? Authorize(SomeCommand command, ClaimsPrincipal user)

In the case above, Wolverine will do nothing if the return value is null, but will execute the UnauthorizedHttpResult response if there is one, and stop any further processing. There is *some* minor value to expressing the actual IResult type above because that can be used to help generate OpenAPI metadata.
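A plausible body for that signature might look like the following sketch. The “approver” role check and the SomeCommand type are invented for illustration, but TypedResults.Unauthorized() really does return an UnauthorizedHttpResult:

```csharp
[WolverineBefore]
public static UnauthorizedHttpResult? Authorize(SomeCommand command, ClaimsPrincipal user)
{
    // Null means "carry on"; any non-null IResult is executed
    // and stops the rest of the HTTP request processing
    return user.IsInRole("approver")
        ? null
        : TypedResults.Unauthorized();
}
```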

Lastly, let’s think about the very common need to write an HTTP endpoint where you want to return a 404 status code if the requested data doesn’t exist. In many cases the API user is supplying the identity value for an entity, and your HTTP endpoint will first query for that data, and if it doesn’t exist, abort the processing with the 404 status code. Wolverine has some built in help for this tedious task through its unique persistence helpers as shown in this sample HTTP endpoint below:

    [WolverineGet("/orders/{id}")]
    public static Order GetOrder([Entity] Order order) => order;

Note the presence of the [Entity] attribute for the Order argument to this HTTP endpoint route. That’s telling Wolverine that that data should be loaded using the “id” route argument as the Order key from whatever persistence mechanism in your application deals with the Order entity, which could be Marten of course, an EF Core DbContext that has a mapping for Order, or Wolverine’s RavenDb integration. Unless we purposely mark [Entity(Required = false)], Wolverine.HTTP will return a 404 status code if the Order entity does not exist. The simplistic sample from Wolverine’s test suite above doesn’t do any kind of mapping from the raw Order to a view model, but the mechanics of the [Entity] loading would work equally well if you also mapped the raw Order to some kind of OrderViewModel.

Last Thoughts

I’m pushing Wolverine users and JasperFx clients to utilize Wolverine’s quasi Railway Programming capabilities as guard clauses to better separate validation or error condition handling into easily spotted, atomic operations while reducing the core HTTP request or message handler to a “happy path” operation. That’s especially valuable in HTTP services, where the ProblemDetails specification and its integration with Wolverine fit well with this pattern, and where I’d expect many HTTP client tools to already know how to work with problem details responses.

There have been a few attempts to adapt Railway Programming to C# that I’m aware of, inevitably using some kind of custom Result type that denotes success or failure along with the actual results for the next function. I’ve seen some folks and OSS tools try to chain functions together with nested lambda functions within a fluent interface. I’m not a fan of any of this because I think the custom Result types just add code noise and extra mechanical work, and the fluent interface approach can easily be nasty to debug and detracts from readability with the extra code noise. But anyway, read a lot more about this in Andrew Lock’s Series: Working with the result pattern and make up your own mind.

I’ve also seen an approach where folks used MediatR handlers for each individual step in the “railway” where each handler had to return a custom Result type with the inputs for the next handler in the series. I beg you, please don’t do this in your own system because that leads to way too much complexity, code that’s much harder to reason about because of the extra hoops and indirection, and potentially poor system performance because again, you can’t see what the code is doing and you can easily end up making unnecessarily duplicate database round trips or just being way too “chatty” to the database. And no, replacing MediatR handlers with Wolverine handlers is not going to help because the pattern was the problem and not MediatR itself.

As always, the Wolverine philosophy is that the path to long term success in enterprise-y software systems is by relentlessly eliminating code ceremony so that developers can better reason about how the system’s logic and behavior works. To a large degree, Wolverine is a reaction to the very high ceremony Clean/Onion Architecture/iDesign architectural approaches of the past 15-20 years and how hard those systems can be to deal with over time.

And as happens with just about any halfway good thing in programming, some folks overused the Railway Programming idea and there’s a little bit of pushback or backlash to the technique. I can’t find the quote to give it the real attribution, but something I’ve heard Martin Fowler say is that “we don’t know how useful an idea really can be until we push it too far, then pull back a little bit.”

Marten 8.0, Wolverine 4.0, and even Lamar 15.0 are out!

It’s a pretty big “Critter Stack” community release day today, as:

  1. Marten has its 8.0 release
  2. Wolverine got a 4.0 release
  3. Lamar, the spiritual successor to StructureMap, had a corresponding 15.0 release
  4. And underneath those tools, the new JasperFx & JasperFx.Events library went 1.0 and the supporting Weasel library that provides some low level functionality went 8.0

Before getting into the highlights, let me start by thanking the Critter Stack Core team for all their support, contributions to both the code and documentation, and for being a constant sounding board for me and source of ideas and advice:

Next, I’d like to thank our Critter Stack community for all the interest and the continuous help we get with suggestions, pull requests that improve the tools, and especially for the folks who take the time to create actionable bug reports because that’s half the battle of getting problems fixed. And while there are plenty of days when I wish there wasn’t a veritable pack of raptors prowling around probing for weaknesses in the projects, I cannot overstate the importance for an OSS project of having user and community feedback.

Alright, on to some highlights.

The big changes are that we consolidated several smaller shared libraries into one bigger shared JasperFx library and also combined some smaller libraries like Marten.CommandLine, Weasel.CommandLine, and Lamar.Diagnostics into Marten, Weasel, and Lamar respectively. That’s hopefully going to help folks get to command line utilities quicker and easier, and the Critter Stack tools do get some value out of those command line utilities.

We’ve now got a shared model to configure behavioral differences at “Development” vs “Production” time for both Marten and Wolverine all at one time like this:

// These settings would apply to *both* Marten and Wolverine
// if you happen to be using both
builder.Services.CritterStackDefaults(x =>
{
    x.ServiceName = "MyService";
    x.TenantIdStyle = TenantIdStyle.ForceLowerCase;
    
    // You probably won't have to configure this often,
    // but if you do, this applies to both tools
    x.ApplicationAssembly = typeof(Program).Assembly;
    
    x.Production.GeneratedCodeMode = TypeLoadMode.Static;
    x.Production.ResourceAutoCreate = AutoCreate.None;

    // These are defaults, but showing for completeness
    x.Development.GeneratedCodeMode = TypeLoadMode.Dynamic;
    x.Development.ResourceAutoCreate = AutoCreate.CreateOrUpdate;
});

It might be awhile before this pays off for us, but everything from the last couple paragraphs is also meant to speed up the development of additional Event Sourcing “Critter” tools to expand beyond PostgreSQL — not that we’re even slightly backing off our investment in the do everything PostgreSQL database!

For Marten 8.0, we’ve done a lot to make projections easier to use with explicit code, and added a new Stream Compacting feature for yet more scalability.

For Wolverine 4.0, we’ve improved Wolverine’s ability to support modular monolith architectures that might utilize multiple Marten stores or EF Core DbContext services targeting the same database or even different databases. More on this soon.

Wolverine 4.0 also gets some big improvements for EF Core users with a new Multi-Tenancy with EF Core feature.

Both Wolverine and Marten got some streamlined Open Telemetry span naming changes that were suggested by Pascal Senn of ChiliCream who collaborates with JasperFx for a mutual client.

For both Wolverine and Lamar 15, we added fuller support for the [FromKeyedServices] attribute and “keyed services” in the .NET Core DI abstractions, like this for a Wolverine handler:

    // From a test, just showing that you *can* do this
    // *Not* saying you *should* do that very often
    public static void Handle(UseMultipleThings command, 
        [FromKeyedServices("Green")] IThing green,
        [FromKeyedServices("Red")] IThing red)
    {
        green.ShouldBeOfType<GreenThing>();
        red.ShouldBeOfType<RedThing>();
    }

And inside of Lamar itself, any constructor parameter decorated like this will be resolved by key:

// Lamar will inject the IThing w/ the key "Red" here
public record ThingUser([FromKeyedServices("Red")] IThing Thing);

Granted, Lamar already had its own version of keyed services and even an equivalent to the [FromKeyedServices] attribute long before this was added to the .NET DI abstractions and the ServiceProvider conforming container, but .NET is Microsoft’s world and lowly OSS projects pretty well have to conform to their abstractions sometimes.
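If you haven’t used keyed services before, the handler and constructor samples above both depend on keyed registrations made through the stock .NET DI APIs. Here’s a minimal sketch, where IThing, GreenThing, and RedThing are just the placeholder types from the test code:

```csharp
using Microsoft.Extensions.DependencyInjection;

public interface IThing { }
public class GreenThing : IThing { }
public class RedThing : IThing { }

public static class Registrations
{
    public static void Configure(IServiceCollection services)
    {
        // Keyed registrations through the stock .NET DI abstractions.
        // Both Lamar and Wolverine resolve these by matching the key
        // in a [FromKeyedServices("...")] parameter attribute
        services.AddKeyedSingleton<IThing, GreenThing>("Green");
        services.AddKeyedSingleton<IThing, RedThing>("Red");
    }
}
```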

Just for the record, StructureMap had an equivalent to keyed services in its first production release way back in 2004 back when David Fowler was probably in middle school making googly eyes at Rihanna.

What’s Next for the Critter Stack?

Honestly, I had to cut some corners on documentation to get the releases out for a JasperFx Software client, so I’ll be focused on that for most of this week. And of course, plenty of open issues and some outstanding pull requests didn’t make the release, so those hopefully get addressed in the next couple minor releases.

For the bigger picture, I think the rest of this year is:

  1. “CritterWatch”, our long planned, not moving fast enough for my taste, management and observability console for both Marten and Wolverine.
  2. Improvements to Marten’s performance and scalability for Event Sourcing. We did a lot in that regard last year throughout Marten 7.*, but there’s another series of ideas to increase the throughput even further.
  3. Wolverine is getting a lot of user contributions right now, and I expect that especially the asynchronous messaging support will continue to grow. I would like to see us add CosmosDb support to Wolverine by the end of the year. By and large, I would like to increase Wolverine’s community usage overall by trying to grow the tool beyond just folks already using Marten — but the Marten + Wolverine combination will hopefully continue to improve.
  4. More Critters? We’re still talking about a SQL Server backed Event Store, with CosmosDb being a later alternative.

Wrapping Up

As for the wisdom of ever again making a release cycle where the entire Critter Stack has a major release at the exact same time, this:

Finally, a lot of things didn’t make the release that folks wanted, heck that I wanted, but at some point it becomes expensive for a project to have a long running branch for “vNext” and you have to make the release. I’m hopeful that even though these major releases didn’t add a ton of new functionality that they set us up with the right foundation for where the tools go next.

I also know that folks will have plenty of questions and probably even inevitably run into problems or confusion with the new releases — especially until we can catch up on documentation — but I stole time from the family to get this stuff out this weekend and I’ll probably not be able to respond to anyone but JasperFx customers on Monday. Finally, in the meantime, right after every big push, I promise to start responding to whatever problems folks will have, but:

Critter Stack Work in Progress

It’s just time for an update from my last post on Critter Stack Roadmap Update for February as the work has progressed in the past weeks and we have more clarity on what’s going to change.

Work is heavily underway right now for a round of related releases in the Critter Stack (Marten, Wolverine, and other tools) I was originally calling “Critter Stack 2025” involving these tools:

Ermine for Event Sourcing with SQL Server

“Ermine” is our next full fledged “Critter” that’s been a long planned port of a significant subset of Marten’s functionality to targeting SQL Server. At this point, the general thinking is:

  • Focus on porting the Event Sourcing functionality from Marten
  • Quite possibly build around the JSON field support in EF Core and utilize EF Core under the covers. Maybe.
  • Use a new common JasperFx.Events library that will contain the key abstractions, metadata tracking, and even projection support. This new library will be shared between Marten, Ermine, and theoretical later “critters” targeting CosmosDb or DynamoDb down the line
  • Maybe try to lift out more common database handling code from Marten, but man, there’s more differences between PostgreSQL and SQL Server than I think people understand and that might turn into a time sink
  • Support the same kind of “aggregate handler workflow” integration with Wolverine as we have with Marten today, and probably try to do this with shared code, but that’s just a detail

Is this a good idea to do at all? We’ll see. The work to generalize the Marten projection support has been a time sink so far. I’ve been told by folks for a decade that Marten should have targeted SQL Server, and that supporting SQL Server would open up a lot more users. I think this is a bit of a gamble, but I’m hopeful.

JasperFx Dependency Consolidation

Most of the little, shared foundational elements of Marten, Wolverine, and soon to be Ermine have been consolidated into a single JasperFx library. That now includes what was:

  1. JasperFx.Core (which in turn was renamed from “Baseline” after someone else squatted on that name and in turn was imported from ancient FubuCore for long term followers of mine)
  2. JasperFx.CodeGeneration
  3. The command line discovery, parsing, and execution model that is in Oakton today. That might be a touch annoying for the initial conversion, but in the slightly longer term that’s allowed us to combine several NuGet packages and simplify the project structure overall. TL;DR: fewer NuGets to install going forward.

Marten 8.0

I hope that Marten 8.0 is a much smaller release than Marten 7.0 was last year, but the projection model changes are turning out to be substantial. So far, this work has been done:

  • .NET 6/7 support has been dropped and the dependency tree simplified after that
  • Synchronous database access APIs have been eliminated
  • All other API signatures that were marked as [Obsolete] in the latest versions of Marten 7.* were removed
  • Marten.CommandLine was removed altogether, but the “db-*” commands are available as part of Marten’s dependency tree with no difference in functionality from the “marten-*” commands
  • Upgraded to the latest Npgsql 9

The projection subsystem overhaul is ongoing and substantial and frankly I’m kind of expecting Vizzini to show up in my home office and laugh at me for starting a land war in Southeast Asia. For right now I’ll just say that the key goals are:

  • The aforementioned reuse with Ermine and potential other Event Store implementations later
  • Making it as easy as possible to write projections with explicit code, as desired, in addition to the existing conventional Apply / Create methods
  • Eliminate code generation for just the projections
  • Simplify the usage of “event slicing” for grouping events in multi-stream projections. I’m happy how this is shaping up so far, and I think this is going to end up being a positive after the initial conversion
  • Improve the throughput of the async daemon

There’s also a planned “stream compacting” feature happening, but it’s too early to talk about that much. Depending on how the projection work goes, there may be other performance related work as well.

Wolverine 4.0

Wolverine 4.0 is mostly about accommodating the work in other products, but there are some changes. Here’s what’s already been done:

  • Dropped .NET 7 support
  • Significant work to let a single application use multiple databases, for folks getting clever with modular monoliths. In Wolverine 4.*, you’ll be able to mix and match any number of data stores with the corresponding transactional inbox/outbox support much better than Wolverine 3.* can. This is mostly about modular monoliths, but it also fits into the CritterWatch work
  • Work to provide information to CritterWatch

There are some other important features that might be part of Wolverine 4.0 depending on some ongoing negotiations with a potential JasperFx customer.

CritterWatch Minimal Viable Product Direction

“CritterWatch” is a long planned commercial add-on product for Wolverine, Marten, and any future “critter” Event Store tools. The goal is to create both a management and monitoring dashboard for Wolverine messaging and the Event Sourcing processes in those systems.

The initial concept is shown below:

At least for the moment, the goal of the CritterWatch MVP is to deliver a standalone system that can be deployed either in the cloud or on a client’s premises. The MVP functionality set will:

  • Explain the configuration and capabilities of all your Critter Stack systems, including some visualization of how messages flow between your systems and the state of any event projections or subscriptions
  • Work with your OpenTelemetry tracking to correlate ongoing performance information to the artifacts in your system.
  • Visualize any ongoing event projections or subscriptions by telling you where each is running and how healthy they are — as well as give you the ability to pause, restart, rebuild, or rewind them as needed
  • Manage the dead letter queue (DLQ) messages of your system with the ability to query the messages and selectively replay or discard them

We have a world of other plans for CritterWatch, but the feature set above represents the most requested features from the companies that are most interested in this tool.

Projections, Consistency Models, and Zero Downtime Deployments with the Critter Stack

This content will later be published as a tutorial somewhere on one of our documentation websites. This was originally “just” an article on doing blue/green deployments when using projections with Marten, hence the two martens up above. :)

Event Sourcing may not seem that complicated to implement, and you might be tempted to forego any kind of off-the-shelf tooling and just roll your own. Appending events to storage isn’t all that difficult by itself, but you’ll almost always need projections of some sort to derive the system state in a usable way, and that’s a whole can of complexity worms: consistency models, concurrency, performance, snapshotting, and the inevitable need to change a projection in a deployment down the road.

Fortunately, the full combination of Marten and Wolverine (the “Critter Stack”) for Event Sourcing architectures gives you powerful options to cover a variety of projection scenarios and needs. Marten by itself provides multiple ways to achieve strongly consistent projected data when you have to have that. When you prefer or truly need eventual consistency instead for certain projections, Wolverine helps Marten scale up to larger data loads by distributing the background work that Marten does for asynchronous projection building. Moreover, when you put the two tools together, the Critter Stack can support zero downtime deployments that involve projections rebuilds without sacrificing strong consistency for certain types of projections.

Consistency Models in Marten

One of the decision points in building projections is determining for each individual projection view whether you need strong consistency where the projected data is guaranteed to match the current state of the persisted events, or if it would be preferable to rely on eventual consistency where the projected data might be behind the current events, but will “eventually” be caught up. Eventual consistency might be attractive because there are definite performance advantages to moving some projection building to an asynchronous, background process (Marten’s async daemon feature). Besides the performance benefits, eventual consistency might be necessary to accommodate cases where highly concurrent system inputs would make it very difficult to update projection data within command handling without either risking data loss or applying events out of sequential order.

Marten supports three projection lifecycles that we’ll explore throughout this paper:

  1. “Live” projections are calculated in memory by fetching the raw events and building up an aggregated view. Live projections are strongly consistent.
  2. “Inline” projections are persisted in the Marten database, and the projected data is updated as part of the same database transaction whenever any events are appended. Inline projections are also strongly consistent.
  3. “Async” projections are continuously built and updated in the database as new events come in, by a background process in Marten called the “Async Daemon”. On its face this is obviously eventual consistency, but there’s a technical wrinkle where Marten can “fast forward” asynchronous projections to still be strongly consistent on demand.

For Inline or Async projections, the projected data is being persisted to Marten using its document database capabilities and that data is available to be loaded through all of Marten’s querying capabilities, including its LINQ support. Writing “snapshots” of the projected data to the database also has an obvious performance advantage when it comes to reading projection state, especially if your event streams become too long to do Live aggregations on demand.
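To make that concrete, here’s a hedged sketch of querying persisted snapshots; Incident and its Status property are assumptions borrowed from the samples later in this post:

```csharp
public static async Task query_snapshots(IQuerySession session)
{
    // Inline or Async snapshots are stored as plain Marten documents,
    // so the full LINQ support applies to them
    var pending = await session.Query<Incident>()
        .Where(x => x.Status == IncidentStatus.Pending)
        .ToListAsync();
}
```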

Now let’s talk about some common projection scenarios and how you should choose projection lifecycles for these scenarios:

A “write model” projection represents a single event stream for a logical business entity or workflow like an “Invoice” or an “Order”, carrying all the information your command handlers need to “decide” how to process incoming commands. You will almost certainly need this data to be strongly consistent with the events during command processing. I think a Live lifecycle is a perfectly good default to start with, and you might even move to Inline if you want snapshotting for longer event streams, but Marten also lets you use Async through its FetchForWriting() API, as shown below in this sample MVC controller that acts as a command handler (the “C” in CQRS):

    [HttpPost("/api/incidents/categorise")]
    public async Task<IActionResult> Post(
        CategoriseIncident command,
        IDocumentSession session,
        IValidator<CategoriseIncident> validator)
    {
        // Some validation first
        var result = await validator.ValidateAsync(command);
        if (!result.IsValid)
        {
            return Problem(statusCode: 400, detail: result.Errors.Select(x => x.ErrorMessage).Join(", "));
        }

        var userId = currentUserId();

        // This will give us access to the projected current Incident state for this event stream
        // regardless of whatever the projection lifecycle is!
        var stream = await session.Events.FetchForWriting<Incident>(command.Id, command.Version, HttpContext.RequestAborted);
        if (stream.Aggregate == null) return NotFound();
        
        if (stream.Aggregate.Category != command.Category)
        {
            stream.AppendOne(new IncidentCategorised
            {
                Category = command.Category,
                UserId = userId
            });
        }

        await session.SaveChangesAsync();

        return Ok();
    }

The FetchForWriting() API is the recommended way to write command handlers that need a “write model” in order to potentially append new events. FetchForWriting lets you opt into optimistic concurrency protection, which you probably want in order to guard against concurrent access to the same event stream. Just as importantly, FetchForWriting completely encapsulates whatever projection lifecycle we’re using for the Incident write model above. If Incident is registered as:

  • Live, then this API does a live aggregation in memory
  • Inline, then this API just loads the persisted snapshot out of the database similar to IQuerySession.LoadAsync<Incident>(id)
  • Async, then this API does a “catch up” model for you by fetching — in one database round trip mind you! — the last persisted snapshot of the Incident and any captured events to that event stream after the last persisted snapshot, and incrementally applies the extra events to effectively “advance” the Incident to reflect all the current events captured in the system.

The takeaway here is that you can have the strongly consistent model you need for command handlers with concurrent access protections and be able to use any projection lifecycle as you see fit. You can even change lifecycles later without having to make code changes!
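To illustrate, the lifecycle is just an option on the projection registration in your application bootstrapping, so the change is confined to configuration. IncidentProjection here is the same sample projection shown later in this post:

```csharp
// The lifecycle is chosen at registration time, so switching
// Incident from Inline to Async later is a one-line change, and
// the FetchForWriting() call sites are completely unaffected
opts.Projections.Add<IncidentProjection>(ProjectionLifecycle.Inline);

// ...and later, when the event streams grow long:
// opts.Projections.Add<IncidentProjection>(ProjectionLifecycle.Async);
```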

In the next section I’ll discuss how that “catch up” ability will allow you to make zero downtime deployments with projection changes.

I didn’t want to use any “magic” in the code sample above to discuss the FetchForWriting API in Marten, but do note that Wolverine’s “aggregate handler workflow” approach to streamlined command handlers utilizes Marten’s FetchForWriting API under the covers. Likewise, Wolverine has some other syntactic sugar for more easily using Marten’s FetchLatest API.

A “read model” projection for a single stream again represents the state of a logical business entity or workflow, but this time optimized for whatever data a user interface or query endpoint of your system needs. You might be okay in some circumstances getting away with eventually consistent data for your “read model” projections, but for the sake of this article let’s say you do want strongly consistent information for them. There’s also a slightly lighter API called FetchLatest in Marten for fetching a read-only view of a projection (this only works with a single stream projection in case you’re wondering):

public static async Task read_latest(
    // Watch this, only available on the full IDocumentSession
    IDocumentSession session,
    Guid invoiceId)
{
    var invoice = await session
        .Events.FetchLatest<Projections.Invoice>(invoiceId);
}

Our third common projection role is simply having a projected view for reporting. This kind of projection may incorporate information from outside of the event data as well, combine information from multiple “event streams” into a single document or record, or even cross over between logical types of event streams. At this point it’s not really possible to do Live aggregations like this, and an Inline projection lifecycle would be problematic if there was any level of concurrent requests that impact the same “multi-stream” projection state. You’ll pretty well have to use the Async lifecycle and accept some level of eventual consistency.

It’s beyond the scope of this paper, but there are ways to “wait” for an asynchronous projection to catch up or to take “side effect” actions whenever an asynchronous projection is being updated in a background process.
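As one example of the “wait” option, Marten exposes a helper on IDocumentStore that blocks until the async daemon has caught up to the current position of the event store. It’s mostly intended for integration testing:

```csharp
public static async Task wait_for_catch_up(IDocumentStore store)
{
    // Wait (up to a timeout) for all registered async projections
    // to catch up to the current high water mark of the event store
    await store.WaitForNonStaleProjectionDataAsync(TimeSpan.FromSeconds(5));
}
```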

I should note that “read model” and “write model” are just roles within your system, and it’s going to be common to get by with a single model that happily plays both roles in simpler systems, but don’t hesitate to use separate projection representations of the same events if the consumers of your system’s data just have very different needs.

Persisting the snapshots comes with a potentially significant challenge when there is inevitably some reason why the projection data has to be rebuilt as part of a deployment. Maybe it’s because of a bug, new business requirements, a change in how your system calculates a metric from the event data, or even just adding an entirely new projection view of the same old event data — but the point is, that kind of change is pretty likely and it’s more reliable to plan for change rather than depend on being perfect upfront in all of your event modeling.

Fortunately, Marten with some serious help from Wolverine, has some answers for that!

There’s also an option to write projected data to “flat” PostgreSQL tables as you see fit.

Zero Downtime with Blue / Green Deployments

As I alluded to just above, one of the biggest challenges with systems using event sourcing is what happens when you need to deploy changes that involve projection changes that will require rebuilding persisted data in the database. As a community we’ve invested a lot of time into making the projection rebuild process smoother and faster, but there’s admittedly more work yet to come.

Instead of requiring some system downtime in order to do projection rebuilds before a new deployment though, the Critter Stack can now do a true “blue / green” deployment where both the old and new versions of the system and even versioned projections can run in parallel as shown below:

Let’s rewind a little bit and talk about how to make this happen, because it is a little bit of a multi-step process.

First off, try to only use FetchForWriting() or FetchLatest() when you need strongly consistent access to any kind of single stream projection (definitely “write model” projections and probably “read model” projections as well).

Next, if you need to make some kind of breaking changes to a projection of any kind, use the ProjectionVersion property and increment it to the next version like so:

// This class contains the directions for Marten about how to create the
// Incident view from the raw event data
public class IncidentProjection: SingleStreamProjection<Incident>
{
    public IncidentProjection()
    {
        // THIS is the magic sauce for side by side execution
        // in blue/green deployments
        ProjectionVersion = 2;
    }

    public static Incident Create(IEvent<IncidentLogged> logged) =>
        new(logged.StreamId, logged.Data.CustomerId, IncidentStatus.Pending, Array.Empty<IncidentNote>());

    public Incident Apply(IncidentCategorised categorised, Incident current) =>
        current with { Category = categorised.Category };

    // More event type handling...
}

By incrementing the projection version, we’re effectively making this a completely new projection in the application that will use completely different database tables for the Incident projection version 1 and version 2. This allows the “blue” nodes running the starting version of our application to keep chugging along using the old version of Incident while “green” nodes running the new version of our application can be running completely in parallel, but depending on the new version 2 of the Incident projection.

You will also need to make every single newly revised projection run under the Async lifecycle. As we discussed earlier, the FetchForWriting API is able to “fast forward” a single Incident write model projection as needed for command processing, so our “green” nodes will be able to handle commands against Incident event streams with the correct system state. Admittedly, the system might run a little slower until the asynchronous Incident V2 projection gets caught up, but “slower” is arguably much better than “down”.

With the case of multi-stream projections (our reports), there is no equivalent to FetchLatest, so we’re stuck with eventual consistency. What you can at least do is deploy some “green” nodes with the new version of the system and the revisioned projections and let it start building the new projections from scratch as it starts — but not allow those nodes to handle outside requests until the new versions of the projection are “close” to being caught up to the current event store.

Now, the next question is “how does Marten know to only run the ‘green’ versions of the projections on ‘green’ nodes and make sure that every single projection + version combination is running somewhere?”

While there are plenty of nice to have features that the Wolverine integration with Marten brings for the coding model, this next step is absolutely mandatory for the blue/green approach. In our application, we need to use Wolverine to distribute the background projection processes across our entire application cluster:

// This would be in your application bootstrapping
opts.Services.AddMarten(m =>
    {
        // Other Marten configuration

        m.Projections.Add<IncidentProjection>(ProjectionLifecycle.Async);

    })
    .IntegrateWithWolverine(m =>
    {
        // This makes Wolverine distribute the registered projections
        // and event subscriptions evenly across a running application
        // cluster
        m.UseWolverineManagedEventSubscriptionDistribution = true;
    });

Referring back to the diagram from above, that option above enables Wolverine to distribute projections to running application nodes based on each node’s declared capabilities. This also tries to evenly distribute the background projections so they’re spread out over the running service nodes of our application for better scalability instead of only running “hot/cold” like earlier versions of Marten’s async daemon did.

As “blue” nodes are pulled offline, it’s safe to drop the Marten table storage for the projection versions that are no longer used. Sorry, but at this point there’s nothing built into the Critter Stack to automate that, though you can easily do it against PostgreSQL by itself with pure SQL.

Summary

This is a powerful set of capabilities that can be valuable in real-life, grown systems that utilize Event Sourcing and CQRS with the Critter Stack, but I think we as a community have failed until now to put all of this content together in one place to unlock its usage by more people.

I am not aware of any other Event Sourcing tool in .NET or any other technical ecosystem for that matter that can match Marten & Wolverine’s ability to support this kind of potentially zero downtime deployment model. I’ve also never seen another Event Sourcing tool that has something like Marten’s FetchForWriting and FetchLatest APIs. I definitely haven’t seen any other CQRS tooling enable your application code to be as streamlined as the Critter Stack’s approach to CQRS and Event Sourcing.

I hope the key takeaway here is that Marten is a mature tool that’s been beaten on by real people building and maintaining real systems, and that it already solves challenging technical issues in Event Sourcing. Lastly, Marten is the most commonly used Event Sourcing tool for .NET as is, and I’m very confident in saying it has by far the most complete and robust feature set while also having a very streamlined getting started experience.

So this was meant to be a quick win blog post that I was going to bang out at the kitchen table after dinner last night, but instead took most of the next day. The Critter Stack core team is working on a new set of tutorials for both Marten and Wolverine, and this will hopefully take its place with that new content soon.

Pretty Substantial Wolverine 3.11 Release

The Critter Stack community just made a pretty big Wolverine 3.11 release earlier today with 5 brand new contributors making their first pull requests! The highlights are:

  • Efficiency and throughput improvements for publishing messages through the Kafka transport
  • Hopefully more resiliency in the Kafka transport
  • A fix for object disposal mechanics that probably got messed up in the 3.0 release (oops on my part)
  • Improvements for the Azure Service Bus transport’s ability to handle larger message batches
  • New options for the Pulsar transport
  • Expanded ability for interop with non-Wolverine services with the Google Pubsub transport
  • Some fixes for Wolverine.HTTP

Wolverine 4.0 is also under way, but there will be at least some Wolverine.HTTP improvements in the 3.* branch before we get to 4.0.

Big thanks to the whole Critter Stack community for continuing to support Wolverine, including the folks who took the time to create actionable bug reports that led to several of the fixes and the folks who made fixes to the documentation website as well!