Marten, Polecat, and Wolverine Releases — One Shining Moment Edition

For non-basketball fans: each year the NCAA Tournament championship game broadcast ends with a highlight montage set to a cheesy song called “One Shining Moment,” and it’s one of my favorite things to watch every year.

The Critter Stack community is pretty much always busy, but we were able to make some releases to Marten, Polecat, and Wolverine yesterday and today that dropped our open issue counts on GitHub to the lowest number in a decade. That’s bug fixes, some long overdue structural improvements, quite a few additions to the documentation, new features, and some quiet enablement of near term improvements in CritterWatch and our AI development strategy.

Wolverine 5.28.0 Released

We’re happy to announce Wolverine 5.28.0, a feature-packed release that significantly strengthens both the messaging and HTTP sides of the framework. This release includes major new infrastructure for transport observability, powerful new Wolverine.HTTP capabilities bringing closer parity with ASP.NET Core’s feature set, and several excellent community contributions.

Last week I took some time to do a “gap analysis” of Wolverine.HTTP against Minimal API and MVC Core for missing features and did a similar exercise of Wolverine’s asynchronous messaging support against other offerings in the .NET and Java world. This release actually plugs most of those gaps — albeit with just documentation in many cases.

Highlights

🔍 Transport Health Checks

This has been one of our most requested features. Wolverine now provides built-in health check infrastructure for all message transports — RabbitMQ, Kafka, Azure Service Bus, Amazon SQS, NATS, Redis, and MQTT. The new WolverineTransportHealthCheck base class reports point-in-time health status including connection state and, where supported, broker queue depth — critical for detecting the “silent failure” scenario where messages are piling up on the broker but aren’t being consumed (a situation we’ve seen in production with RabbitMQ).

Health checks integrate with ASP.NET Core’s standard IHealthCheck interface, so they plug directly into your existing health monitoring infrastructure.
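Since the checks surface through the standard IHealthCheck abstraction, exposing them over HTTP is plain ASP.NET Core wiring. A minimal sketch, assuming Wolverine registers the transport checks when the transports are configured:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Wolverine's transport checks plug into the standard health check system;
// AddHealthChecks() ensures the core health check services are registered
builder.Services.AddHealthChecks();

var app = builder.Build();

// Every registered IHealthCheck, including the transport checks,
// reports through this standard endpoint
app.MapHealthChecks("/health");

app.Run();
```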

Transport health check documentation →

This was built specifically for CritterWatch integration. I should also point out that CritterWatch is now able to detect the “silent failure” issues where Marten/Polecat projections claim to be running but aren’t advancing, and messaging listeners appear to be active but aren’t actually receiving messages.

🔌 Wire Tap (Message Auditing)

Implementing the classic Enterprise Integration Patterns Wire Tap, this feature lets you record a copy of every message flowing through configured endpoints — without affecting the primary processing pipeline. It’s ideal for compliance logging, analytics, or debugging.

opts.ListenToRabbitQueue("orders")
    .UseWireTap();

Implement the IWireTap interface with RecordSuccessAsync() and RecordFailureAsync() methods, and Wolverine handles the rest. Supports keyed services for different implementations per endpoint.
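A sketch of what an implementation might look like; the parameter lists and return types below are assumptions for illustration, not the exact IWireTap signatures, so check the Wire Tap documentation for the real contract:

```csharp
// Illustrative IWireTap implementation; parameter lists and return types
// here are assumptions, not the documented signatures
public class AuditWireTap : IWireTap
{
    public ValueTask RecordSuccessAsync(Envelope envelope)
    {
        // e.g. persist envelope.Message plus its metadata to an audit store
        return ValueTask.CompletedTask;
    }

    public ValueTask RecordFailureAsync(Envelope envelope, Exception exception)
    {
        // record the failed message alongside the exception details
        return ValueTask.CompletedTask;
    }
}
```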

Wire Tap documentation →

📋 Declarative Marten Data Requirements

This feature is meant to be a new type of “declarative invariant” that will enable Critter Stack systems to be more efficient. If this is used with other declarative persistence helpers in the same HTTP endpoint or message handler, Wolverine is able to opt into Marten’s batch querying for more efficient code.

New [DocumentExists<T>] and [DocumentDoesNotExist<T>] attributes let you declaratively guard handlers with Marten document existence checks. Wolverine generates optimized middleware at compile time — no manual boilerplate needed:

[DocumentExists<Customer>]
public static OrderConfirmation Handle(PlaceOrder command)
{
    // Customer is guaranteed to exist here
}

Throws RequiredDataMissingException if the precondition fails.

Marten integration documentation →

🎯 Confluent Schema Registry Serializers for Kafka

A community contribution that adds first-class support for Confluent Schema Registry serialization with Kafka topics. Both JSON Schema and Avro (for ISpecificRecord types) serializers are included, with automatic schema ID caching and the standard wire format (magic byte + 4-byte schema ID + payload).

opts.UseKafka("localhost:9092")
    .ConfigureSchemaRegistry(config =>
    {
        config.Url = "http://localhost:8081";
    })
    .UseSchemaRegistryJsonSerializer();

Kafka Schema Registry documentation →

Wolverine.HTTP Improvements

This release brings a wave of HTTP features that close the gap with vanilla ASP.NET Core while maintaining Wolverine’s simpler programming model:

Response Content Negotiation

New ConnegMode configuration with Loose (default, falls back to JSON) and Strict (returns 406 Not Acceptable) modes. Use the [Writes] attribute to declare supported content types and [StrictConneg] to enforce strict matching per endpoint.
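A quick sketch using the attribute names from above; the endpoint shape and document types are illustrative, and the exact configuration surface may differ slightly from this:

```csharp
public class OrderEndpoint
{
    [WolverineGet("/orders/{id}")]
    [Writes("application/json", "application/xml")] // declared content types
    [StrictConneg] // non-matching Accept headers get 406 Not Acceptable
    public static Order Get(Guid id, IQuerySession session)
        => session.Load<Order>(id);
}
```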

Content negotiation documentation →

OnException Convention

This is orthogonal to Wolverine’s error handling policies.

Handler and middleware methods named OnException or OnExceptionAsync are now automatically wired as exception handlers, ordered by specificity. Return ProblemDetails, IResult, or HandlerContinuation to control the response:

public static ProblemDetails OnException(OrderNotFoundException ex)
{
    return new ProblemDetails { Status = 404, Detail = ex.Message };
}

Exception handling documentation →

Output Caching

Direct integration with ASP.NET Core’s output caching middleware via the [OutputCache] attribute on endpoints, supporting policy names, VaryByQuery, VaryByHeader, and tag-based invalidation.
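[OutputCache] itself is the standard ASP.NET Core attribute; a hypothetical Wolverine endpoint using it might look like this, where the policy name and endpoint shape are illustrative:

```csharp
// Cached per the "products" policy, varying on the "page" query string
[WolverineGet("/products")]
[OutputCache(PolicyName = "products", VaryByQueryKeys = new[] { "page" })]
public static Task<IReadOnlyList<Product>> Get(int page, IQuerySession session)
    => session.Query<Product>().ToListAsync();
```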

Output caching documentation →

Rate Limiting

Apply ASP.NET Core’s rate limiting policies to Wolverine endpoints with [EnableRateLimiting("policyName")] — supporting fixed window, sliding window, token bucket, and concurrency algorithms.
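The policy itself is registered through the standard ASP.NET Core rate limiting services. A minimal sketch, where the policy name and limits are illustrative:

```csharp
builder.Services.AddRateLimiter(options =>
{
    // Fixed window: at most 100 requests per one-minute window
    options.AddFixedWindowLimiter("api", limiter =>
    {
        limiter.PermitLimit = 100;
        limiter.Window = TimeSpan.FromMinutes(1);
    });
});
```

With that in place, decorating a Wolverine endpoint with [EnableRateLimiting("api")] applies the named policy to that route.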

Rate limiting documentation →

Antiforgery / CSRF Protection

Form endpoints automatically require antiforgery validation. Use [ValidateAntiforgery] to opt non-form endpoints in, or [DisableAntiforgery] to opt out. Global configuration is available via opts.RequireAntiforgeryOnAll().
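A sketch of the opt-in and opt-out attributes named above, with illustrative endpoint and message types:

```csharp
// Non-form endpoint explicitly opted in to antiforgery validation
[WolverinePost("/api/profile")]
[ValidateAntiforgery]
public static ProfileUpdated Post(UpdateProfile command)
    => new ProfileUpdated(command.Id);

// Webhook callers cannot supply a token, so opt this endpoint out
[WolverinePost("/webhooks/inbound")]
[DisableAntiforgery]
public static void Post(InboundWebhook payload) { }
```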

Antiforgery documentation →

Route Prefix Groups

Organize endpoints with class-level [RoutePrefix("api/v1")] or namespace-based prefixes for cleaner API versioning:

opts.RoutePrefix("api/orders", forEndpointsInNamespace: "MyApp.Features.Orders");

Routing documentation →

SSE / Streaming Responses

Documentation and examples for Server-Sent Events and streaming responses using ASP.NET Core’s Results.Stream(), fully integrated with Wolverine’s service injection.

Streaming documentation →

Community Contributions

Thank you to our community contributors for this release:

  • @LodewijkSioen — Structured ValidationResult support for FluentValidation (#2332)
  • @dmytro-pryvedeniuk — AutoStartHost enabled by default (#2411)
  • @outofrange-consulting — Bidirectional MassTransit header mapping (#2439)
  • @Sonic198 — PartitionId on Envelope for Kafka partition tracking (#2440)
  • Confluent Schema Registry serializers for Kafka (#2443)

Bug Fixes

  • Fixed exchange naming when using FromHandlerType conventional routing (#2397)
  • Fixed flaky GloballyLatchedListenerTests caused by async disposal race condition in TCP SocketListener
  • Added handler.type OpenTelemetry tag for better tracing of message handlers and HTTP endpoints

New Documentation

We’ve also added several new tutorials and guides.

Marten 8.29.0 Release — Performance, Extensibility, and Bug Fixes

Marten 8.29.0 shipped yesterday with a packed release: a new LINQ operator, event enrichment for EventProjection, major async daemon performance improvements, the removal of the FSharp.Core dependency, and several important bug fixes for partitioned tables.

New Features

OrderByNgramRank — Sort Search Results by Relevance

You can now sort NGram search results by relevance using the new OrderByNgramRank() LINQ operator:

var results = await session
    .Query<Product>()
    .Where(x => x.Name.NgramSearch("blue shoes"))
    .OrderByNgramRank(x => x.Name, "blue shoes")
    .ToListAsync();

This generates ORDER BY ts_rank(mt_grams_vector(...), mt_grams_query(...)) DESC under the hood — no raw SQL needed.

EnrichEventsAsync for EventProjection

The EnrichEventsAsync hook that was previously only available on aggregation projections (SingleStreamProjection, MultiStreamProjection) is now available on EventProjection too. This lets you batch-load reference data before individual events are processed, avoiding N+1 query problems:

public class TaskProjection : EventProjection
{
    public override async Task EnrichEventsAsync(
        IQuerySession querySession, IReadOnlyList<IEvent> events,
        CancellationToken cancellation)
    {
        // Batch-load users for all TaskAssigned events in one query
        var userIds = events.OfType<IEvent<TaskAssigned>>()
            .Select(e => e.Data.UserId).Distinct().ToArray();
        var users = await querySession.LoadManyAsync<User>(cancellation, userIds);

        // ... set enriched data on events
    }
}

ConfigureNpgsqlDataSourceBuilder — Plugin Registration for All Data Sources

A new ConfigureNpgsqlDataSourceBuilder API on StoreOptions ensures Npgsql plugins like UseVector(), UseNetTopologySuite(), and UseNodaTime() are applied to every NpgsqlDataSource Marten creates — including tenant databases in multi-tenancy scenarios:

opts.ConfigureNpgsqlDataSourceBuilder(b => b.UseVector());

This is the foundation for external PostgreSQL extension packages (PgVector, PostGIS, etc.) to work correctly across all tenancy modes.

And by the way, JasperFx will be releasing formal Marten support for pgvector and PostGIS in commercial add ons very soon.

Performance Improvements

Opt-in Event Type Index for Faster Projection Rebuilds

If your projections filter on a small subset of event types and your event store has millions of events, rebuilds can time out scanning through non-matching events. A new opt-in composite index solves this:

opts.Events.EnableEventTypeIndex = true;

This creates a (type, seq_id) B-tree index on mt_events, letting PostgreSQL jump directly to matching event types instead of sequential scanning.

And as always, remember that adding more indexes can slow down inserts, so use this judiciously.

Adaptive EventLoader

TL;DR: this makes the Async Daemon more reliable in the face of unexpected usage and more adaptive at recovering from unusual errors in production.

Even without the index, the async daemon now automatically adapts when event loading times out. It falls back through progressively simpler strategies — skip-ahead (find the next matching event via MIN(seq_id)), then window-step (advance in 10K fixed windows) — and resets when events flow normally. No configuration needed.

See the expanded tuning documentation for guidance on when to enable the index and how to diagnose slow rebuilds.

FSharp.Core Dependency Removed

Marten no longer has a compile-time dependency on FSharp.Core. F# support still works — if your project references FSharp.Core (as any F# project does), Marten detects it at runtime via reflection. This unblocks .NET 8 users who were stuck on older Marten versions due to the FSharp.Core 9.0.100 requirement.

If you use F# types with Marten (FSharpOption, discriminated union IDs, F# records), everything continues to work unchanged. The dependency just moved from Marten’s package to your project.
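One plausible shape for that runtime probe, sketching the general technique rather than Marten's actual implementation:

```csharp
// Probe for FSharp.Core without a compile-time reference; Type.GetType
// returns null when the assembly or type cannot be resolved at runtime
var fsharpOptionType = Type.GetType(
    "Microsoft.FSharp.Core.FSharpOption`1, FSharp.Core");

if (fsharpOptionType != null)
{
    // FSharp.Core is present: wire up F#-specific handling here
}
```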

Bug Fixes

Partitioned Table Composite PK in Update Functions (#4223)

The generated mt_update_* PostgreSQL function now correctly uses all composite primary key columns in its WHERE clause. Previously, for partitioned tables with a PK like (id, date), the update only matched on id, causing duplicate key violations when multiple rows shared the same ID with different partition keys.

Long Identifier Names (#4224)

Auto-discovered tag types with long names (e.g., BootstrapTokenResourceName) no longer cause PostgresqlIdentifierTooLongException at startup. Generated FK, PK, and index names that exceed PostgreSQL’s 63-character limit are now deterministically shortened with a hash suffix.

This has been a longstanding problem in Marten, and we probably should have dealt with it years ago :-(
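A deterministic truncate-plus-hash scheme like the one described might look like this (an illustrative sketch, not Marten's exact algorithm):

```csharp
public static class IdentifierShortener
{
    // PostgreSQL silently truncates identifiers past 63 characters, which
    // can cause collisions; a stable hash suffix keeps shortened names
    // unique and reproducible across runs
    public static string Shorten(string name, int maxLength = 63)
    {
        if (name.Length <= maxLength) return name;

        var hashBytes = SHA256.HashData(Encoding.UTF8.GetBytes(name));
        var suffix = Convert.ToHexString(hashBytes)[..8].ToLowerInvariant();

        return name[..(maxLength - suffix.Length - 1)] + "_" + suffix;
    }
}
```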

EF Core 10 Compatibility (#4225)

Updated Weasel to 8.12.0 which fixes MissingMethodException when using Weasel.EntityFrameworkCore with EF Core 10 on .NET 10.

Upgrading

dotnet add package Marten --version 8.29.0

The full changelog is on GitHub.

Polecat 2.0.1

Sometime in the last couple of weeks I wrote a blog post about my experiences so far with Claude-assisted development, where I tried to say that you absolutely have to carefully review what your AI tools are doing because they can take shortcuts. So, yeah, I should do that even more closely myself.

Polecat 2.0.1 is using the SQL Server 2025 native JSON type correctly now, and the database migrations are now all done with the underlying Weasel library that enables Polecat to play nicely with all of the Critter Stack command line support for migrations.

Wolverine “Gap” Analysis

This is the kind of post I write for myself and just share on a Friday or weekend when not many folks are paying any attention.

I’ve taken a couple days at the end of this week after a month-long crush to just think about the strategic technical vision for the Critter Stack and the commercial add on products that we’re building under the JasperFx Software rubric. As part of my “deep think, but don’t work too hard” day, I had Claude help me do a gap analysis between Wolverine.HTTP and ASP.NET Core Minimal API & MVC Core and even FastEndpoints. I also did the same for Wolverine’s messaging feature set against all the widely used .NET messaging frameworks (I think .NET has more strong options for this than any other platform, and it still irritates me that Microsoft seriously tried to butt into that) and several options in the Java ecosystem.

Before I share the results and what I thought was and wasn’t important, let me share one big insight. Different tools in the same problem space frequently solve the same problems, but with very different technical solutions, concepts, and abstractions. Sometimes different tools even have very similar solutions to common problems, but use very different nomenclature. All this is to say that this effort helped me identify several places where we will try to improve documentation to map features from other tools to the options in Wolverine, as Claude “identified” almost two dozen functional “gaps” where I felt like Wolverine already happily solved the same problems as features in MassTransit, NServiceBus, Mulesoft, or other tools.

There’s also a lesson here for folks who switch tools: learn the different concepts in the new tool instead of automatically trying to map your mental model from tool A to tool B without first understanding what’s really different.

And lastly, a lesson for anybody who ever does any kind of support of development tools: remember to ask a user who is struggling what their end goals are or their real use case is instead of just focusing on the sometimes oddball implementation or API questions they’re asking you. And that goes double when a user is quite possibly trying to force fit their mental model of a completely different tool into your tool.

Anyway, here’s what I ended up adding to our backlog as well as things that I didn’t think were valuable at this time.

On the HTTP front, I came up with several things, with the big items being:

  1. I originally thought we needed an equivalent to MVC’s IExceptionFilter, but we might just use that as is. That’s come up plenty of times before
  2. Anti-forgery support. I originally thought that Wolverine.HTTP would mostly be used for API development, so didn’t really bother much upfront with too much for supporting HTTP forms, but I think there’s a significant overlap between Wolverine.HTTP usage and htmx where forms are used more heavily, so here we go.
  3. Routing prefixes. It’s come up occasionally, and has been just barely on my radar
  4. Endpoint rate limiting middleware for HTTP. This will build on our new rate limiting middleware for message handlers
  5. Server Sent Events support. Why not? For whatever reason, SSE seems to be getting rediscovered by folks. FubuMVC (Wolverine’s predecessor in the early 2010’s) actually had first class SSE support all those years ago
  6. Output Caching. This has been in my thinking for quite awhile. I think this is going to be two pronged, with direct support for ASP.Net Core caching middleware and maybe some more directed “per entity” caching around our existing “declarative persistence” helpers. I think the second actually lives inside of message handlers as well
  7. API versioning of some sort. It’s easy enough to just add “1.0” into your routes, but we’ll look at more alternatives as well
  8. A little bit of content negotiation support, but that’s been on the periphery of my attention from the beginning. My thought all along was to not bother with that until people explicitly asked for it, but now I just want to close the gaps. FubuMVC had that 15 years ago, so I’ve already dealt with that successfully before — but that was in the ReST craze and “conneg” just isn’t nearly as common in usage as far as I can tell.

And the gap analysis helped point out several areas where we had opportunities to improve the documentation (and future AI skills) to help map Minimal API or MVC Core concepts to existing features in Wolverine.HTTP.

Now, on to the messaging support which turned up almost nothing that I was actually interested in adding to Wolverine except for these:

  1. Formal support for the EIP “Claim Check” pattern. I’ve never pursued that before because I’ve felt like it’s just not that much explicit code, but I still added that to the backlog for “completeness”
  2. Building in EIP “Wire Tap” support to persist messages, but that was already in our backlog as it comes up from users and also because we have plans to expose that through MCP and command line AI support tools. I’m not enthusiastic, though, about bothering with the “command sourcing” concept from Greg Young, but we’ll see if anybody ever wants it.
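For anyone unfamiliar with it, Claim Check just means parking a large payload out of band and messaging only a reference to it. A hand-rolled sketch, where the IBlobStore type and message shapes are hypothetical:

```csharp
// The lightweight "claim check" message that actually travels the wire
public record DocumentStored(Guid ClaimId);

public static class StoreLargeDocumentHandler
{
    public static async Task<DocumentStored> Handle(
        StoreLargeDocument command, IBlobStore blobs)
    {
        var claimId = Guid.NewGuid();

        // Park the heavy payload in external storage...
        await blobs.WriteAsync(claimId, command.Payload);

        // ...and cascade only the small reference message
        return new DocumentStored(claimId);
    }
}
```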

Claude came up with about 35 different things to consider, but other than those two things above, those items fell into either functionality we already had with different names or different conceptual solutions, features I just have no interest in supporting or I don’t see being used or requested by our users, or a third group of features that are happily planned and already underway with our forthcoming CritterWatch commercial add on.

Just for completeness, the features I’m saying we won’t even plan to support right now were:

  • The EIP “Routing Slip” concept. I know that MassTransit supports it, but I’m deeply unenthusiastic about both the concept and any attempt to support that in Wolverine. They can have that one.
  • Distributed transaction support. I don’t even know why I would need to explain why not!
  • “Change Data Capture” integration with something like Debezium. I just don’t see a demand for that with Wolverine
  • Any kind of visual process designer. Even on the Marten/Polecat side, I’m wanting us to focus on Markdown or Gherkin specifications or just flat out making our code as simple as possible to write instead of blowing energy on visual tools that generate XML that in turn get generated into Java code. Not that I’m necessarily giving some side eye to any other tool out there *cough* liar! *cough*
  • Batch processing support that really touched on ETL concerns
  • A long lived job model. Maybe down the road, but I’d push folks to just break that up into smaller actions whenever possible anyway. It’s trivial in Wolverine to have message handlers cascade out a request for the next step. Actually, this one is probably the one I’m most likely to have to change my mind about, but we’ll see
  • NServiceBus has their “messaging bridge” that I think would be trivial to build out later if that’s ever valuable for someone, but nobody is asking for that today and Wolverine happily lets you mix and match all the transports and even multiple brokers in one application

And of course, there were some random quirky features of other tools that I just didn’t think were worth any consideration outside of client requests or common user community requests.

Multi-Tenancy in the Critter Stack

We put on another Critter Stack live stream today to give a highlight tour of the multi-tenancy features and support across the entire stack. Long story short, I think we have by far and away the most comprehensive feature set for multi-tenancy in the .NET ecosystem, but I’ll let you judge that for yourself:

The Critter Stack provides comprehensive multi-tenancy support across all three tools — Marten, Wolverine, and Polecat — with tenant context flowing seamlessly from HTTP requests through message handling to data persistence. Here are some links to various bits of documentation, with some older blog posts at the bottom as well.

Marten (PostgreSQL)

Marten offers three tenancy strategies for both the document database and event store:

  • Conjoined Tenancy — All tenants share tables with automatic tenant_id discrimination, cross-tenant querying via TenantIsOneOf() and AnyTenant(), and PostgreSQL LIST/HASH partitioning on tenant_id (Document Multi-Tenancy, Event Store Multi-Tenancy)
  • Database per Tenant — Four strategies ranging from static mapping to single-server auto-provisioning, master table lookup, and runtime tenant registration (Database-per-Tenant Configuration)
  • Sharded Multi-Tenancy with Database Pooling — Distributes tenants across a pool of databases using hash, smallest-database, or explicit assignment strategies, combining conjoined tenancy with database sharding for extreme scale (Database-per-Tenant Configuration)
  • Global Streams & Projections — Mix globally-scoped and tenant-specific event streams within a conjoined tenancy model (Event Store Multi-Tenancy)

Wolverine (Messaging, Mediator, and HTTP)

Wolverine propagates tenant context automatically through the entire message processing pipeline:

  • Handler Multi-Tenancy — Tenant IDs tracked as message metadata, automatically propagated to cascaded messages, with InvokeForTenantAsync() for explicit tenant targeting (Handler Multi-Tenancy)
  • HTTP Tenant Detection — Built-in strategies for detecting tenant from request headers, claims, query strings, route arguments, or subdomains (HTTP Multi-Tenancy)
  • Marten Integration — Database-per-tenant or conjoined tenancy with automatic IDocumentSession scoping and transactional inbox/outbox per tenant database (Marten Multi-Tenancy)
  • Polecat Integration — Same database-per-tenant and conjoined patterns for SQL Server (Polecat Multi-Tenancy)
  • EF Core Integration — Multi-tenant transactional inbox/outbox with separate databases and automatic migrations (EF Core Multi-Tenancy)
  • RabbitMQ per Tenant — Map tenants to separate virtual hosts or entirely different brokers (RabbitMQ Multi-Tenancy)
  • Azure Service Bus per Tenant — Map tenants to separate namespaces or connection strings (Azure Service Bus Multi-Tenancy)
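As an example of the handler multi-tenancy above, InvokeForTenantAsync executes a message within an explicit tenant context (the command type here is hypothetical):

```csharp
public static async Task PlaceOrderForTenant(IMessageBus bus)
{
    // Runs the PlaceOrder handler in the "acme" tenant's context; any
    // Marten/Polecat session the handler opens is scoped to that tenant
    await bus.InvokeForTenantAsync("acme", new PlaceOrder("widget", 3));
}
```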

Polecat (SQL Server)

Polecat mirrors Marten’s tenancy model for SQL Server.

Related Blog Posts

  • Feb 2024 — Dynamic Tenant Databases in Marten
  • Mar 2024 — Recent Critter Stack Multi-Tenancy Improvements
  • May 2024 — Multi-Tenancy: What is it and why do you care?
  • May 2024 — Multi-Tenancy: Marten’s “Conjoined” Model
  • Jun 2024 — Multi-Tenancy: Database per Tenant with Marten
  • Sep 2024 — Multi-Tenancy in Wolverine Messaging
  • Dec 2024 — Message Broker per Tenant with Wolverine
  • Feb 2025 — Critter Stack Roadmap Update for February
  • May 2025 — Wolverine 4 is Bringing Multi-Tenancy to EF Core
  • Oct 2025 — Wolverine 5 and Modular Monoliths
  • Mar 2026 — Announcing Polecat: Event Sourcing with SQL Server
  • Mar 2026 — Critter Stack Wide Releases — March Madness Edition

Critter Stack Wide Releases — March Madness Edition

As anybody who follows the Critter Stack on our Discord server knows, I’m uncomfortable with the rapid pace of releases that we’ve sustained in the past couple quarters, and I think I would like the release cadence to slow down. However, open issues and pull requests feel like money burning a hole in my pocket, and I don’t like letting things linger very long. Our rapid cadence is somewhat driven by JasperFx Software client requests, some by our community being quite aggressive in contributing changes, and some by our users finding new issues that need to be addressed. While I’ve been known to be very unhappy with feedback saying that our frequent release cadence must be a sign of poor quality, our community seems to mostly appreciate that we move relatively fast. I believe that we are definitely innovating much faster and more aggressively than any of the other asynchronous messaging tools in the .NET space, so there’s that. Anyway, enough of that, here’s a rundown of the new releases today.

It’s been a busy week across the Critter Stack! We shipped coordinated releases today across all five projects: Marten 8.27, Wolverine 5.25, Polecat 1.5, Weasel 8.11.1, and JasperFx 1.21.1. Here’s a rundown of what’s new.


Marten 8.27.0

Sharded Multi-Tenancy with Database Pooling

For teams operating at extreme scale — we’re talking hundreds of billions of events — Marten now supports a sharded multi-tenancy model that distributes tenants across a pool of databases. Each tenant gets its own native PostgreSQL LIST partition within a shard database, giving you the isolation benefits of per-tenant databases with the operational simplicity of a managed pool.

Configuration is straightforward:

opts.MultiTenantedWithShardedDatabases(x =>
{
    // Connection to the master database that holds the pool registry
    x.ConnectionString = masterConnectionString;

    // Schema for the registry tables in the master database
    x.SchemaName = "tenants";

    // Seed the database pool on startup
    x.AddDatabase("shard_01", shard1ConnectionString);
    x.AddDatabase("shard_02", shard2ConnectionString);
    x.AddDatabase("shard_03", shard3ConnectionString);
    x.AddDatabase("shard_04", shard4ConnectionString);

    // Choose a tenant assignment strategy (see below)
    x.UseHashAssignment(); // this is the default
});

Calling MultiTenantedWithShardedDatabases() automatically enables conjoined tenancy for both documents and events, with native PG list partitions created per tenant.

Three tenant assignment strategies are built-in:

  • Hash Assignment (default) — deterministic FNV-1a hash of the tenant ID. Fast, predictable, no database queries needed. Best when tenants are roughly equal in size.
  • Smallest Database — assigns new tenants to the database with the fewest existing tenants. Accepts a custom IDatabaseSizingStrategy for balancing by row count, disk usage, or any other metric.
  • Explicit Assignment — you control exactly which database hosts each tenant via the admin API.

The admin API lets you manage the pool at runtime: AddTenantToShardAsync, AddDatabaseToPoolAsync, MarkDatabaseFullAsync — all with advisory-locked concurrent safety.

See the multi-tenancy documentation for the full details.

Bulk COPY Event Append for High-Throughput Seeding

For data migrations, test fixture setup, load testing, or importing events from external systems, Marten now supports a bulk COPY-based event append that uses PostgreSQL’s COPY ... FROM STDIN BINARY for maximum throughput:

// Build up a list of stream actions with events
var streams = new List<StreamAction>();

for (int i = 0; i < 1000; i++)
{
    var streamId = Guid.NewGuid();
    var events = new object[]
    {
        new OrderPlaced(streamId, "Widget", 5),
        new OrderShipped(streamId, $"TRACK-{i}"),
        new OrderDelivered(streamId, DateTimeOffset.UtcNow)
    };

    streams.Add(StreamAction.Start(store.Events, streamId, events));
}

// Bulk insert all events using PostgreSQL COPY for maximum throughput
await store.BulkInsertEventsAsync(streams);

This supports all combinations of Guid/string identity, single/conjoined tenancy, archived stream partitioning, and metadata columns. When using conjoined tenancy, a tenant-specific overload is available:

await store.BulkInsertEventsAsync("tenant-abc", streams);

See the event appending documentation for more.

Other Fixes

  • FetchForWriting now auto-discovers natural keys without requiring an explicit projection registration, and works correctly with strongly typed IDs combined with UseIdentityMapForAggregates
  • Compiled queries using IsOneOf with array parameters now generate correct SQL
  • EF Core OwnsOne().ToJson() support (via Weasel 8.11.1) — schema diffing now correctly handles JSON column mapping when Marten and EF Core share a database
  • Thanks to @erdtsieck for fixing duplicate codegen when using secondary document stores!

Wolverine 5.25.0

This is a big release with 12 PRs merged — a mix of bug fixes, new features, and community contributions.

MassTransit and NServiceBus Interop for Azure Service Bus Topics

Previously, MassTransit and NServiceBus interoperability was only available on Azure Service Bus queues. With 5.25, you can now interoperate on ASB topics and subscriptions too — making it much easier to migrate incrementally or coexist with other .NET messaging frameworks:

// Publish to a topic with NServiceBus interop
opts.PublishAllMessages().ToAzureServiceBusTopic("nsb-topic")
    .UseNServiceBusInterop();

// Listen on a subscription with MassTransit interop
opts.ListenToAzureServiceBusSubscription("wolverine-sub")
    .FromTopic("wolverine-topic")
    .UseMassTransitInterop(mt => { })
    .DefaultIncomingMessage<ResponseMessage>().UseForReplies();

Both UseMassTransitInterop() and UseNServiceBusInterop() are available on AzureServiceBusTopic (for publishing) and AzureServiceBusSubscription (for listening). This is ideal for brownfield scenarios where you’re migrating services one at a time and need different messaging frameworks to talk to each other through shared ASB topics.

Other New Features

  • Handler Type Naming for Conventional Routing — NamingSource.FromHandlerType names listener queues after the handler type instead of the message type, useful for modular monolith scenarios with multiple handlers per message
  • Enhanced WolverineParameterAttribute — new FromHeader, FromClaim, and FromMethod value sources for binding handler parameters to HTTP headers, claims, or static method return values
  • Full Tracing for InvokeAsync — opt-in InvokeTracingMode.Full emits the same structured log messages as transport-received messages, with zero overhead in the default path
  • Configurable SQL transport polling interval — thanks to new contributor @xwipeoutx!

Bug Fixes


Polecat 1.5.0

Polecat — the Critter Stack’s newer, lighter-weight event store option — had a big jump from 1.2 to 1.5:

  • net9.0 support and CI workflow
  • SingleStreamProjection<TDoc, TId> with strongly-typed ID support
  • Auto-discover natural keys for FetchForWriting
  • Conjoined tenancy support for DCB tags and natural keys
  • Fix for FetchForWriting with UseIdentityMapForAggregates and strongly typed IDs

Weasel 8.11.1

  • EF Core OwnsOne().ToJson() support — Weasel’s schema diffing now correctly handles EF Core’s JSON column mapping, preventing spurious migration diffs when Marten and EF Core share a database

JasperFx 1.21.1 / JasperFx.Events 1.24.1

  • Skip unknown flags when AutoStartHost is true — fixes an issue where unrecognized CLI flags would cause errors during host auto-start
  • Retrofit IEventSlicer tests

Upgrading

All packages are available on NuGet now. The Marten and Wolverine releases are fully coordinated — if you’re using the Critter Stack together, upgrade both at the same time for the best experience.

As always, please report any issues on the respective GitHub repositories and join us on the Critter Stack Discord if you have questions!

The World’s Crudest Chaos Monkey

I’m working pretty hard this week and early next to deliver the CritterWatch MVP (our new management and observability console for the Critter Stack) to a JasperFx Software client. One of the things we need to do for testing is to fake out several failure conditions in message handlers to be able to test CritterWatch’s “Dead Letter Queue” management and alerting features. To that end, we have some fake systems that constantly process messages, and we’ve rigged up what I’m going to call the world’s crudest Chaos Monkey in Wolverine middleware:

    public static async Task Before(ChaosMonkeySettings chaos)
    {
        // Configurable slow handler for testing back pressure
        if (chaos.SlowHandlerMs > 0)
        {
            await Task.Delay(chaos.SlowHandlerMs);
        }

        if (chaos.FailureRate <= 0) return;

        // Chaos monkey — distribute failure rate equally across 5 exception types
        var perType = chaos.FailureRate / 5.0;
        var next = Random.Shared.NextDouble();

        if (next < perType)
        {
            throw new TripServiceTooBusyException("Just feeling tired at " + DateTime.Now);
        }

        if (next < perType * 2)
        {
            throw new TrackingUnavailableException("Tracking is down at " + DateTime.Now);
        }

        if (next < perType * 3)
        {
            throw new DatabaseIsTiredException("The database wants a break at " + DateTime.Now);
        }

        if (next < perType * 4)
        {
            throw new TransientException("Slow down, you move too fast.");
        }

        if (next < perType * 5)
        {
            throw new OtherTransientException("Slow down, you move too fast.");
        }
    }

And this to control it remotely in tests or just when doing exploratory manual testing:

    private static void MapChaosMonkeyEndpoints(WebApplication app)
    {
        var group = app.MapGroup("/api/chaos")
            .WithTags("Chaos Monkey");

        group.MapGet("/", (ChaosMonkeySettings settings) => Results.Ok(settings))
            .WithSummary("Get current chaos monkey settings");

        group.MapPost("/enable", (ChaosMonkeySettings settings) =>
        {
            settings.FailureRate = 0.20;
            return Results.Ok(new { message = "Chaos monkey enabled at 20% failure rate", settings });
        }).WithSummary("Enable chaos monkey with default 20% failure rate");

        group.MapPost("/disable", (ChaosMonkeySettings settings) =>
        {
            settings.FailureRate = 0;
            return Results.Ok(new { message = "Chaos monkey disabled", settings });
        }).WithSummary("Disable chaos monkey (0% failure rate)");

        group.MapPost("/failure-rate/{rate:double}", (double rate, ChaosMonkeySettings settings) =>
        {
            rate = Math.Clamp(rate, 0, 1);
            settings.FailureRate = rate;
            return Results.Ok(new { message = $"Failure rate set to {rate:P0}", settings });
        }).WithSummary("Set chaos monkey failure rate (0.0 to 1.0)");

        group.MapPost("/slow-handler/{ms:int}", (int ms, ChaosMonkeySettings settings) =>
        {
            ms = Math.Max(0, ms);
            settings.SlowHandlerMs = ms;
            return Results.Ok(new { message = $"Handler delay set to {ms}ms", settings });
        }).WithSummary("Set artificial handler delay in milliseconds (for back pressure testing)");

        group.MapPost("/projection-failure-rate/{rate:double}", (double rate, ChaosMonkeySettings settings) =>
        {
            rate = Math.Clamp(rate, 0, 1);
            settings.ProjectionFailureRate = rate;
            return Results.Ok(new { message = $"Projection failure rate set to {rate:P0}", settings });
        }).WithSummary("Set projection failure rate (0.0 to 1.0)");
    }

In this case, the Before middleware is just baked into the message handlers, but in your own application the “chaos monkey” middleware could be applied only in testing with a Wolverine extension.
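As a sketch of what that could look like: IWolverineExtension is Wolverine’s real extension hook, but ChaosMonkeyExtension and ChaosMonkeyMiddleware are hypothetical names here, so treat the exact middleware registration call as an assumption to verify against the Wolverine documentation:

    using Wolverine;

    // Hypothetical test-only extension. ChaosMonkeyMiddleware stands in
    // for the class holding the Before() method shown above
    public class ChaosMonkeyExtension : IWolverineExtension
    {
        public void Configure(WolverineOptions options)
        {
            // Apply the chaos monkey middleware to all message handlers
            options.Policies.AddMiddleware(typeof(ChaosMonkeyMiddleware));
        }
    }

    // Then, only in your test harness bootstrapping:
    // builder.Services.AddWolverineExtension<ChaosMonkeyExtension>();

Because the extension is only registered by the test harness, the production application never sees the chaos middleware at all.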

And yes, I was probably listening to Simon &amp; Garfunkel when I did the first cut at the chaos monkey.

New Option for Simple Projections in Marten or Polecat

JasperFx Software is around and ready to assist you with getting the best possible results using the Critter Stack.

The projections model in Marten and now Polecat has evolved quite a bit over the past decade. Consider this simple aggregated projection of data for our QuestParty in our tests:

    public class QuestParty
    {
        public List<string> Members { get; set; } = new();
        public IList<string> Slayed { get; } = new List<string>();
        public string Key { get; set; }
        public string Name { get; set; }

        // In this particular case, this is also the stream id for the quest events
        public Guid Id { get; set; }

        // These methods take in events and update the QuestParty
        public void Apply(MembersJoined joined) => Members.Fill(joined.Members);
        public void Apply(MembersDeparted departed) => Members.RemoveAll(x => departed.Members.Contains(x));
        public void Apply(QuestStarted started) => Name = started.Name;

        public override string ToString()
        {
            return $"Quest party '{Name}' is {Members.Join(", ")}";
        }
    }

That type is mutable, but the projection library underneath Marten and Polecat happily supports projecting to immutable types as well.

Some people actually like the conventional method approach up above with the Apply, Create, and ShouldDelete methods. From the perspective of Marten’s or Polecat’s internals, it’s always been helpful because the projection subsystem “knows” in this case that the QuestParty is only applicable to the specific event types referenced in those methods, and when you call this code:

    var party = await query
        .Events
        .AggregateStreamAsync<QuestParty>(streamId);

Marten and Polecat are able to quietly use extra SQL filters to limit the events fetched from the database to only the types utilized by the projected QuestParty aggregate.

Great, right? Except that some folks don’t like the naming conventions, just prefer explicit code, or do some clever things with subclasses on events that can confuse Marten or Polecat about the precedence of the event type handlers. To that end, Marten 8.0 introduced more options for explicit code. We can rewrite the projection part of the QuestParty above to a completely different class where you can add explicit code:

    public class QuestPartyProjection: SingleStreamProjection<QuestParty, Guid>
    {
        public QuestPartyProjection()
        {
            // This is *no longer necessary* in
            // the very most recent versions of Marten,
            // but used to be just to limit Marten's
            // querying of event types when doing live
            // or async projections
            IncludeType<MembersJoined>();
            IncludeType<MembersDeparted>();
            IncludeType<QuestStarted>();
        }

        public override QuestParty Evolve(QuestParty snapshot, Guid id, IEvent e)
        {
            snapshot ??= new QuestParty { Id = id };

            switch (e.Data)
            {
                case MembersJoined j:
                    // Small helper in JasperFx that prevents
                    // double values
                    snapshot.Members.Fill(j.Members);
                    break;

                case MembersDeparted departed:
                    snapshot.Members.RemoveAll(x => departed.Members.Contains(x));
                    break;
            }

            return snapshot;
        }
    }

There are several more items in that SingleStreamProjection base type, like versioning or fine-grained control over asynchronous projection behavior, that might be valuable later, but for now, let’s look at a new feature in Marten and Polecat that lets you use explicit code right in the single aggregate type:

    public class QuestParty
    {
        public List<string> Members { get; set; } = new();
        public IList<string> Slayed { get; } = new List<string>();
        public string Key { get; set; }
        public string Name { get; set; }

        // In this particular case, this is also the stream id for the quest events
        public Guid Id { get; set; }

        public void Evolve(IEvent e)
        {
            switch (e.Data)
            {
                case QuestStarted _:
                    // A little goofy, but this lets Marten know that
                    // the projection cares about that event type
                    break;

                case MembersJoined j:
                    // Small helper in JasperFx that prevents
                    // double values
                    Members.Fill(j.Members);
                    break;

                case MembersDeparted departed:
                    Members.RemoveAll(x => departed.Members.Contains(x));
                    break;
            }
        }

        public override string ToString()
        {
            return $"Quest party '{Name}' is {Members.Join(", ")}";
        }
    }

This is admittedly yet another convention in terms of the method name and its possible arguments, but hopefully the switch statement approach is much more explicit for folks who prefer that. As an additional bonus, a source generator lets Marten automatically register the event types used by the version of QuestParty just above, so we get all the benefits of the event filtering without making users do extra explicit configuration.

Projecting to Immutable Views

Just for completeness, let’s look at alternative versions of QuestParty just to see what it looks like if you make the aggregate an immutable type. First up is the conventional method approach:

    public sealed record QuestParty(Guid Id, List<string> Members)
    {
        // These methods take in events and update the QuestParty
        public static QuestParty Create(QuestStarted started) => new(started.QuestId, []);

        public static QuestParty Apply(MembersJoined joined, QuestParty party) =>
            party with
            {
                Members = party.Members.Union(joined.Members).ToList()
            };

        public static QuestParty Apply(MembersDeparted departed, QuestParty party) =>
            party with
            {
                Members = party.Members.Where(x => !departed.Members.Contains(x)).ToList()
            };

        public static QuestParty Apply(MembersEscaped escaped, QuestParty party) =>
            party with
            {
                Members = party.Members.Where(x => !escaped.Members.Contains(x)).ToList()
            };
    }

And with the Evolve approach:

    public sealed record QuestParty(Guid Id, List<string> Members)
    {
        public static QuestParty Evolve(QuestParty? party, IEvent e)
        {
            switch (e.Data)
            {
                case QuestStarted s:
                    return new(s.QuestId, []);

                case MembersJoined joined:
                    return party with
                    {
                        Members = party.Members.Union(joined.Members).ToList()
                    };

                case MembersDeparted departed:
                    return party with
                    {
                        Members = party.Members.Where(x => !departed.Members.Contains(x)).ToList()
                    };

                case MembersEscaped escaped:
                    return party with
                    {
                        Members = party.Members.Where(x => !escaped.Members.Contains(x)).ToList()
                    };
            }

            return party;
        }
    }

Summary

What do I recommend? Honestly, just whatever you prefer. This is a case where I’d like everyone to be happy with one of the available options. And yes, it’s not always good that there is more than one way to do the same thing in a framework, but I think we’re going to just keep all these options in the long run. It wasn’t shown here at all, but I think we’ll kill off the early options to define projections through a ton of inline Lambda functions within a fluent interface. That stuff can just die.

In the medium and longer term, we’re going to be utilizing more source generators across the entire Critter Stack as a way of both eliminating some explicit configuration requirements and to optimize our cold start times. I’m looking forward to getting much more into that work.

CQRS and Event Sourcing with Polecat and SQL Server

If you’re already familiar with Marten and Wolverine, this is all old news except for the part where we’re using SQL Server. If you’re brand new to the “Critter Stack,” Event Sourcing, or CQRS, hang around! And just so you know, JasperFx Software is completely ready to support our clients using Polecat.

All of the sample code in this blog post can be found in the Wolverine codebase on GitHub here.

With the advent of Polecat going 1.0 last week, you now have a robust solution for Event Sourcing using SQL Server 2025 as the backing store. If you’re reading this, you’re surely involved in software development and that means that your job at some point has been dictated by some kind of issue tracking tool, so let’s use that as our example system and pretend we’re creating an incident tracking system for our help desk folks as shown below:

To get started, I’m a fan of using the Event Storming technique to identify some of the meaningful events we should capture in our system and start to identify possible commands within our system:

Having at least some initial thoughts about the shape of our system, let’s start a new web service project in .NET with:

    dotnet new webapi

Then add both Polecat (for persistence) and Wolverine (for both HTTP endpoints and asynchronous messaging) with:

    dotnet add package WolverineFx.Polecat
    dotnet add package WolverineFx.Http

And now, let’s jump into our Program file to wire up Polecat to an existing SQL Server database and configure Wolverine as well:

    using Polecat;
    using Polecat.Projections;
    using PolecatIncidentService;
    using Wolverine;
    using Wolverine.Http;
    using Wolverine.Polecat;

    var builder = WebApplication.CreateBuilder(args);

    builder.Services.AddOpenApi();

    builder.Services.AddPolecat(opts =>
        {
            var connectionString = builder.Configuration.GetConnectionString("SqlServer")
                ?? "Server=localhost,1434;User Id=sa;Password=P@55w0rd;Timeout=5;MultipleActiveResultSets=True;Initial Catalog=master;Encrypt=False";

            opts.ConnectionString = connectionString;
            opts.DatabaseSchemaName = "incidents";

            // We'll talk about this soon...
            opts.Projections.Snapshot<Incident>(SnapshotLifecycle.Inline);
        })
        // For Marten users, *this* is the default for Polecat!
        //.UseLightweightSessions()
        .IntegrateWithWolverine(x => x.UseWolverineManagedEventSubscriptionDistribution = true);

    builder.Host.UseWolverine(opts => { opts.Policies.AutoApplyTransactions(); });

    builder.Services.AddWolverineHttp();

    var app = builder.Build();

    if (app.Environment.IsDevelopment())
    {
        app.MapOpenApi();
    }

    // Adding Wolverine.HTTP
    app.MapWolverineEndpoints();

    // This gets you a lot of CLI goodness from the
    // greater JasperFx / Critter Stack ecosystem
    // and will soon feed quite a bit of AI assisted development as well
    return await app.RunJasperFxCommands(args);

    // For test bootstrapping in case you want to work w/
    // more than one system at a time
    public partial class Program
    {
    }

Our events are just going to be some immutable records like this:

    public record LogIncident(
        Guid CustomerId,
        Contact Contact,
        string Description,
        Guid LoggedBy
    );

    public record CategoriseIncident(
        IncidentCategory Category,
        Guid CategorisedBy,
        int Version
    );

    public record CloseIncident(
        Guid ClosedBy,
        int Version
    );

It’s not mandatory to use immutable types, but you might as well and it’s just idiomatic.

Let’s start with our LogIncident use case and build out an HTTP endpoint that creates a new “event stream” for events related to a single, logical Incident:

    public static class LogIncidentEndpoint
    {
        [WolverinePost("/api/incidents")]
        public static (CreationResponse<Guid>, IStartStream) Post(LogIncident command)
        {
            var (customerId, contact, description, loggedBy) = command;
            var logged = new IncidentLogged(customerId, contact, description, loggedBy);

            var start = PolecatOps.StartStream<Incident>(logged);
            var response = new CreationResponse<Guid>("/api/incidents/" + start.StreamId, start.StreamId);

            return (response, start);
        }
    }

Polecat does support “Dynamic Consistency Boundary” event sourcing as well, but that’s not where I think most people should start, and I’ll get to that in a later post I keep putting off…

With some help from Alba, another JasperFx supported library, we can write both unit tests for the business logic (such as it is) and do an end to end test through the HTTP endpoint like this:

    public class when_logging_an_incident : IntegrationContext
    {
        public when_logging_an_incident(AppFixture fixture) : base(fixture)
        {
        }

        [Fact]
        public void unit_test()
        {
            var contact = new Contact(ContactChannel.Email);
            var command = new LogIncident(Guid.NewGuid(), contact, "It's broken", Guid.NewGuid());

            // Pure function FTW!
            var (response, startStream) = LogIncidentEndpoint.Post(command);

            // Should only have the one event
            startStream.Events.ShouldBe([
                new IncidentLogged(command.CustomerId, command.Contact, command.Description, command.LoggedBy)
            ]);
        }

        [Fact]
        public async Task happy_path_end_to_end()
        {
            var contact = new Contact(ContactChannel.Email);
            var command = new LogIncident(Guid.NewGuid(), contact, "It's broken", Guid.NewGuid());

            // Log a new incident first
            var initial = await Scenario(x =>
            {
                x.Post.Json(command).ToUrl("/api/incidents");
                x.StatusCodeShouldBe(201);
            });

            // Read the response body by deserialization
            var response = initial.ReadAsJson<CreationResponse<Guid>>();

            // Reaching into Polecat to build the current state of the new Incident
            await using var session = Store.LightweightSession();
            var incident = await session.Events.FetchLatest<Incident>(response.Value);
            incident!.Status.ShouldBe(IncidentStatus.Pending);
        }
    }

Now, to build out a command handler for categorizing an Incident, we’ll need to:

  1. Know the current state of the logical Incident by rolling up the events into some kind of representation of the state so that we can “decide” which if any events should be appended at this time. In Event Sourcing terms, I’d refer to this as the “write model.”
  2. The command type itself
  3. Validation logic for the input
  4. Like I said earlier, decide which events should be published
  5. Do some metadata correlation for observability. It’s not obvious from the code, but in the sample below Wolverine & Polecat are tracking the events captured against the correlation id of the current HTTP request
  6. Establish transactional boundaries, including any outbound messaging that might be taking place in response to the events that are being appended. This is something that Wolverine does for Polecat (and Marten) in command handlers. This includes the transactional outbox support in Wolverine.
  7. Create protections against concurrent writes to any given Incident stream, which Wolverine and Polecat do for you in the next endpoint by applying optimistic concurrency checks to guarantee that no other thread changed the Incident since this CategoriseIncident command was issued by the caller

That’s actually quite a bit of responsibility for the command handler, but not to worry, Wolverine and Polecat are going to keep your code nice and simple — hopefully even a pure function “Decider” for the business logic in many cases. Before I get into the command handler, here’s the “projection” that gives us the current state of the Incident by applying events:

    public class Incident
    {
        public Guid Id { get; set; }

        // Polecat will set this itself for optimistic concurrency
        public int Version { get; set; }

        public IncidentStatus Status { get; set; } = IncidentStatus.Pending;
        public IncidentCategory? Category { get; set; }
        public bool HasOutstandingResponseToCustomer { get; set; } = false;

        public Incident()
        {
        }

        public void Apply(IncidentLogged _) { }
        public void Apply(IncidentCategorised e) => Category = e.Category;
        public void Apply(AgentRespondedToIncident _) => HasOutstandingResponseToCustomer = false;
        public void Apply(CustomerRespondedToIncident _) => HasOutstandingResponseToCustomer = true;
        public void Apply(IncidentResolved _) => Status = IncidentStatus.Resolved;
        public void Apply(ResolutionAcknowledgedByCustomer _) => Status = IncidentStatus.ResolutionAcknowledgedByCustomer;
        public void Apply(IncidentClosed _) => Status = IncidentStatus.Closed;

        public bool ShouldDelete(Archived @event) => true;
    }

And finally, the command handler:

    public record CategoriseIncident(
        IncidentCategory Category,
        Guid CategorisedBy,
        int Version
    );

    public static class CategoriseIncidentEndpoint
    {
        public static ProblemDetails Validate(Incident incident)
        {
            return incident.Status == IncidentStatus.Closed
                ? new ProblemDetails { Detail = "Incident is already closed" }
                : WolverineContinue.NoProblems;
        }

        [EmptyResponse]
        [WolverinePost("/api/incidents/{incidentId:guid}/category")]
        public static IncidentCategorised Post(
            CategoriseIncident command,
            [Aggregate("incidentId")] Incident incident)
        {
            return new IncidentCategorised(incident.Id, command.Category, command.CategorisedBy);
        }
    }

And I admit that that’s a lot of code thrown at you all at once, and maybe even a lot of new concepts. For further reading, see:

Announcing Polecat: Event Sourcing with SQL Server

Polecat is now completely supported by JasperFx Software and automatically part of any existing and future support agreements through our existing plans.

Polecat was released as 1.0 this past week (with 1.1 & now 1.2 coming soon). Let’s call it what it is: Polecat is a port of (most of) Marten to target SQL Server 2025 and SQL Server’s new JSON data type. For folks not familiar with Marten, Polecat is, in one library:

  1. A very full fledged Event Store library for SQL Server that includes event projection and subscriptions, Dynamic Consistency Boundary support, a large amount of functionality for Event Sourcing basics, rich event metadata tracking capabilities, and even rich multi-tenancy support.
  2. A feature rich set of Document Database capabilities backed by SQL Server including LINQ querying support

And while Polecat is brand spanking new, it comes out of the gate with the decade old Marten pedigree and its own Wolverine integration for CQRS usage. I’m confident in saying Polecat is now the best technical option for using Event Sourcing with SQL Server in the .NET ecosystem.

And of course, if you’re a shop with deep existing roots into EF Core usage, Polecat also comes with projection support to EF Core, so Polecat can happily coexist with EF Core in the same systems.

Alright, let’s jump into a quick start. First, let’s say you’ve started a brand new .NET project through dotnet new webapi and you’ve added a reference to Polecat through NuGet (and you have a running SQL Server 2025 instance handy too of course!). Next, let’s start with the inevitable AddPolecat() usage in your Program file:

    builder.Services.AddPolecat(options =>
    {
        // Connection string to your SQL Server 2025 database
        options.Connection("Server=localhost;Database=myapp;User Id=sa;Password=YourStrong!Password;TrustServerCertificate=True");

        // Optionally change the default schema (default is "dbo")
        options.DatabaseSchemaName = "myschema";
    });

Polecat can be used without IHost or IServiceCollection registrations by just directly building a DocumentStore object.
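A minimal sketch of that direct usage, borrowing the DocumentStore.For() pattern that shows up again later in this post (the connection string is just a local development placeholder):

    using Polecat;

    // Build a DocumentStore directly, with no IHost or IoC container involved
    var store = DocumentStore.For(opts =>
    {
        opts.Connection("Server=localhost;Database=myapp;User Id=sa;Password=YourStrong!Password;TrustServerCertificate=True");
    });

    // Sessions work the same way as with the IoC-registered store
    await using var session = store.LightweightSession();

This can be handy for console utilities or tests where spinning up a full host would be overkill.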

Next, let’s say you’ve got this simplistic document type (entity in Polecat parlance):

    public class User
    {
        public Guid Id { get; set; }
        public required string FirstName { get; set; }
        public required string LastName { get; set; }
        public bool Internal { get; set; }
    }

And now, let’s use Polecat within some Minimal API endpoints to capture and query User documents:

// Store a document
app.MapPost("/user", async (CreateUserRequest create, IDocumentSession session) =>
{
var user = new User
{
FirstName = create.FirstName,
LastName = create.LastName,
Internal = create.Internal
};
session.Store(user);
await session.SaveChangesAsync();
});
// Query with LINQ
app.MapGet("/users", async (bool internalOnly, IDocumentSession session, CancellationToken ct) =>
{
return await session.Query<User>()
.Where(x => x.Internal == internalOnly)
.ToListAsync(ct);
});
// Load by ID
app.MapGet("/user/{id:guid}", async (Guid id, IQuerySession session, CancellationToken ct) =>
{
return await session.LoadAsync<User>(id, ct);
});

For folks used to EF Core, I should point out that Polecat has its own “it just works” database migration subsystem that in the default development mode will happily make sure that all necessary database tables, views, and functions are exactly as they should be at runtime so you don’t have to fiddle with database migrations when all you want to do is just get things done.
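As a sketch of what that configuration knob might look like — this assumes Polecat mirrors Marten’s AutoCreateSchemaObjects setting and AutoCreate enum, so check the Polecat documentation for the actual property name:

    builder.Services.AddPolecat(options =>
    {
        options.Connection("Server=localhost;Database=myapp;User Id=sa;Password=YourStrong!Password;TrustServerCertificate=True");

        // Assumption: mirroring Marten's schema management configuration.
        // In development, let the tool create or patch any missing tables,
        // views, and functions on demand at runtime
        options.AutoCreateSchemaObjects = AutoCreate.All;
    });

In production you would typically lock this down and apply migrations deliberately instead.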

While I initially thought that we’d mainly focus on the event sourcing support, we were also able to recreate the vast majority of Marten’s document database capabilities (including the “partial update” model, LINQ support, soft deletes, multi-tenancy, and batch updates for starters) in case you’re only interested in that feature set by itself.

Moving over to event sourcing instead, let’s say you’re into fantasy books like I am and you want to build a system to model the journeys and adventures of a quest in your favorite fantasy series. You might model some of the events in that system like:

    public record QuestStarted(string Name);
    public record MembersJoined(string Location, string[] Members);
    public record MembersDeparted(string Location, string[] Members);
    public record QuestEnded(string Name);

And you model the current state of the quest party like this:

    public class QuestParty
    {
        public Guid Id { get; set; }
        public string Name { get; set; } = "";
        public List<string> Members { get; set; } = new();

        public void Apply(QuestStarted started)
        {
            Name = started.Name;
        }

        public void Apply(MembersJoined joined)
        {
            Members.AddRange(joined.Members);
        }

        public void Apply(MembersDeparted departed)
        {
            foreach (var member in departed.Members)
                Members.Remove(member);
        }
    }

The step above isn’t strictly necessary for event sourcing, but you usually need a projection of some sort sooner or later.

And finally, we can add events by starting a new event stream:

    var store = DocumentStore.For(opts =>
    {
        opts.Connection("Server=localhost,1433;Database=myapp;User Id=sa;Password=YourStrong!Password;TrustServerCertificate=True");
    });

    await using var session = store.LightweightSession();

    // Start a new stream with initial events
    var questId = session.Events.StartStream<QuestParty>(
        new QuestStarted("Destroy the Ring"),
        new MembersJoined("Rivendell", ["Frodo", "Sam", "Aragorn", "Gandalf"])
    );

    await session.SaveChangesAsync();

And even append some new ones to the same stream later:

    await using var session = store.LightweightSession();

    session.Events.Append(questId,
        new MembersJoined("Moria", ["Gimli", "Legolas"]),
        new MembersDeparted("Moria", ["Gandalf"])
    );

    await session.SaveChangesAsync();

And derive the current state of our quest:

    var party = await session.Events.AggregateStreamAsync<QuestParty>(questId);

    // party.Name == "Destroy the Ring"
    // party.Members == ["Frodo", "Sam", "Aragorn", "Gimli", "Legolas"]

And there’s much, much more of course, including everything you’d need to build real systems based on our 10 years and counting supporting Marten with PostgreSQL.

How is Polecat Different than Marten?

There are of course some differences besides just the database engine:

  • Polecat is using source generators instead of the runtime code generation that Marten does today
  • Polecat will only support System.Text.Json for now as a serialization engine
  • Polecat only supports the “Quick Append” option from Marten
  • There is no automatic dirty checking
  • No “duplicate fields” support so far, we’re going to reevaluate that though
  • Plenty of other features I consider technical baggage and flat out didn’t want to keep supporting in Marten didn’t make the cut, but I can’t imagine anyone will miss any of that!

Summary

For over a decade people have been telling me that Marten would be more successful and adopted by more .NET shops if it only supported SQL Server in addition to or instead of PostgreSQL. While I’ve never really disagreed with that idea — and it’s impossible to really prove the counter factual anyway — there have always been real blockers in both SQL Server’s JSON support lagging far behind PostgreSQL and frankly the time commitment on my part to be able to attempt that work in the first place.

So what changed to enable this?

  1. SQL Server 2025 added much better JSON support rivaling PostgreSQL’s JSONB type
  2. We had already invested in pulling the basic event abstractions and projection support out of Marten and into a common library called JasperFx.Events as part of the Marten 8.0 release cycle and that work was always meant to be an enabler for what is now Polecat
  3. Claude & Opus 4.5/4.6 turned out to be very, very good at grunt work

That second item had, up to this point, felt like a near disaster in my mind because of how much work and time it took compared to the benefits, and it was the single most time-consuming part of Polecat development. Let’s just say that I’m very relieved that that effort didn’t turn out to be a very expensive sunk cost for JasperFx!

I have no earthly idea how much traction Polecat will really get, but we’ve already had some interest from folks who have wanted to use Marten, but couldn’t get their .NET shop to adopt PostgreSQL. I’m hopeful!

Critter Stack Roadmap for March 2026

It’s only been a month since I wrote an update on the Critter Stack roadmap, but it’s maybe worth some time on my part to update what I think the roadmap is now. The biggest change is the utter dominance of AI in the software development discourse and the fact that Claude usage has allowed us to chew through a shocking amount of backlog in the past 6 weeks. That’s probably also changed my own thinking about what should come next throughout this year.

First, some updates on what’s been added to the Critter Stack in just the last month:

By the time you read this, we may very well have Polecat 1.0 out as well.

Short Term

The short term priority for myself and JasperFx Software is to deliver the CritterWatch MVP in a usable form by the end of March.

Marten, Wolverine, and even Polecat have no major new features planned for the short term and I think they will only get tactical releases for bug fixes and JasperFx client requests for a little while. And let me tell you, it feels *weird* to say that, but we’ve blown through a tremendous amount of the backlog so far in 2026.

Medium Term

  • Enhance CritterWatch until it’s the best in class monitoring tool for asynchronous messaging and event sourcing. Part of that will probably be adding quite a bit more functionality for development time as well.
  • For a JasperFx Software client, we’re doing PoC work on scaling Marten to be able to handle having several hundred billion events in a single system. I’m going to assume that this PoC will probably lead to enhancements in both Marten and Wolverine!
  • We’ll finally add some direct support to Marten for the PostGIS PostgreSQL extension
  • I’m a little curious to try to use the hstore extension with Marten as a possible way to optimize our new DCB support
  • Play with Pgvector and TimescaleDb in combination with Marten as some kind of vague “how can we say that Marten is even more awesome for AI?”
  • There’s going to be a new wave of releases later this year for Marten 9.0, Wolverine 6.0, and Polecat 2.0 that will mostly be about performance optimizations, especially finding ways to optimize the cold start time of applications using these tools.
  • Babu and I (really all Babu so far) are going to be building a set of AI skills for using the Critter Stack tools that will be curated in a GitHub repository and available to JasperFx Software clients. I do not know what the full impact of AI tools is really going to be on software development, but I personally want to plan for the worst case that AI tools plus LLM-friendly documentation drastically reduce the demand for consulting, and try to belatedly pivot JasperFx Software to being at least partially a product company.
  • Build tooling for spec driven development using the Critter Stack. I don’t have any details beyond “hey, wouldn’t that be cool?”. My initial thought is to play with Gherkin specifications that generate “best practices” Critter Stack code with the accompanying automated tests to boot.
  • One way or another, we’ll be building MCP support into the Critter Stack, but again, I don’t know anything more than “hey, wouldn’t that be cool?”

Long Term

Profit?

I’m playing with the idea of completely rebooting Storyteller as a new spec driven development tool. I have the Nuget rights to the “Storyteller” name and graphics from Khalid (a necessary requirement for any successful effort on my part), and I’ve always wanted to go back to it some day.

Re-Sequencer and Global Message Partitioning in Wolverine

Last week I helped a JasperFx Software client with a use case where they get a steady stream of related events from an upstream system into a downstream system where order of processing is important, but the messages might arrive out of order.

Once again referring to the venerable Enterprise Integration Patterns book, that scenario requires a Resequencer:

How can we get a stream of related but out-of-sequence messages back into the correct order?

EIP ReSequencer

To solve the message ordering challenge, we introduced the new Resequencer Saga feature into Wolverine, and combined that with the existing “Partitioned Sequential Messaging” feature.

For the new built-in re-sequencing, you do need to implement this interface on every message type in the related stream so that Wolverine “knows” where each message falls within its sequence:

public interface SequencedMessage
{
    int? Order { get; }
}

The next step is to use a special kind of new Wolverine Saga called ResequencerSaga<T>, where the T is just some sort of common interface for all the message types that are part of this ordered stream and also implements the SequencedMessage shown above. Here’s a simple example I used for the testing:

public record StartMyWorkflow(Guid Id);

public record MySequencedCommand(Guid SagaId, int? Order) : SequencedMessage;

public class MyWorkflowSaga : ResequencerSaga<MySequencedCommand>
{
    public Guid Id { get; set; }

    public static MyWorkflowSaga Start(StartMyWorkflow cmd)
    {
        return new MyWorkflowSaga { Id = cmd.Id };
    }

    public void Handle(MySequencedCommand cmd)
    {
        // This will only be called when messages arrive in the correct order,
        // or when out-of-order messages are replayed after gaps are filled
    }
}

At runtime, when Wolverine gets a message that is handled by that MyWorkflowSaga, there is some middleware that first compares the declared order of that message against the recorded state of the saga so far. In more concrete terms, if…

  • It’s the first message in the sequence, Wolverine just processes it as normal and records the last processed message order in the saga state so that it “knows” which message should come next
  • The message arrives ahead of its turn (there is a gap between it and the last message processed), the saga state just stores the current message, persists, and otherwise skips the normal message processing
  • The message is the next in the sequence according to the saga state, it processes normally. If the saga state already holds any previously out-of-order messages that come sequentially right after the current message, Wolverine will re-publish those messages locally; with the normal Wolverine message sequencing, these cascading messages will not go anywhere until the initiating message completes

With this mechanism, Wolverine is able to put the messages arriving from the outside world back into the correct sequential order in its own processing.
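The three cases above amount to a small buffering state machine. Here’s a minimal, language-agnostic sketch of that logic in Python; the class and method names are mine and purely illustrative, not Wolverine’s actual implementation:

```python
class ResequencerState:
    """Illustrative model of the re-sequencing middleware's buffering logic."""

    def __init__(self):
        self.last_processed = 0  # order of the last message actually handled
        self.buffered = {}       # out-of-order messages waiting, keyed by order

    def receive(self, order, message):
        """Returns the messages that may be processed now, in the right order."""
        if order <= self.last_processed:
            return []  # duplicate or stale message: ignore
        if order != self.last_processed + 1:
            self.buffered[order] = message  # gap: store state, skip processing
            return []
        # This message is next in line: process it, then drain any buffered
        # messages that immediately follow it in the sequence
        ready = [message]
        self.last_processed = order
        while self.last_processed + 1 in self.buffered:
            self.last_processed += 1
            ready.append(self.buffered.pop(self.last_processed))
        return ready
```

In the real feature this state lives in the persisted saga, and the drained messages are re-published locally rather than returned, but the ordering decision is the same.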

Of course, this processing is very stateful and vulnerable to concurrent access problems. Most of the saga storage mechanisms in Wolverine happily support optimistic concurrency around saving saga state, so you could just use selective retries on concurrency violations. Or better yet, Wolverine users can almost completely sidestep concurrency issues by utilizing our newest improvement to partitioned messaging, which we’re calling “Global Partitioning.”

Let’s say that you have a great deal of operations in your system that have to modify a resource of some sort like an entity, a file, a saga in this case, or an event stream that might be a little bit sensitive to concurrent access. Let’s also say that you have a mix of messages that impact these sensitive resources that come from both external, upstream systems and from cascaded messages within your own system.

The syntax for this next feature was added just today in Wolverine 5.28, as I realized the previous syntax was basically unusable in the course of trying to write this blog post. So it goes.

Global partitioning allows you to guarantee that messages impacting those resources are processed sequentially within a message group, while still allowing parallel processing between message groups across the entire cluster.
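The core mechanic is a stable hash from a message’s group identity to one of a fixed set of partitions. Here’s a sketch in Python of that idea; the function name and the choice of CRC32 are mine for illustration, not what Wolverine actually uses internally:

```python
import zlib


def partition_for(group_id: str, partitions: int) -> int:
    """Deterministically assign a message group to one partition.

    Because the hash is stable, every message in the same group always
    lands on the same partition (and thus the same queue and node), which
    is what makes sequential processing within a group possible while
    different groups still run in parallel."""
    return zlib.crc32(group_id.encode("utf-8")) % partitions


# Every message for the same saga identity lands on the same sharded queue,
# e.g. one of "sequenced1" through "sequenced5"
queue_name = f"sequenced{partition_for('order-42', 5) + 1}"
```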

Imagine it like this (but know I drew this diagram for someone using Kafka even though the next example is using Rabbit MQ queues):

And with this configuration:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // You'd *also* supply credentials here of course!
        opts.UseRabbitMq();

        // Do something to add Saga storage too!

        opts
            .MessagePartitioning

            // This tells Wolverine to "just" use implied
            // message grouping based on Saga identity among other things
            .UseInferredMessageGrouping()
            .GlobalPartitioned(topology =>
            {
                // Creates 5 sharded RabbitMQ queues named "sequenced1" through "sequenced5"
                // with matching companion local queues for sequential processing
                topology.UseShardedRabbitQueues("sequenced", 5);

                topology.MessagesImplementing<MySequencedCommand>();
            });
    }).StartAsync();

What this does is spread the work out for handling MySequencedCommand messages through five different Rabbit MQ + Local queue pairs, with each pair active on only one single node within your application. Even inside each local queue in this partitioning scheme, Wolverine is parallelizing between message groups.

Now, let’s talk about receiving any message that can be cast to MySequencedCommand. If the message is received at a completely different listener than the “sequenced1/2/3/4/5” queues defined above, like from an external system that knows absolutely nothing about your message partitioning, Wolverine immediately determines the message group identity by inferring it from the saga message handler rules (that’s what the UseInferredMessageGrouping() option does for us), then forwards that message to the node that is currently handling that group id. If the current node happens to be assigned that message group id, Wolverine forwards the message directly to the right local queue.

Likewise, if you publish a cascading message inside one of your handlers, Wolverine will determine the message group id for that message type, then either route that message locally if that group happens to be assigned to the current node (and it probably would be if you were cascading from your own handlers) or send it remotely to the right messaging endpoint (a Rabbit MQ queue, a Kafka topic, or maybe an AWS SQS queue).
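That routing decision can be sketched in a few lines; everything here is hypothetical naming on my part (the partition-to-node assignments, the CRC32 hash, the queue naming), meant only to show the shape of the decision, not Wolverine’s internals:

```python
import zlib


def route_message(group_id: str, partitions: int,
                  partition_owner: dict, current_node: str) -> str:
    """Illustrative routing decision for a globally partitioned message.

    partition_owner maps a partition index to the node currently assigned
    to it. If this node owns the message's partition, the message goes
    straight to the matching local queue; otherwise it is forwarded to
    the owning node's remote queue."""
    partition = zlib.crc32(group_id.encode("utf-8")) % partitions
    owner = partition_owner[partition]
    if owner == current_node:
        return f"local:sequenced{partition + 1}"
    return f"remote:{owner}:sequenced{partition + 1}"
```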

The point being, this guarantees that related messages are processed sequentially across the entire application cluster while allowing parallel processing between unrelated messages.

Summary

These are hopefully two powerful new features that will benefit Wolverine users in the near future. Both of these features were built at the behest of JasperFx Software clients to directly support their current work. I’m very happy to just quietly fold in reasonably sized new features for JasperFx support clients without extra cost when those features likely benefit the community as a whole. Contact us at sales@jasperfx.net to find out what we can do to help your software development efforts be more successful.

And just for bragging rights tonight, I did some poking around (okay, I asked Claude to do it for me) to see if any other asynchronous messaging tools offer anything similar to what our global partitioning option does for Wolverine users. While you can certainly achieve the same goals through actor frameworks like AkkaDotNet or Orleans (I consider actor frameworks to be such a different paradigm that I don’t really think of them as direct competitors to Wolverine), it doesn’t appear that there are any equivalents to this feature in the .NET space. MassTransit and NServiceBus both have more limited versions of this capability, but nothing that is as easy or flexible as what Wolverine has at this point. Now, granted, we’re at this point because Marten event stream appends can be sensitive to concurrent access, so we’ve had to take concurrency maybe a little more seriously than the pure-play asynchronous messaging tools that don’t really have an event sourcing component.