New Option for Simple Projections in Marten or Polecat

JasperFx Software is around and ready to assist you with getting the best possible results using the Critter Stack.

The projections model in Marten and now Polecat has evolved quite a bit over the past decade. Consider this simple aggregated projection of data for our QuestParty in our tests:

public class QuestParty
{
    public List<string> Members { get; set; } = new();
    public IList<string> Slayed { get; } = new List<string>();
    public string Key { get; set; }
    public string Name { get; set; }

    // In this particular case, this is also the stream id for the quest events
    public Guid Id { get; set; }

    // These methods take in events and update the QuestParty
    public void Apply(MembersJoined joined) => Members.Fill(joined.Members);
    public void Apply(MembersDeparted departed) => Members.RemoveAll(x => departed.Members.Contains(x));
    public void Apply(QuestStarted started) => Name = started.Name;

    public override string ToString()
    {
        return $"Quest party '{Name}' is {Members.Join(", ")}";
    }
}

That type is mutable, but the projection library underneath Marten and Polecat happily supports projecting to immutable types as well.

Some people actually like the conventional method approach up above with the Apply, Create, and ShouldDelete methods. From the perspective of Marten’s or Polecat’s internals, it’s always been helpful because the projection subsystem “knows” in this case that the QuestParty is only applicable to the specific event types referenced in those methods, and when you call this code:

var party = await query
    .Events
    .AggregateStreamAsync<QuestParty>(streamId);

Marten and Polecat are able to quietly use extra SQL filters to limit the events fetched from the database to only the types utilized by the projected QuestParty aggregate.

Great, right? Except that some folks don’t like the naming conventions, just prefer explicit code, or do some clever things with subclasses on events that can confuse Marten or Polecat about the precedence of the event type handlers. To that end, Marten 8.0 introduced more options for explicit code. We can move the projection logic for the QuestParty above into a completely different class:

public class QuestPartyProjection: SingleStreamProjection<QuestParty, Guid>
{
    public QuestPartyProjection()
    {
        // This is *no longer necessary* in
        // the very most recent versions of Marten,
        // but used to be just to limit Marten's
        // querying of event types when doing live
        // or async projections
        IncludeType<MembersJoined>();
        IncludeType<MembersDeparted>();
        IncludeType<QuestStarted>();
    }

    public override QuestParty Evolve(QuestParty snapshot, Guid id, IEvent e)
    {
        snapshot ??= new QuestParty{ Id = id };
        switch (e.Data)
        {
            case MembersJoined j:
                // Small helper in JasperFx that prevents
                // double values
                snapshot.Members.Fill(j.Members);
                break;
            case MembersDeparted departed:
                snapshot.Members.RemoveAll(x => departed.Members.Contains(x));
                break;
        }

        return snapshot;
    }
}
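The Fill() helper referenced in the comments above lives in JasperFx's shared core library. Its behavior amounts to "add each value only if it isn't already there," which keeps a replayed MembersJoined event from duplicating members. Here's a sketch of those semantics (illustrative only, not the actual JasperFx implementation):

```csharp
using System;
using System.Collections.Generic;

public static class ListExtensions
{
    // Sketch of Fill() semantics: add each value only if the list
    // doesn't already contain it, so applying the same event twice
    // can't produce duplicate entries
    public static void Fill<T>(this IList<T> list, IEnumerable<T> values)
    {
        foreach (var value in values)
        {
            if (!list.Contains(value)) list.Add(value);
        }
    }
}

public static class FillDemo
{
    public static void Main()
    {
        var members = new List<string> { "Frodo", "Sam" };
        members.Fill(new[] { "Sam", "Aragorn" });
        Console.WriteLine(string.Join(", ", members)); // Frodo, Sam, Aragorn
    }
}
```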

There are several more hooks in that SingleStreamProjection base type, like versioning or fine-grained control over asynchronous projection behavior, that might be valuable later. For now though, let’s look at a new feature in Marten and Polecat that lets you use explicit code right in the aggregate type itself:

public class QuestParty
{
    public List<string> Members { get; set; } = new();
    public IList<string> Slayed { get; } = new List<string>();
    public string Key { get; set; }
    public string Name { get; set; }

    // In this particular case, this is also the stream id for the quest events
    public Guid Id { get; set; }

    public void Evolve(IEvent e)
    {
        switch (e.Data)
        {
            case QuestStarted _:
                // A little goofy, but this lets Marten know that
                // the projection cares about that event type
                break;
            case MembersJoined j:
                // Small helper in JasperFx that prevents
                // double values
                Members.Fill(j.Members);
                break;
            case MembersDeparted departed:
                Members.RemoveAll(x => departed.Members.Contains(x));
                break;
        }
    }

    public override string ToString()
    {
        return $"Quest party '{Name}' is {Members.Join(", ")}";
    }
}

This is admittedly yet another convention in terms of the method name and its possible arguments, but hopefully the switch statement approach is much more explicit for folks who prefer that. As an additional bonus, a source generator lets Marten automatically register the event types used by the version of QuestParty just above, so we get all the benefits of the event filtering without making users do any extra explicit configuration.

Projecting to Immutable Views

Just for completeness, let’s look at alternative versions of QuestParty to see what it looks like if you make the aggregate an immutable type. First up is the conventional method approach:

public sealed record QuestParty(Guid Id, List<string> Members)
{
    // These methods take in events and update the QuestParty
    public static QuestParty Create(QuestStarted started) => new(started.QuestId, []);

    public static QuestParty Apply(MembersJoined joined, QuestParty party) =>
        party with
        {
            Members = party.Members.Union(joined.Members).ToList()
        };

    public static QuestParty Apply(MembersDeparted departed, QuestParty party) =>
        party with
        {
            Members = party.Members.Where(x => !departed.Members.Contains(x)).ToList()
        };

    public static QuestParty Apply(MembersEscaped escaped, QuestParty party) =>
        party with
        {
            Members = party.Members.Where(x => !escaped.Members.Contains(x)).ToList()
        };
}

And with the Evolve approach:

public sealed record QuestParty(Guid Id, List<string> Members)
{
    public static QuestParty Evolve(QuestParty? party, IEvent e)
    {
        switch (e.Data)
        {
            case QuestStarted s:
                return new(s.QuestId, []);
            case MembersJoined joined:
                return party! with
                {
                    Members = party.Members.Union(joined.Members).ToList()
                };
            case MembersDeparted departed:
                return party! with
                {
                    Members = party.Members.Where(x => !departed.Members.Contains(x)).ToList()
                };
            case MembersEscaped escaped:
                return party! with
                {
                    Members = party.Members.Where(x => !escaped.Members.Contains(x)).ToList()
                };
        }

        return party;
    }
}

Summary

What do I recommend? Honestly, just whatever you prefer. This is a case where I’d like everyone to be happy with one of the available options. And yes, it’s not always good that there is more than one way to do the same thing in a framework, but I think we’re going to just keep all these options in the long run. It wasn’t shown here at all, but I think we’ll kill off the early options to define projections through a ton of inline Lambda functions within a fluent interface. That stuff can just die.

In the medium and longer term, we’re going to be utilizing more source generators across the entire Critter Stack as a way of both eliminating some explicit configuration requirements and to optimize our cold start times. I’m looking forward to getting much more into that work.

CQRS and Event Sourcing with Polecat and SQL Server

If you’re already familiar with Marten and Wolverine, this is all old news except for the part where we’re using SQL Server. If you’re brand new to the “Critter Stack,” Event Sourcing, or CQRS, hang around! And just so you know, JasperFx Software is completely ready to support our clients using Polecat.

All of the sample code in this blog post can be found in the Wolverine codebase on GitHub here.

With the advent of Polecat going 1.0 last week, you now have a robust solution for Event Sourcing using SQL Server 2025 as the backing store. If you’re reading this, you’re surely involved in software development, and that means your job at some point has been dictated by some kind of issue tracking tool. So let’s use that as our example and pretend we’re creating an incident tracking system for our help desk folks.

To get started, I’m a fan of using the Event Storming technique to identify some of the meaningful events we should capture in our system and to start identifying possible commands within our system.

Having at least some initial thoughts about the shape of our system, let’s start a new web service project in .NET with:

dotnet new webapi

Then add both Polecat (for persistence) and Wolverine (for both HTTP endpoints and asynchronous messaging) with:

dotnet add package WolverineFx.Polecat
dotnet add package WolverineFx.Http

And now, let’s jump into our Program file to wire up Polecat to an existing SQL Server database and configure Wolverine as well:

using Polecat;
using Polecat.Projections;
using PolecatIncidentService;
using Wolverine;
using Wolverine.Http;
using Wolverine.Polecat;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenApi();

builder.Services.AddPolecat(opts =>
    {
        var connectionString = builder.Configuration.GetConnectionString("SqlServer")
                               ?? "Server=localhost,1434;User Id=sa;Password=P@55w0rd;Timeout=5;MultipleActiveResultSets=True;Initial Catalog=master;Encrypt=False";

        opts.ConnectionString = connectionString;
        opts.DatabaseSchemaName = "incidents";

        // We'll talk about this soon...
        opts.Projections.Snapshot<Incident>(SnapshotLifecycle.Inline);
    })

    // For Marten users, *this* is the default for Polecat!
    //.UseLightweightSessions()
    .IntegrateWithWolverine(x => x.UseWolverineManagedEventSubscriptionDistribution = true);

builder.Host.UseWolverine(opts => { opts.Policies.AutoApplyTransactions(); });

builder.Services.AddWolverineHttp();

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.MapOpenApi();
}

// Adding Wolverine.HTTP
app.MapWolverineEndpoints();

// This gets you a lot of CLI goodness from the
// greater JasperFx / Critter Stack ecosystem
// and will soon feed quite a bit of AI assisted development as well
return await app.RunJasperFxCommands(args);

// For test bootstrapping in case you want to work w/
// more than one system at a time
public partial class Program
{
}

Our commands are just going to be some immutable records like this:

public record LogIncident(
    Guid CustomerId,
    Contact Contact,
    string Description,
    Guid LoggedBy
);

public record CategoriseIncident(
    IncidentCategory Category,
    Guid CategorisedBy,
    int Version
);

public record CloseIncident(
    Guid ClosedBy,
    int Version
);

It’s not mandatory to use immutable types for commands and events, but it’s idiomatic, so you might as well.

Let’s start with our LogIncident use case and build out an HTTP endpoint that creates a new “event stream” for events related to a single, logical Incident:

public static class LogIncidentEndpoint
{
    [WolverinePost("/api/incidents")]
    public static (CreationResponse<Guid>, IStartStream) Post(LogIncident command)
    {
        var (customerId, contact, description, loggedBy) = command;
        var logged = new IncidentLogged(customerId, contact, description, loggedBy);
        var start = PolecatOps.StartStream<Incident>(logged);

        var response = new CreationResponse<Guid>("/api/incidents/" + start.StreamId, start.StreamId);

        return (response, start);
    }
}

Polecat does support “Dynamic Consistency Boundary” event sourcing as well, but that’s not where I think most people should start, and I’ll get to that in a later post I keep putting off…

With some help from Alba, another JasperFx supported library, we can write both unit tests for the business logic (such as it is) and do an end to end test through the HTTP endpoint like this:

public class when_logging_an_incident : IntegrationContext
{
    public when_logging_an_incident(AppFixture fixture) : base(fixture)
    {
    }

    [Fact]
    public void unit_test()
    {
        var contact = new Contact(ContactChannel.Email);
        var command = new LogIncident(Guid.NewGuid(), contact, "It's broken", Guid.NewGuid());

        // Pure function FTW!
        var (response, startStream) = LogIncidentEndpoint.Post(command);

        // Should only have the one event
        startStream.Events.ShouldBe([
            new IncidentLogged(command.CustomerId, command.Contact, command.Description, command.LoggedBy)
        ]);
    }

    [Fact]
    public async Task happy_path_end_to_end()
    {
        var contact = new Contact(ContactChannel.Email);
        var command = new LogIncident(Guid.NewGuid(), contact, "It's broken", Guid.NewGuid());

        // Log a new incident first
        var initial = await Scenario(x =>
        {
            x.Post.Json(command).ToUrl("/api/incidents");
            x.StatusCodeShouldBe(201);
        });

        // Read the response body by deserialization
        var response = initial.ReadAsJson<CreationResponse<Guid>>();

        // Reaching into Polecat to build the current state of the new Incident
        await using var session = Store.LightweightSession();
        var incident = await session.Events.FetchLatest<Incident>(response.Value);

        incident!.Status.ShouldBe(IncidentStatus.Pending);
    }
}

Now, to build out a command handler for categorizing an incident, we’ll need to:

  1. Know the current state of the logical Incident by rolling up the events into some kind of representation of the state so that we can “decide” which, if any, events should be appended at this time. In Event Sourcing terms, I’d refer to this as the “write model.”
  2. Define the command type itself
  3. Validate the input
  4. Like I said earlier, decide which events should be appended
  5. Do some metadata correlation for observability. It’s not obvious from the code, but in the sample below Wolverine & Polecat are tracking the events captured against the correlation id of the current HTTP request
  6. Establish transactional boundaries, including any outbound messaging that might happen in response to the events being appended. This is something that Wolverine does for Polecat (and Marten) in command handlers, and it includes Wolverine’s transactional outbox support
  7. Protect against concurrent writes to any given Incident stream, which Wolverine and Polecat do for you in the next endpoint by applying optimistic concurrency checks to guarantee that no other thread changed the Incident since this CategoriseIncident command was issued by the caller

That’s actually quite a bit of responsibility for the command handler, but not to worry, Wolverine and Polecat are going to keep your code nice and simple, hopefully even a pure function “Decider” for the business logic in many cases. Before I get into the command handler, here’s the “projection” that gives us the current state of the Incident by applying events:

public class Incident
{
    public Guid Id { get; set; }

    // Polecat will set this itself for optimistic concurrency
    public int Version { get; set; }

    public IncidentStatus Status { get; set; } = IncidentStatus.Pending;
    public IncidentCategory? Category { get; set; }
    public bool HasOutstandingResponseToCustomer { get; set; } = false;

    public Incident()
    {
    }

    public void Apply(IncidentLogged _) { }
    public void Apply(IncidentCategorised e) => Category = e.Category;
    public void Apply(AgentRespondedToIncident _) => HasOutstandingResponseToCustomer = false;
    public void Apply(CustomerRespondedToIncident _) => HasOutstandingResponseToCustomer = true;
    public void Apply(IncidentResolved _) => Status = IncidentStatus.Resolved;
    public void Apply(ResolutionAcknowledgedByCustomer _) => Status = IncidentStatus.ResolutionAcknowledgedByCustomer;
    public void Apply(IncidentClosed _) => Status = IncidentStatus.Closed;

    public bool ShouldDelete(Archived @event) => true;
}
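Purely as an illustration of how those Apply methods fold events into the write model, here's the same status state machine exercised by hand outside of Polecat. The event records and enum here are sketched from this post, not copied from the sample application:

```csharp
using System;

// Sketched from the post's Incident projection; not the sample app's types
public enum IncidentStatus { Pending, Resolved, ResolutionAcknowledgedByCustomer, Closed }

public record IncidentResolved;
public record ResolutionAcknowledgedByCustomer;
public record IncidentClosed;

public class Incident
{
    public IncidentStatus Status { get; set; } = IncidentStatus.Pending;

    public void Apply(IncidentResolved _) => Status = IncidentStatus.Resolved;
    public void Apply(ResolutionAcknowledgedByCustomer _) => Status = IncidentStatus.ResolutionAcknowledgedByCustomer;
    public void Apply(IncidentClosed _) => Status = IncidentStatus.Closed;
}

public static class IncidentDemo
{
    public static void Main()
    {
        // Fold a short event history into the current state
        var incident = new Incident();
        incident.Apply(new IncidentResolved());
        incident.Apply(new ResolutionAcknowledgedByCustomer());
        incident.Apply(new IncidentClosed());

        Console.WriteLine(incident.Status); // Closed
    }
}
```

At runtime, Polecat does exactly this kind of fold for you, either inline at append time (the SnapshotLifecycle.Inline option from the Program file) or on demand.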

And finally, the command handler:

public record CategoriseIncident(
    IncidentCategory Category,
    Guid CategorisedBy,
    int Version
);

public static class CategoriseIncidentEndpoint
{
    public static ProblemDetails Validate(Incident incident)
    {
        return incident.Status == IncidentStatus.Closed
            ? new ProblemDetails { Detail = "Incident is already closed" }
            : WolverineContinue.NoProblems;
    }

    [EmptyResponse]
    [WolverinePost("/api/incidents/{incidentId:guid}/category")]
    public static IncidentCategorised Post(
        CategoriseIncident command,
        [Aggregate("incidentId")] Incident incident)
    {
        return new IncidentCategorised(incident.Id, command.Category, command.CategorisedBy);
    }
}

And I admit that that’s a lot of code thrown at you all at once, and maybe even a lot of new concepts.

Announcing Polecat: Event Sourcing with SQL Server

Polecat is now completely supported by JasperFx Software and automatically part of any existing and future support agreements through our existing plans.

Polecat was released as 1.0 this past week (with 1.1 and now 1.2 coming soon). Let’s call it what it is: Polecat is a port of (most of) Marten to target SQL Server 2025 and SQL Server’s new JSON data type. For folks not familiar with Marten, Polecat is, in one library:

  1. A very full-fledged Event Store library for SQL Server that includes event projections and subscriptions, Dynamic Consistency Boundary support, a large amount of functionality for Event Sourcing basics, rich event metadata tracking capabilities, and even rich multi-tenancy support.
  2. A feature rich set of Document Database capabilities backed by SQL Server including LINQ querying support

And while Polecat is brand spanking new, it comes out of the gate with the decade old Marten pedigree and its own Wolverine integration for CQRS usage. I’m confident in saying Polecat is now the best technical option for using Event Sourcing with SQL Server in the .NET ecosystem.

And of course, if you’re a shop with deep existing roots into EF Core usage, Polecat also comes with projection support to EF Core, so Polecat can happily coexist with EF Core in the same systems.

Alright, let’s jump into a quick start. First, let’s say you’ve started a brand new .NET project through dotnet new webapi and you’ve added a reference to Polecat through NuGet (and you have a running SQL Server 2025 instance handy too, of course!). Next, let’s start with the inevitable AddPolecat() usage in your Program file:

builder.Services.AddPolecat(options =>
{
    // Connection string to your SQL Server 2025 database
    options.Connection("Server=localhost;Database=myapp;User Id=sa;Password=YourStrong!Password;TrustServerCertificate=True");

    // Optionally change the default schema (default is "dbo")
    options.DatabaseSchemaName = "myschema";
});

Polecat can be used without IHost or IServiceCollection registrations by just directly building a DocumentStore object.

Next, let’s say you’ve got this simplistic document type (entity in Polecat parlance):

public class User
{
    public Guid Id { get; set; }
    public required string FirstName { get; set; }
    public required string LastName { get; set; }
    public bool Internal { get; set; }
}

And now, let’s use Polecat within some Minimal API endpoints to capture and query User documents:

// Store a document
app.MapPost("/user", async (CreateUserRequest create, IDocumentSession session) =>
{
var user = new User
{
FirstName = create.FirstName,
LastName = create.LastName,
Internal = create.Internal
};
session.Store(user);
await session.SaveChangesAsync();
});
// Query with LINQ
app.MapGet("/users", async (bool internalOnly, IDocumentSession session, CancellationToken ct) =>
{
return await session.Query<User>()
.Where(x => x.Internal == internalOnly)
.ToListAsync(ct);
});
// Load by ID
app.MapGet("/user/{id:guid}", async (Guid id, IQuerySession session, CancellationToken ct) =>
{
return await session.LoadAsync<User>(id, ct);
});

For folks used to EF Core, I should point out that Polecat has its own “it just works” database migration subsystem that, in the default development mode, will happily make sure all necessary database tables, views, and functions are exactly as they should be at runtime, so you don’t have to fiddle with database migrations when all you want to do is get things done.

While I initially thought that we’d mainly focus on the event sourcing support, we were also able to recreate the vast majority of Marten’s document database capabilities (including the “partial update” model, LINQ support, soft deletes, multi-tenancy, and batch updates for starters) in case you’re only interested in that feature set by itself.

Moving over to event sourcing instead, let’s say you’re into fantasy books like I am and you want to build a system to model the journeys and adventures of a quest in your favorite fantasy series. You might model some of the events in that system like:

public record QuestStarted(string Name);
public record MembersJoined(string Location, string[] Members);
public record MembersDeparted(string Location, string[] Members);
public record QuestEnded(string Name);

And you model the current state of the quest party like this:

public class QuestParty
{
    public Guid Id { get; set; }
    public string Name { get; set; } = "";
    public List<string> Members { get; set; } = new();

    public void Apply(QuestStarted started)
    {
        Name = started.Name;
    }

    public void Apply(MembersJoined joined)
    {
        Members.AddRange(joined.Members);
    }

    public void Apply(MembersDeparted departed)
    {
        foreach (var member in departed.Members)
            Members.Remove(member);
    }
}

The step above isn’t strictly necessary for event sourcing, but you usually need a projection of some sort sooner or later.

And finally, we can add events by starting a new event stream:

var store = DocumentStore.For(opts =>
{
    opts.Connection("Server=localhost,1433;Database=myapp;User Id=sa;Password=YourStrong!Password;TrustServerCertificate=True");
});

await using var session = store.LightweightSession();

// Start a new stream with initial events, and grab the newly
// assigned stream id off of the returned stream action
var questId = session.Events.StartStream<QuestParty>(
    new QuestStarted("Destroy the Ring"),
    new MembersJoined("Rivendell", ["Frodo", "Sam", "Aragorn", "Gandalf"])
).Id;

await session.SaveChangesAsync();

And even append some new ones to the same stream later:

await using var session = store.LightweightSession();

session.Events.Append(questId,
    new MembersJoined("Moria", ["Gimli", "Legolas"]),
    new MembersDeparted("Moria", ["Gandalf"])
);

await session.SaveChangesAsync();

And derive the current state of our quest:

var party = await session.Events.AggregateStreamAsync<QuestParty>(questId);
// party.Name == "Destroy the Ring"
// party.Members == ["Frodo", "Sam", "Aragorn", "Gimli", "Legolas"]
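You can reproduce that same fold by hand with plain C#, using the event records and QuestParty class from above, which makes the expected end state easy to verify:

```csharp
using System;
using System.Collections.Generic;

// Event records and QuestParty restated from the post so this runs standalone
public record QuestStarted(string Name);
public record MembersJoined(string Location, string[] Members);
public record MembersDeparted(string Location, string[] Members);

public class QuestParty
{
    public string Name { get; set; } = "";
    public List<string> Members { get; set; } = new();

    public void Apply(QuestStarted started) => Name = started.Name;
    public void Apply(MembersJoined joined) => Members.AddRange(joined.Members);
    public void Apply(MembersDeparted departed)
    {
        foreach (var member in departed.Members) Members.Remove(member);
    }
}

public static class QuestDemo
{
    public static void Main()
    {
        // Apply the same events that AggregateStreamAsync() would fetch, in order
        var party = new QuestParty();
        party.Apply(new QuestStarted("Destroy the Ring"));
        party.Apply(new MembersJoined("Rivendell", new[] { "Frodo", "Sam", "Aragorn", "Gandalf" }));
        party.Apply(new MembersJoined("Moria", new[] { "Gimli", "Legolas" }));
        party.Apply(new MembersDeparted("Moria", new[] { "Gandalf" }));

        Console.WriteLine(party.Name);                       // Destroy the Ring
        Console.WriteLine(string.Join(", ", party.Members)); // Frodo, Sam, Aragorn, Gimli, Legolas
    }
}
```

AggregateStreamAsync() is doing exactly this in "live" mode: fetch the events for the stream and fold them through the Apply methods.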

And there’s much, much more of course, including everything you’d need to build real systems based on our 10 years and counting supporting Marten with PostgreSQL.

How is Polecat Different than Marten?

There are of course some differences besides just the database engine:

  • Polecat is using source generators instead of the runtime code generation that Marten does today
  • Polecat will only support System.Text.Json for now as a serialization engine
  • Polecat only supports the “Quick Append” option from Marten
  • There is no automatic dirty checking
  • No “duplicate fields” support so far, we’re going to reevaluate that though
  • Plenty of other legacy features that I flat out didn’t want to keep supporting from Marten didn’t make the cut, but I can’t imagine anyone will miss any of that!

Summary

For over a decade people have been telling me that Marten would be more successful and adopted by more .NET shops if only it supported SQL Server in addition to, or instead of, PostgreSQL. While I’ve never really disagreed with that idea (and it’s impossible to really prove the counterfactual anyway), there have always been real blockers: SQL Server’s JSON support lagging far behind PostgreSQL’s, and frankly, the time commitment on my part to be able to attempt that work in the first place.

So what changed to enable this?

  1. SQL Server 2025 added much better JSON support rivaling PostgreSQL’s JSONB type
  2. We had already invested in pulling the basic event abstractions and projection support out of Marten and into a common library called JasperFx.Events as part of the Marten 8.0 release cycle and that work was always meant to be an enabler for what is now Polecat
  3. Claude & Opus 4.5/4.6 turned out to be very, very good at grunt work

That second item had, up to this point, felt like a near disaster in my mind because of how much work and time it took compared to the benefits, and it was the single most time-consuming part of Polecat’s development. Let’s just say that I’m very relieved that that effort didn’t turn out to be a very expensive sunk cost for JasperFx!

I have no earthly idea how much traction Polecat will really get, but we’ve already had some interest from folks who have wanted to use Marten, but couldn’t get their .NET shop to adopt PostgreSQL. I’m hopeful!

Critter Stack Roadmap for March 2026

It’s only been a month since I wrote an update on the Critter Stack roadmap, but it’s maybe worth some time on my part to update what I think the roadmap is now. The biggest change is the utter dominance of AI in the software development discourse and the fact that Claude usage has allowed us to chew through a shocking amount of backlog in the past six weeks. That’s probably also changed my own thinking about what should be next throughout this year.

First, some updates on what’s been added to the Critter Stack in just the last month:

By the time you read this, we may very well have Polecat 1.0 out as well.

Short Term

The short term priority for myself and JasperFx Software is to deliver the CritterWatch MVP in a usable form by the end of March.

Marten, Wolverine, and even Polecat have no major new features planned for the short term and I think they will only get tactical releases for bug fixes and JasperFx client requests for a little while. And let me tell you, it feels *weird* to say that, but we’ve blown through a tremendous amount of the backlog so far in 2026.

Medium Term

  • Enhance CritterWatch until it’s the best in class monitoring tool for asynchronous messaging and event sourcing. Part of that will probably be adding quite a bit more functionality for development time as well.
  • For a JasperFx Software client, we’re doing PoC work on scaling Marten to be able to handle having several hundred billion events in a single system. I’m going to assume that this PoC will probably lead to enhancements in both Marten and Wolverine!
  • We’ll finally add some direct support to Marten for the PostGIS PostgreSQL extension
  • I’m a little curious to try to use the hstore extension with Marten as a possible way to optimize our new DCB support
  • Play with Pgvector and TimescaleDb in combination with Marten as some kind of vague “how can we say that Marten is even more awesome for AI?”
  • There’s going to be a new wave of releases later this year for Marten 9.0, Wolverine 6.0, and Polecat 2.0 that will mostly be about performance optimizations, especially finding ways to optimize the cold start time of applications using these tools.
  • Babu and I (really all Babu so far) are going to be building a set of AI skills for using the Critter Stack tools that will be curated in a GitHub repository and available to JasperFx Software clients. I do not know what the full impact of AI tools is really going to be on software development, but I personally want to plan for the worst case that AI tools plus LLM-friendly documentation drastically reduce the demand for consulting, and try to belatedly pivot JasperFx Software to being at least partially a product company.
  • Build tooling for spec driven development using the Critter Stack. I don’t have any details beyond “hey, wouldn’t that be cool?”. My initial thought is to play with Gherkin specifications that generate “best practices” Critter Stack code with the accompanying automated tests to boot.
  • One way or another, we’ll be building MCP support into the Critter Stack, but again, I don’t know anything more than “hey, wouldn’t that be cool?”

Long Term

Profit?

I’m playing with the idea of completely rebooting Storyteller as a new spec driven development tool. I have the Nuget rights to the “Storyteller” name and graphics from Khalid (a necessary requirement for any successful effort on my part), and I’ve always wanted to go back to it some day.

Re-Sequencer and Global Message Partitioning in Wolverine

Last week I helped a JasperFx Software client with a use case where they get a steady stream of related events from an upstream system into a downstream system where order of processing is important, but the messages might arrive out of order.

Once again referring to the venerable Enterprise Integration Patterns book, that scenario requires a Resequencer:

How can we get a stream of related but out-of-sequence messages back into the correct order?

EIP ReSequencer

To solve the message ordering challenge, we introduced the new Resequencer Saga feature into Wolverine, and combined that with the existing “Partitioned Sequential Messaging” feature.

For the new built-in re-sequencing, we do need you to implement this interface on any message types in that related stream so that Wolverine “knows” where each message falls within the sequence:

public interface SequencedMessage
{
    int? Order { get; }
}

The next step is to use a special new kind of Wolverine Saga called ResequencerSaga<T>, where T is a common type for all the message types that are part of this ordered stream, and that type implements the SequencedMessage interface shown above. Here’s a simple example I used for the testing:

public record StartMyWorkflow(Guid Id);
public record MySequencedCommand(Guid SagaId, int? Order) : SequencedMessage;

public class MyWorkflowSaga : ResequencerSaga<MySequencedCommand>
{
    public Guid Id { get; set; }

    public static MyWorkflowSaga Start(StartMyWorkflow cmd)
    {
        return new MyWorkflowSaga { Id = cmd.Id };
    }

    public void Handle(MySequencedCommand cmd)
    {
        // This will only be called when messages arrive in the correct order,
        // or when out-of-order messages are replayed after gaps are filled
    }
}

At runtime, when Wolverine gets a message that is handled by that MyWorkflowSaga, there is some middleware that first compares the declared order of that message against the recorded state of the saga so far. In more concrete terms, if…

  • It’s the first message in the sequence, Wolverine just processes it as normal and records the last processed message order in the saga state so that it “knows” which message should come next
  • Its order is further ahead than the next expected message, the saga will just store the current message, persist the saga state, and otherwise skip the normal message processing
  • The message is the next in the sequence according to what the saga state says should be processed next, it processes normally. If the saga state already knows about any previously out-of-order messages that are sequentially next after the current message, Wolverine will re-publish those messages locally, but with the normal Wolverine message sequencing these cascading messages will not go anywhere until the initiating message completes

With this mechanism, Wolverine is able to put the messages arriving from the outside world back into the correct sequential order in its own processing.
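Stripped of Wolverine's saga persistence and messaging machinery, the core bookkeeping described above is a small buffer keyed on message order. This sketch is purely illustrative, not Wolverine's actual implementation:

```csharp
using System;
using System.Collections.Generic;

// Illustrative re-sequencing buffer: releases messages strictly in order,
// parking early arrivals until the gap before them is filled
public class Resequencer<T>
{
    private readonly Dictionary<int, T> _parked = new();
    private int _nextExpected = 1;

    // Returns the messages that are now safe to process, in order
    public IReadOnlyList<T> Accept(int order, T message)
    {
        var released = new List<T>();
        if (order < _nextExpected) return released;   // duplicate or old, ignore
        if (order > _nextExpected)
        {
            _parked[order] = message;                 // arrived early, park it
            return released;
        }

        released.Add(message);
        _nextExpected++;

        // Drain any parked messages that are now contiguous
        while (_parked.Remove(_nextExpected, out var next))
        {
            released.Add(next);
            _nextExpected++;
        }

        return released;
    }
}

public static class ResequencerDemo
{
    public static void Main()
    {
        var buffer = new Resequencer<string>();
        Console.WriteLine(buffer.Accept(2, "b").Count);             // 0 (parked)
        Console.WriteLine(string.Join(",", buffer.Accept(1, "a"))); // a,b
        Console.WriteLine(string.Join(",", buffer.Accept(3, "c"))); // c
    }
}
```

In Wolverine, the parked messages and the next expected order live in the durable saga state rather than in memory, which is what makes the mechanism survive restarts.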

Of course, this processing is very stateful and somewhat likely to be vulnerable to concurrent access problems. Most of the saga storage mechanisms in Wolverine happily support optimistic concurrency around saving saga state, so you could just use some selective retries on concurrency violations. Or better yet, Wolverine users can just about completely sidestep concurrency issues by utilizing our newest improvement to partitioned messaging, which we’re calling “Global Partitioning.”

Let’s say that you have a great deal of operations in your system that have to modify a resource of some sort like an entity, a file, a saga in this case, or an event stream that might be a little bit sensitive to concurrent access. Let’s also say that you have a mix of messages that impact these sensitive resources that come from both external, upstream systems and from cascaded messages within your own system.

The syntax for this next feature was added just today in Wolverine 5.21 as I realized the previous syntax was basically unusable in the course of trying to write this blog post. So it goes.

“Global partitioning” lets you guarantee that messages impacting those resources are processed sequentially within a message group, while still allowing parallel processing between message groups throughout the entire cluster.

Imagine it like this (but know I drew this diagram for someone using Kafka even though the next example is using Rabbit MQ queues):

And with this configuration:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // You'd *also* supply credentials here of course!
        opts.UseRabbitMq();

        // Do something to add Saga storage too!

        opts
            .MessagePartitioning

            // This tells Wolverine to "just" use implied
            // message grouping based on Saga identity among other things
            .UseInferredMessageGrouping()
            .GlobalPartitioned(topology =>
            {
                // Creates 5 sharded RabbitMQ queues named "sequenced1" through "sequenced5"
                // with matching companion local queues for sequential processing
                topology.UseShardedRabbitQueues("sequenced", 5);

                topology.MessagesImplementing<MySequencedCommand>();
            });
    }).StartAsync();

What this does is spread the work of handling MySequencedCommand messages across five different Rabbit MQ + local queue pairs, with each pair active on only a single node within your application. Even inside each local queue in this partitioning scheme, Wolverine is parallelizing between message groups.
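Conceptually, the group-to-queue assignment works like a stable hash of the message group id onto a fixed number of partitions, so every node agrees on where a group belongs. Here is a rough sketch of that idea in plain C#; it is illustrative only and not Wolverine's actual assignment algorithm:

```csharp
using System;

public static class PartitionSketch
{
    // Maps a message group id onto one of N partitions with a deterministic
    // hash so that all messages for the same group always land on the same
    // queue, regardless of which node does the routing.
    // Illustrative only; Wolverine's real assignment logic differs.
    public static string QueueFor(string groupId, int partitionCount)
    {
        var hash = 0;
        foreach (var c in groupId)
        {
            hash = unchecked(hash * 31 + c);
        }

        // Partitions are 1-based to match queue names like "sequenced1".."sequenced5"
        var partition = (int)((uint)hash % (uint)partitionCount) + 1;
        return $"sequenced{partition}";
    }
}
```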

Now, let’s talk about receiving any message that can be cast to MySequencedCommand. If the message is received at a completely different listener than the “sequenced1/2/3/4/5” queues defined above, like from an external system that knows absolutely nothing about your message partitioning, Wolverine will immediately determine the message group identity by inferring it from the saga message handler rules (that’s what the UseInferredMessageGrouping() option does for us), then forward that message to the node that is currently handling that group id. If the current node happens to be assigned that message group id, Wolverine forwards the message directly to the right local queue.

Likewise, if you publish a cascading message inside one of your handlers, Wolverine will determine the message group id for that message type, then either route that message locally if that group happens to be assigned to the current node (and it probably would be if you were cascading from your own handlers) or send it remotely to the right messaging endpoint (a Rabbit MQ queue, a Kafka topic, or maybe an AWS SQS queue).

The point is that this guarantees related messages are processed sequentially across the entire application cluster while allowing parallel processing between unrelated messages.

Summary

These are hopefully two powerful new features that will benefit Wolverine users in the near future. Both of these features were built at the behest of JasperFx Software clients to directly support their current work. I’m very happy to just quietly fold in reasonably sized new features for JasperFx support clients without extra cost when those features likely benefit the community as a whole. Contact us at sales@jasperfx.net to find out what we can do to help your software development efforts be more successful.

And just for bragging rights tonight, I did some poking around (okay, I asked Claude to do it for me) to see if any other asynchronous messaging tools offer anything similar to what our global partitioning option does for Wolverine users. While you can certainly achieve the same goals through actor frameworks like AkkaDotNet or Orleans (I consider actor frameworks to be such a different paradigm that I don’t really think of them as direct competitors to Wolverine), it doesn’t appear that there are any equivalents out there to this feature in the .NET space. MassTransit and NServiceBus both have more limited versions of this capability, but nothing that is as easy or flexible as what Wolverine has at this point. Now, granted, we’re at this point because Marten event stream appends can be sensitive to concurrent access so we’ve had to take concurrency maybe a little more seriously than the pure play asynchronous messaging tools that don’t really have an event sourcing component.

Natural Keys in the Critter Stack

Just to level set everyone, there are two general categories of identifiers we use in software:

  • “Surrogate” keys are data elements like Guid values, database auto numbering or sequences, or snowflake generated identifiers that have no real business meaning and just try to be unique values.
  • “Natural” keys have some kind of business meaning and usually utilize some piece of existing information like email addresses or phone numbers. A natural key could also be an externally supplied identifier from your clients. In fact, it’s quite common to have your own tracking identifier (usually a surrogate key) while also having to track a client or user’s own identification for the same business entity.

That very last sentence is where this post takes off. You see, Marten can happily track event streams with either Guid identifiers (surrogate keys) or string identifiers, or strongly typed identifiers that wrap an inner Guid or string (in this case that’s really the same thing, just with more style, I guess). Likewise, in combination with Wolverine for our recommended “aggregate handler workflow” approach to building command handlers, we’ve only supported the stream id or key. Until now!

With the Marten 8.23 and Wolverine 5.18 releases last week (we’ve been very busy and there are newer releases now), you are now able to “tag” Marten (or Polecat!) event streams with a natural key in addition to the surrogate stream id, and use that natural key in conjunction with Wolverine’s aggregate handler workflow.

Of course, if you use strings as the stream identifier you could already use natural keys, but let’s just focus on the case of Guid identified streams that are also tagged with some kind of natural key that will be supplied by users in the commands sent to the system.

First, to tag streams with natural keys in Marten, you need a strongly typed identifier type for the natural key. Next, there’s a little bit of attribute decoration on the targeted document type of a single stream projection, i.e., the “write model” for an event stream. Here’s an example from the Marten documentation:

public record OrderNumber(string Value);
public record InvoiceNumber(string Value);

public class OrderAggregate
{
    public Guid Id { get; set; }

    [NaturalKey]
    public OrderNumber OrderNum { get; set; }

    public decimal TotalAmount { get; set; }
    public string CustomerName { get; set; }
    public bool IsComplete { get; set; }

    [NaturalKeySource]
    public void Apply(OrderCreated e)
    {
        OrderNum = e.OrderNumber;
        CustomerName = e.CustomerName;
    }

    public void Apply(OrderItemAdded e)
    {
        TotalAmount += e.Price;
    }

    [NaturalKeySource]
    public void Apply(OrderNumberChanged e)
    {
        OrderNum = e.NewOrderNumber;
    }

    public void Apply(OrderCompleted e)
    {
        IsComplete = true;
    }
}

In particular, see the usage of [NaturalKey], which should be self-explanatory. Also see the [NaturalKeySource] attribute that we’re using to mark where a natural key value is set or might change. Marten is starting to use source generators for some projection internals (in place of some nasty, not-as-efficient-as-they-should-have-been Expression-compiled-to-Lambda functions).

And that’s that, really. You’re now able to use the designated natural keys as the input to an “aggregate handler workflow” command handler with Wolverine. See Natural Keys from the Wolverine documentation for more information.
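As a rough sketch of what that can look like, here is a hypothetical command and handler built on the OrderAggregate above. The CompleteOrder command and CompleteOrderHandler names are made up for illustration; [Aggregate] is the standard aggregate handler workflow marker, and per the natural key support described above, the stream would be resolved from the OrderNumber value on the command rather than from a Guid stream id:

```csharp
// Hypothetical sketch: CompleteOrder and its handler are invented for
// illustration. The command carries the natural key rather than the
// Guid stream id, and Wolverine + Marten resolve the stream from it.
public record CompleteOrder(OrderNumber OrderNum);

public static class CompleteOrderHandler
{
    public static OrderCompleted Handle(CompleteOrder command, [Aggregate] OrderAggregate order)
    {
        // The returned event is appended to the stream identified
        // by the natural key on the incoming command
        return new OrderCompleted();
    }
}
```

See the Wolverine documentation for the exact mechanics of resolving streams by natural key.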

For a little more information:

  • The natural keys are stored in a separate table, and when using FetchForWriting(), Marten is doing an inner join from the tag table for that natural key type to the mt_streams table in the Marten database
  • You can change the natural key for a stream without changing its surrogate key
  • We expect this to be most useful when you want to use the Guid surrogate keys for uniqueness in your own system, but you frequently receive a natural key from API users of your system — or at least this has been encountered by a couple different JasperFx Software customers.
  • The natural key storage does have a unique value constraint on the “natural key” part of the storage
  • Really only a curiosity, but this was done in the same wave of development as Marten’s new DCB support

Validation Options in Wolverine

Wolverine — the event-driven messaging and HTTP framework for .NET — provides a rich, layered set of options for validating incoming data. Whether you are building HTTP endpoints or message handlers, Wolverine meets you where you are: from zero-configuration inline checks to full Fluent Validation or Data Annotation middleware support for both command handlers and HTTP endpoints.

Let’s maybe oversimplify validation scenarios and say they’ll fall into two buckets:

  1. Run-of-the-mill field-level validation rules like required fields or value ranges. These rules are the bread and butter of dedicated validation frameworks like Fluent Validation or Microsoft’s Data Annotations markup.
  2. Validation rules that are specific to your business domain and might involve checks against the existing state of your system beyond the command message itself.

Let’s first look at Wolverine’s Data Annotations integration that is completely baked into the core WolverineFx NuGet package. To get started, just opt into the Data Annotations middleware for message handlers like this:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Apply the validation middleware
        opts.UseDataAnnotationsValidation();
    }).StartAsync();

In message handlers, this middleware will kick in for any message type that has validation attributes, as in this example:

public record CreateCustomer(
    // you can use the attributes on a record, but you need to
    // add the `property` modifier to the attribute
    [property: Required] string FirstName,
    [property: MinLength(5)] string LastName,
    [property: PostalCodeValidator] string PostalCode
) : IValidatableObject
{
    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        // you can implement `IValidatableObject` for custom
        // validation logic
        yield break;
    }
}

public class PostalCodeValidatorAttribute : ValidationAttribute
{
    public override bool IsValid(object? value)
    {
        // custom attributes are supported
        return true;
    }
}

public static class CreateCustomerHandler
{
    public static void Handle(CreateCustomer customer)
    {
        // do whatever you'd do here, but this won't be called
        // at all if the DataAnnotations Validation rules fail
    }
}

By default for message handlers, any validation errors are logged, then the current execution is stopped through the usage of the HandlerContinuation value we’ll discuss later.

For Wolverine.HTTP integration with Data Annotations, use:

app.MapWolverineEndpoints(opts =>
{
    // Use Data Annotations that are built
    // into the Wolverine.HTTP library
    opts.UseDataAnnotationsValidationProblemDetailMiddleware();
});

Likewise, this middleware will only apply to HTTP endpoints that have a request input model that contains data annotation attributes. In this case though, Wolverine is using the ProblemDetails specification to report validation errors back to the caller with a status code of 400 by default.

Fluent Validation Middleware

Similarly, the Fluent Validation integration works more or less the same, but requires the WolverineFx.FluentValidation package for message handlers and the WolverineFx.Http.FluentValidation package for HTTP endpoints. There are some Wolverine helpers for discovering and registering Fluent Validation validators that apply some Wolverine-specific performance optimizations, trying to register most validators with a Singleton lifetime so that Wolverine can generate more optimized code.
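For flavor, here is what a typical Fluent Validation validator for the earlier CreateCustomer message might look like. This uses the standard FluentValidation AbstractValidator API; the exact rules chosen here are just illustrative, and the discovery/registration call names vary by Wolverine version, so check the WolverineFx.FluentValidation documentation:

```csharp
using FluentValidation;

// A standard Fluent Validation validator for the CreateCustomer message
// shown earlier. Wolverine's discovery helpers can find and register
// validators like this one; these rules are illustrative only.
public class CreateCustomerValidator : AbstractValidator<CreateCustomer>
{
    public CreateCustomerValidator()
    {
        RuleFor(x => x.FirstName).NotEmpty();
        RuleFor(x => x.LastName).MinimumLength(5);
        RuleFor(x => x.PostalCode).NotEmpty();
    }
}
```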

It is possible to override how Wolverine handles validation failures, but I’d personally recommend just using the ProblemDetails default in most cases.

I would like to note that the way that Wolverine generates code for the Fluent Validation middleware is generally going to be more efficient at runtime than the typical IoC dependent equivalents you’ll frequently find in the MediatR space.

Explicit Validation

Let’s move on to validation rules that are more specific to your own problem domain, and especially the type of validation rules that would require you to examine the state of your system by exercising some kind of data access. These kinds of rules certainly can be done with custom Fluent Validation validators, but I strongly recommend you put that kind of validation directly into your message handlers or HTTP endpoints to colocate business logic together with the actual message handler or HTTP endpoint happy path.

One of the unique features of Wolverine in comparison to the typical “IHandler of T” application frameworks in .NET is its built-in support for a low-ceremony style of Railway Programming, and this turns out to be perfect for one-off validation rules.

In message handlers we’ve long had support for returning the HandlerContinuation enum from Validate() or Before() methods as a way to signal to Wolverine to conditionally stop all additional processing:

public static class ShipOrderHandler
{
    // This would be called first
    public static async Task<(HandlerContinuation, Order?, Customer?)> LoadAsync(ShipOrder command, IDocumentSession session)
    {
        var order = await session.LoadAsync<Order>(command.OrderId);
        if (order == null)
        {
            return (HandlerContinuation.Stop, null, null);
        }

        var customer = await session.LoadAsync<Customer>(command.CustomerId);
        return (HandlerContinuation.Continue, order, customer);
    }

    // The main method becomes the "happy path", which also helps simplify it
    public static IEnumerable<object> Handle(ShipOrder command, Order order, Customer customer)
    {
        // use the command data, plus the related Order & Customer data to
        // "decide" what action to take next
        yield return new MailOvernight(order.Id);
    }
}

But of course, with the example above, you could also write that with Wolverine’s declarative persistence like this:

public static class ShipOrderHandler
{
    // The main method becomes the "happy path", which also helps simplify it
    public static IEnumerable<object> Handle(
        ShipOrder command,

        // This is loaded by the OrderId on the ShipOrder command
        [Entity(Required = true)]
        Order order,

        // This is loaded by the CustomerId value on the ShipOrder command
        [Entity(Required = true)]
        Customer customer)
    {
        // use the command data, plus the related Order & Customer data to
        // "decide" what action to take next
        yield return new MailOvernight(order.Id);
    }
}

In the code above, Wolverine would stop the processing if either the Order or Customer entity referenced by the command message is missing. Similarly, if this code were in an HTTP endpoint instead, Wolverine would emit a ProblemDetails with a 400 status code and a message stating the data that is missing.

If you were using the code above with the integration with Marten or Polecat, Wolverine can even emit code that uses Marten or Polecat’s batch querying functionality to make your system more efficient by eliminating database round trips.

Likewise in the HTTP space, you could also return a ProblemDetails object directly from a Validate() method like:

public class ProblemDetailsUsageEndpoint
{
    public ProblemDetails Validate(NumberMessage message)
    {
        if (message.Number > 5)
            return new ProblemDetails
            {
                Detail = "Number is bigger than 5",
                Status = 400
            };

        // All good — continue!
        return WolverineContinue.NoProblems;
    }

    [WolverinePost("/problems")]
    public static string Post(NumberMessage message) => "Ok";
}

Even More Lightweight Validation!

When reviewing client code that uses the HandlerContinuation or ProblemDetails syntax, I definitely noticed the code can become verbose and noisy, especially compared to just embedding throw new InvalidOperationException("something is not right here"); code directly in the main methods — which isn’t something I’d like to see people tempted to do.

Instead, Wolverine 5.18 added a more lightweight approach that allows you to just return a sequence of strings from a Before()/Validate() method:

    public static IEnumerable<string> Validate(SimpleValidateEnumerableMessage message)
    {
        if (message.Number > 10)
        {
            yield return "Number must be 10 or less";
        }
    }

    // or

    public static string[] Validate(SimpleValidateStringArrayMessage message)
    {
        if (message.Number > 10)
        {
            return ["Number must be 10 or less"];
        }

        return [];
    }

At runtime, Wolverine will stop a handler if there are any validation messages, or emit a ProblemDetails response in HTTP endpoints.

Summary

Hopefully, Wolverine has you covered no matter which option you prefer. A few practical takeaways:

  • Reach for Validate() / ValidateAsync() first whenever IoC services or database queries are involved or the validation logic is just specific to your message handler or HTTP endpoint.
  • Use Data Annotations middleware when your model types are already decorated with attributes and you want zero validator classes.
  • Use Fluent Validation middleware when you want reusable, composable validators shared across multiple handlers or endpoints.

All three strategies generate efficient, ahead-of-time compiled middleware via Wolverine’s code generation engine, keeping the runtime overhead minimal regardless of which path you choose.

New Improvements to Marten’s LINQ Support

I hold these two facts to be simultaneously true:

  1. Language Integrated Query (LINQ) is the single best feature of .NET, the one developers would most miss working in other development platforms
  2. Developing and supporting a LINQ provider is a nastily hard and laborious task for an OSS author, and probably my least favorite area of Marten to work in

Alright, on that note, let’s talk about a couple potentially important recent improvements to Marten’s LINQ support. First, we’ve received the message loud and clear that Marten was sometimes hard to use when what you really need is to fetch data from more than one document type at a time. We’d also gotten some feedback about the overall difficulty of building denormalized views projected from events across a mix of different streams and potentially different reference documents.

For projections in the event sourcing space, we added the Composite Projection capability. For straight up document database work, Marten 8.23 introduced support for the LINQ GroupJoin operator as shown in this test from the Marten codebase:

public class JoinCustomer
{
    public Guid Id { get; set; }
    public string Name { get; set; } = "";
    public string City { get; set; } = "";
}

public class JoinOrder
{
    public Guid Id { get; set; }
    public Guid CustomerId { get; set; }
    public string Status { get; set; } = "";
    public decimal Amount { get; set; }
}

[Fact]
public async Task GroupJoin_select_outer_entity_properties()
{
    // It's just setting up some Customer and Order data in the database
    await SetupData();

    var results = await _session.Query<JoinCustomer>()
        .GroupJoin(
            _session.Query<JoinOrder>(),
            c => c.Id,
            o => o.CustomerId,
            (c, orders) => new { c, orders })
        .SelectMany(
            x => x.orders,
            (x, o) => new { x.c.Name, x.c.City })
        .ToListAsync();

    results.Count.ShouldBe(3);
    results.Count(r => r.City == "Seattle").ShouldBe(2); // Alice's 2 orders
    results.Count(r => r.City == "Portland").ShouldBe(1); // Bob's 1 order
}

This is of course brand new, which means there are probably “unknown unknown” bugs in there, but just give us a reproduction in a GitHub issue and we’ll address whatever it is.

Select/Where Hoisting

Without getting into too many details, the giant “hey, let’s rewrite our LINQ support almost from scratch!” effort in Marten V7 a couple years ago made some massive strides in our LINQ provider, but unintentionally “broke” our support for chaining Where clauses *after* Select transforms. To be honest, that’s not something I’d even realized you could or would do with LINQ, so I was caught off guard when we got a couple bug reports about it later. No worries now, because you can do that with Marten, as this new test shows:

    [Fact]
    public async Task select_before_where_with_different_type()
    {
        var doc1 = new DocWithInner { Id = Guid.NewGuid(), Name = "one", Inner = new InnerDoc { Value = 10, Text = "low" } };
        var doc2 = new DocWithInner { Id = Guid.NewGuid(), Name = "two", Inner = new InnerDoc { Value = 50, Text = "mid" } };
        var doc3 = new DocWithInner { Id = Guid.NewGuid(), Name = "three", Inner = new InnerDoc { Value = 90, Text = "high" } };

        theSession.Store(doc1, doc2, doc3);
        await theSession.SaveChangesAsync();

        // Select().Where() - the problematic ordering from GH-3009
        var results = await theSession.Query<DocWithInner>()
            .Select(x => x.Inner)
            .Where(x => x.Value > 40)
            .ToListAsync();

        results.Count.ShouldBe(2);
        results.ShouldContain(x => x.Value == 50);
        results.ShouldContain(x => x.Value == 90);
    }

Summary

So you might ask, how did we suddenly get to a point where there are literally no open GitHub issues related to LINQ in the Marten codebase? It turns out that the LINQ provider is an absolutely perfect place to just let Claude go fix things, backed up by about 1,000 regression tests for LINQ to chew through at the same time.

My limited experience suggests that AI-assisted development really works when you have very well defined, executable acceptance requirements, or better yet, tests for your AI agent to develop against. Duh.

Fun Five Year Retrospective on Marten Adoption

In the midst of some, let’s call it “market,” research to better understand how Marten stacks up to a newer competitor, I stumbled back over a GitHub discussion I initiated in 2021, long before I was able to found JasperFx Software, called “What would it take for you to adopt Marten?” I seeded this original discussion with my thoughts about the then forthcoming giant Marten 4.0 release that was meant to permanently put Marten on a solid technical foundation for all time and address all known significant technical shortcomings of Marten.

Narrator’s voice: the V4 release was indeed a major step forward and still shapes a great deal of Marten’s internals, but it was not even remotely the end-all, be-all of technical releases; V5 came out less than 6 months later to address shortcomings of V4 and to add first class multi-tenancy through separate databases. And arguably, V7 just three years later was nearly as big a change to Marten’s internals.

So now that it’s five years later and Marten’s usage numbers are vastly greater than they were in 2021, let me run through the things we thought needed to change to garner more usage, whether and how those ideas bore fruit, and whether I think those ideas made any difference in the end.

Enterprise Level Support

People frequently told me that they could not seriously consider Marten or later Wolverine without there being commercial support for those tools or at least a company behind them. As of now, JasperFx Software (my company) provides support agreements for any tool under the JasperFx GitHub organization. I would say though that the JasperFx support agreement ends up being more like an ongoing consulting engagement rather than the “here’s an email for support, we’ll respond within 72 hours” licensing agreement you’d get from other Event Driven Architecture tools and companies in the .NET space.

Read more about that in Little Diary of How JasperFx Helps Our Clients.

And no, we’re not a big company at all, but we’re getting there, and at least “we” isn’t just the royal “we” now :)

I’m hoping that JasperFx is able to expand when we are able to start selling the CritterWatch commercial add on soon.

More Professional Documentation

Long story short, a good, modern-looking website for your project is an absolute must. Today, all of the Critter Stack / JasperFx projects use VitePress and MarkdownSnippets for our documentation websites. Plus we have real project logo images that I really like myself created by Khalid Abuhakmeh. Babu Annamalai did a fantastic job on setting up our documentation infrastructure.

People do still complain about the documentation from time to time, but after I was mercilessly flogged online for the StructureMap documentation being so far behind in the late 00’s and FubuMVC never really having had any, I’ve been paranoid about OSS documentation ever since and we as a community try really hard to curate and expand our documentation. Anne Erdtsieck especially has added quite a bit of explanatory detail to the Marten documentation in the last six months.

It’s only anecdotal evidence, but the availability of the LLM-friendly docs plus the most recent advances in AI LLM tools seem to have dramatically reduced the number of questions we’re fielding in our Discord chat rooms while our usage numbers are still accelerating.

Oh, and I cannot emphasize enough how important and valuable it is to be able to both quickly publish documentation updates and to enable users to quickly make suggestions to the documentation through pull requests.

Moar YouTube Videos

I dragged my feet on this one for a long time and probably still don’t do well enough, but we have the JasperFx Software Channel now with some videos and plenty of live streams. I’ve had mostly positive feedback on the live streams, so it’s just up to me to get back in a groove on this one.

SQL Server or CosmosDb Support in Addition to PostgreSQL

The most common complaint or concern about Marten in its first 5-7 years was that it only supported PostgreSQL as a backing data store. The most common advice we got from the outside was that we absolutely had to have SQL Server support in order to be viable inside the .NET ecosystem where shops do tend to be conservative in technology adoption and also tend to favor Microsoft offerings.

While I’ve always seen the obvious wisdom in supporting SQL Server, I never believed that it was practical to replicate Marten’s functionality with SQL Server, partially because SQL Server lagged far behind PostgreSQL in its JSON capabilities for a long time and partially just out of sheer bandwidth limitations. I think it’s telling that nobody built a truly robust and widely used event store on top of SQL Server in the meantime.

But it’s 2026 and the math feels very different in many ways:

  1. PostgreSQL has grown in stature and, at least in my experience, far more .NET shops are happy to take the plunge into PostgreSQL. It certainly helps that the PostgreSQL ecosystem has absolutely exploded with innovation and that PostgreSQL has first class managed hosting or even serverless support on every cloud provider of any stature.
  2. SQL Server 2025 introduced a new native JSON type that brought SQL Server at least into the same neighborhood as PostgreSQL’s JSONB type. Using that, JasperFx Software is getting close to releasing a full-fledged port of most of Marten (you won’t miss the parts that were left out, I know I won’t!) called “Polecat” that will be backed by SQL Server 2025. We’ll see how much traction that tool gets, but early feedback has been positive.
  3. While we’re not there yet on Event Sourcing, at least Wolverine does have CosmosDb backed transactional inbox and outbox support as well as other integration into Wolverine handlers. I don’t have any immediate plans for Event Sourcing with CosmosDb other than “wouldn’t that be nice?” kind of thoughts. I don’t hear that many requests for this. I get even less feedback about DynamoDb, but I’ve always assumed we’d get around to that some day too.

Better Support for Cross Document Views or Queries

So, yeah. Document database approaches are awesome when your problem domain is well described by self-contained entities, but maybe not so much if you really need to model a lot of relationships between different first class entities in your system. Marten already had the Include() operator in our LINQ support, but folks aren’t super enthusiastic about it all the time. As Marten really became mostly about Event Sourcing over time, some of this issue went away for folks who could focus on using projections to just write documents out exactly as your use cases needed — which can sometimes happily eliminate the need for fancy JOIN queries and AutoMapper type translation in memory. However, I’ve worked with several JasperFx clients and other users in the past couple years who had real struggles with creating denormalized views with Marten projections, so that needed work too.

While that complaint was made in 2021, we now have or are just about to get in the next wave of releases:

  • The new “composite projection” model that was designed for easier creation of denormalized event projection views that has already met with some early success (and actionable feedback). This feature was also designed with some performance and scalability tuning in mind as well.
  • The next big release of Marten (8.23) will include support for the GroupJoin LINQ operator. Finally.
  • And let’s face it, EF Core will always have better LINQ support than Marten over all and a straight up relational table is probably always going to be more appropriate for reporting. To that end, Marten 8.23 will also have an extension library that adds first class event projections that write to EF Core.

Polecat 1.0 will include all of these new Marten features as well.

Improving LINQ Support

LINQ support was somewhat improved in that V4 release I was already selling in 2021, but much more so in V7 in early 2024, which moved us toward much more PostgreSQL-specific optimizations for JSONB searching, utilizing JSONPath queries or falling back to the PostgreSQL containment operator.

It has turned out that recent versions of Claude are very effective at enhancing the LINQ provider or fixing issues in it, and at this point we have zero open issues related to LINQ for the first time since Marten’s founding back in 2015!

There’s one issue open as I write this that has an existing fix that hasn’t been committed to master yet, so if you go check up on me, I’m not technically lying:)

Open Telemetry Support and other Improved Metadata

Open Telemetry support is table stakes for .NET application framework tools and especially for any kind of Event Driven Architecture or Event Sourcing tool like Marten. We’ve had all that since Marten V7, with occasional enhancements or adjustments since in reaction to JasperFx client needs.

More Sample Applications

Yeah, we could still do a lot better on this front. Sigh.

One thing I want to try doing soon is developing some Claude skills for the Critter Stack in general, with a particular focus on creating instructions for best practices converting codebases to the Critter Stack. As part of that, I’ve identified about a dozen open source sample applications out there that would be good targets for this work. It’s a lazy way to create new sample applications while building an AI offering for JasperFx, but I’m all about being lazy sometimes.

We’ll see how this goes.

Scalability Improvements including Sharding

We’ve done a lot here since that 2021 discussion. Some of the Event Sourcing scalability options are explained here. This isn’t an exhaustive list, but since 2021 we have:

  • Much better load distribution of asynchronous projection and subscription work within clusters
  • Support for PostgreSQL read replicas
  • First class support for managing PostgreSQL native partitioning with Marten
  • A ton of internal improvements including work to utilize the latest, greatest low level support in Npgsql for query batching

And for that matter, I’m finishing up a proposal this weekend for a new JasperFx client looking to scale a single Marten system to the neighborhood of 200-300 billion events, so there’s still more work ahead.

Event Streaming Support or Transactional Outbox Integration

This was frequently called out as a big missing feature in Marten in 2021. With the integration into the full Critter Stack through Wolverine, we gained first class event streaming support in 2024 for every messaging technology that Wolverine supports today, which is just about everything you could possibly think to use!

Management User Interface

Sigh. Still in flight, but now moving in earnest, with a CritterWatch MVP promised to a JasperFx client by the end of this month. Learn more about that here:

Cloud Hosting Models and Recipes

People have brought this up a bit over the years, but we don’t have much other than some best practices with using Marten inside of Docker containers. I think it helps us that PostgreSQL is almost ubiquitous now and that otherwise a Marten application is just a .NET application.

SignalR + the Critter Stack

It’s early, so I shouldn’t be too cocky, but JasperFx Software is having success integrating SignalR with both Wolverine and Marten in our forthcoming CritterWatch product. In this post I’ll show you how we’re doing that, from the server side C# code all the way down to the client side TypeScript.

Last week I did a live stream talking about many of the details and a way too early demonstration of CritterWatch, JasperFx Software‘s long planned management console for the “Critter Stack” tools (Marten, Wolverine, and soon to be Polecat).

A big technical wrinkle in the CritterWatch approach so far is our utilization of the SignalR messaging support built into Wolverine. Just like with external messaging brokers like Rabbit MQ or Azure Service Bus, Wolverine does a lot of work to hide the technical details of SignalR and lets you focus on just writing your application code.

In some ways, CritterWatch is a middleman between the intended CritterWatch user interface (Vue.js) and the Wolverine enabled applications in your system:

Note that Wolverine will be required for CritterWatch, but if today you use only Marten and want CritterWatch just to manage the event sourcing, know that you will be able to use a very minimalistic Wolverine setup solely for communication with CritterWatch, without having to migrate your entire messaging infrastructure to Wolverine. And for that matter, Wolverine now has a pretty robust HTTP transport for asynchronous messaging that would work fine for CritterWatch integration.

As I said earlier, CritterWatch is going to depend very heavily on two way WebSockets communication between the user interface and the CritterWatch server, and we’re utilizing Wolverine’s SignalR messaging transport (which was purposefully built for CritterWatch in the first place) to get that done. In the CritterWatch codebase, we have this little bit of Wolverine configuration:

    public static void AddCritterWatchServices(this WolverineOptions opts, NpgsqlDataSource postgresSource)
    {
        // Much more of course...
        opts.Services.AddWolverineHttp();
        
        opts.UseSignalR();
        
        // The publishing rule to route any message type that implements
        // a marker interface to the connected SignalR Hub
        opts.Publish(x =>
        {
            x.MessagesImplementing<ICritterStackWebSocketMessage>();
            x.ToSignalR();
        });


        // Really need this so we can handle messages in order for 
        // a particular service
        opts.MessagePartitioning.UseInferredMessageGrouping();
        opts.Policies.AllListeners(x => x.PartitionProcessingByGroupId(PartitionSlots.Five));
    }

And at the bottom of the ASP.NET Core application hosting CritterWatch, we’ll have this to configure the request pipeline:

builder.Services.AddWolverineHttp();
var app = builder.Build();
// Little bit more in the real code of course...
app.MapWolverineSignalRHub("/api/messages");
return await app.RunJasperFxCommands(args);

As you can infer from the Wolverine publishing rule above, we’re using a marker interface to let Wolverine “know” what messages should always be sent to SignalR:

/// <summary>
/// Marker interface for all messages that are sent to the CritterWatch web client
/// via web sockets
/// </summary>
public interface ICritterStackWebSocketMessage : ICritterWatchMessage, WebSocketMessage;

We also use that marker interface in a homegrown command line integration to generate TypeScript versions of all those messages with NJsonSchema, as well as the message types that go from the user interface to the CritterWatch server. Wolverine’s SignalR integration assumes that all messages sent to or received from SignalR are wrapped in a CloudEvents compliant JSON wrapper, but the only required members are type, which identifies what type of message it is, and data, which holds the actual message body as JSON. To make this easier, when we generate the TypeScript code we also insert a little method like the one below that we can use to identify the message type sent from the client to the Wolverine powered back end:

export class CompactStreamResult implements WebsocketMessage {
  serviceName!: string;
  streamId!: string;
  success!: boolean;
  error!: string | undefined;
  queryId!: string | undefined;

  // THIS method is injected by our custom codegen
  // and helps us communicate with the server as
  // this matches Wolverine's internal identification of
  // this message
  get messageTypeName(): string {
    return "compact_stream_result";
  }

  // other stuff...
  init(_data?: any) {
    if (_data) {
      this.serviceName = _data["serviceName"];
      this.streamId = _data["streamId"];
      this.success = _data["success"];
      this.error = _data["error"];
      this.queryId = _data["queryId"];
    }
  }

  static fromJS(data: any): CompactStreamResult {
    data = typeof data === 'object' ? data : {};
    let result = new CompactStreamResult();
    result.init(data);
    return result;
  }

  toJSON(data?: any) {
    data = typeof data === 'object' ? data : {};
    data["serviceName"] = this.serviceName;
    data["streamId"] = this.streamId;
    data["success"] = this.success;
    data["error"] = this.error;
    data["queryId"] = this.queryId;
    return data;
  }
}

Most of the code above is generated by NJsonSchema, but our custom codegen inserts the get messageTypeName() method, which we use in the client side code below to wrap up messages to send back up to our server:

  async function sendMessage(msg: WebsocketMessage) {
    if (conn.state === HubConnectionState.Connected) {
      const payload = 'toJSON' in msg ? (msg as any).toJSON() : msg
      const cloudEvent = JSON.stringify({
        id: crypto.randomUUID(),
        specversion: '1.0',
        type: msg.messageTypeName,
        source: 'Client',
        datacontenttype: 'application/json; charset=utf-8',
        time: new Date().toISOString(),
        data: payload,
      })
      await conn.invoke('ReceiveMessage', cloudEvent)
    }
  }
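To illustrate the other half of that pattern, here is what a hypothetical client-to-server command generated this way might look like. The CompactStream class, its "compact_stream" wire name, and the minimal WebsocketMessage interface below are made up for the example, not lifted from CritterWatch:

```typescript
// Sketch of a generated client-side message: an NJsonSchema-style shape
// plus the codegen-injected messageTypeName getter. The class name and
// wire name are hypothetical.
interface WebsocketMessage {
  readonly messageTypeName: string;
}

class CompactStream implements WebsocketMessage {
  constructor(public serviceName: string, public streamId: string) {}

  // Injected by the custom codegen: matches Wolverine's
  // internal identification of this message type
  get messageTypeName(): string {
    return 'compact_stream';
  }

  toJSON() {
    return { serviceName: this.serviceName, streamId: this.streamId };
  }
}
```

The nice part is that sendMessage() only needs the getter and toJSON(), so any message generated in this shape can be wrapped and sent without special casing.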

In the reverse direction, we receive the raw message from a connected WebSocket with the SignalR client, interrogate the expected CloudEvents wrapper, figure out what the message type is from there, deserialize the raw JSON to the right TypeScript type, and generally just relay that to a Pinia store where all the normal Vue.js + Pinia reactive user interface magic happens.

export function relayToStore(data: any) {
  const servicesStore = useServicesStore();
  const dlqStore = useDlqStore();
  const metricsStore = useMetricsStore();
  const projectionsStore = useProjectionsStore();
  const durabilityStore = useDurabilityStore();
  const eventsStore = useEventsStore();
  const scheduledMessagesStore = useScheduledMessagesStore();

  const envelope = typeof data === 'string' ? JSON.parse(data) : data;

  switch (envelope.type) {
    case "dead_letter_details":
      dlqStore.handleDeadLetterDetails(DeadLetterDetails.fromJS(envelope.data));
      break;
    case "dead_letter_queue_summary_results":
      dlqStore.handleDeadLetterQueueSummaryResults(DeadLetterQueueSummaryResults.fromJS(envelope.data));
      break;
    case "all_service_summaries": {
      const allSummaries = AllServiceSummaries.fromJS(envelope.data);
      servicesStore.handleAllServiceSummaries(allSummaries);
      if (allSummaries.persistenceCounts) {
        for (const pc of allSummaries.persistenceCounts) {
          durabilityStore.handlePersistenceCountsChanged(pc);
        }
      }
      if (allSummaries.metricsRollups) {
        metricsStore.handleAllMetricsRollups(allSummaries.metricsRollups);
      }
      break;
    }
    case "summary_updated":
      servicesStore.handleSummaryUpdated(SummaryUpdated.fromJS(envelope.data));
      break;
    case "agent_and_node_state_changed":
      servicesStore.handleAgentAndNodeStateChanged(AgentAndNodeStateChanged.fromJS(envelope.data));
      break;
    case "service_summary_changed":
      servicesStore.handleServiceSummaryChanged(ServiceSummaryChanged.fromJS(envelope.data));
      break;
    case "metrics_rollup":
      metricsStore.handleMetricsRollup(MetricsRollup.fromJS(envelope.data));
      break;
    case "all_metrics_rollups":
      metricsStore.handleAllMetricsRollups(AllMetricsRollups.fromJS(envelope.data));
      break;
    case "shard_states_changed":
      projectionsStore.handleShardStatesChanged(ShardStatesChanged.fromJS(envelope.data));
      break;
    case "persistence_counts_changed":
      durabilityStore.handlePersistenceCountsChanged(PersistenceCountsChanged.fromJS(envelope.data));
      break;
    case "stream_details":
      eventsStore.handleStreamDetails(StreamDetails.fromJS(envelope.data));
      break;
    case "event_query_results":
      eventsStore.handleEventQueryResults(EventQueryResults.fromJS(envelope.data));
      break;
    case "compact_stream_result":
      eventsStore.handleCompactStreamResult(CompactStreamResult.fromJS(envelope.data));
      break;
    // *CASE ABOVE* -- do not remove this comment for the codegen please!
  }
}

And that’s really it. I omitted some of our custom codegen code (because it’s hokey), but it doesn’t do much more than find the message types in the .NET code that are marked as going to or coming from the Vue.js client and writes them as matching TypeScript types.

But wait, Marten gets into the act too!

With the Marten + Wolverine integration through this:

        opts.Services.AddMarten(m =>
        {
            // Other stuff...
            m.Projections.Add<ServiceSummaryProjection>(ProjectionLifecycle.Async);
        }).IntegrateWithWolverine(w =>
        {
            w.UseWolverineManagedEventSubscriptionDistribution = true;
        });

Marten can also get into the SignalR act through its support for “Side Effects” in projections. As a certain projection for ServiceSummary is updated with new events in CritterWatch, we can raise messages reflecting the new changes in state to notify our clients with code like this from a SingleStreamProjection:

    public override ValueTask RaiseSideEffects(IDocumentOperations operations, IEventSlice<ServiceSummary> slice)
    {
        var hasShardStates = slice.Events().Any(x => x.Data is ShardStatesUpdated);

        if (hasShardStates)
        {
            var shardEvent = slice.Events().Last(x => x.Data is ShardStatesUpdated).Data as ShardStatesUpdated;
            slice.PublishMessage(new ShardStatesChanged(slice.Snapshot.Id, shardEvent!.States));
        }

        if (slice.Events().All(x => x.Data is IImpactsAgentOrNodes || x.Data is ShardStatesUpdated))
        {
            if (!hasShardStates)
            {
                slice.PublishMessage(new AgentAndNodeStateChanged(slice.Snapshot.Id, slice.Snapshot.Nodes, slice.Snapshot.Agents));
            }
        }
        else
        {
            slice.PublishMessage(new ServiceSummaryChanged(slice.Snapshot));
        }

        return new ValueTask();
    }

The Marten projection itself knows absolutely nothing about where those messages will go or how; Wolverine kicks in to help its Critter Stack sibling and deals with all the message delivery. The message types above all implement the ICritterStackWebSocketMessage interface, so they will get routed by Wolverine to SignalR. To rewind, the workflow here is:

  1. CritterWatch constantly receives messages from Wolverine applications with changes in state like new messaging endpoints being used, agents being reassigned, or nodes being started or shut down
  2. CritterWatch saves any changes in state as events to Marten (or later to SQL Server backed Polecat)
  3. The Marten async daemon processes those events to update CritterWatch’s ServiceSummary projection
  4. As pages of events are applied to individual services, Marten calls that RaiseSideEffects() method to relay some state changes to Wolverine, which will…
  5. Send those messages to SignalR based on Wolverine’s routing rules and on to the client side code which…
  6. Relays the incoming messages to the proper Pinia store
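The client-side tail of that workflow (steps 5 and 6) boils down to "parse the envelope, look at its type, dispatch to a store." The same idea can also be expressed as a lookup table rather than a switch; here's a minimal, condensed sketch where the handler map and the store stand-in are made up for illustration:

```typescript
// A condensed sketch of steps 5-6: the envelope's `type` member selects
// a handler that takes the payload and updates the right store.
// The handler names and the `received` store stand-in are illustrative.
type Handler = (data: any) => void;

const received: string[] = [];  // stand-in for the Pinia stores

const handlers: Record<string, Handler> = {
  shard_states_changed: d => received.push(`shards:${d.serviceName}`),
  service_summary_changed: d => received.push(`summary:${d.serviceName}`),
};

function relay(raw: string | object): boolean {
  const envelope = typeof raw === 'string' ? JSON.parse(raw) : raw;
  const handler = handlers[envelope.type];
  if (!handler) return false;  // unknown message types are ignored
  handler(envelope.data);
  return true;
}
```

A map like this is also friendly to codegen, since each new message type only adds an entry instead of another switch case.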

Summary

I won’t say that using Wolverine for processing and sending messages via SignalR is justified in every application, but it more than pays off if you have a highly interactive application that passes a significant number of messages between the user interface and the server.

Sometime last week I said online that no project is truly a failure if you learned something valuable from that effort that could help a later project succeed. When I wrote that I was absolutely thinking about the work shown above, and relating it to a failed effort of mine called Storyteller (Redux + early React.js + roll-your-own WebSockets support on the server) that went nowhere in the end, but taught me a lot of valuable lessons about using WebSockets in a highly interactive application, lessons that have directly informed my work on CritterWatch.