Announcing Polecat: Event Sourcing with SQL Server

Polecat is now completely supported by JasperFx Software and automatically part of any existing and future support agreements through our existing plans.

Polecat was released as 1.0 this past week (with 1.1 & now 1.2 coming soon). Let’s call it what it is: Polecat is a port of (most of) Marten to target SQL Server 2025 and SQL Server’s new JSON data type. For folks not familiar with Marten, Polecat gives you, in one library:

  1. A very full-fledged Event Store library for SQL Server that includes event projections and subscriptions, Dynamic Consistency Boundary support, a large amount of functionality for Event Sourcing basics, rich event metadata tracking capabilities, and even rich multi-tenancy support.
  2. A feature-rich set of Document Database capabilities backed by SQL Server, including LINQ querying support.

And while Polecat is brand spanking new, it comes out of the gate with the decade-old Marten pedigree and its own Wolverine integration for CQRS usage. I’m confident in saying Polecat is now the best technical option for using Event Sourcing with SQL Server in the .NET ecosystem.

And of course, if you’re a shop with deep existing roots into EF Core usage, Polecat also comes with projection support to EF Core, so Polecat can happily coexist with EF Core in the same systems.

Alright, let’s jump into a quick start. First, let’s say you’ve started a brand new .NET project through dotnet new webapi and you’ve added a reference to Polecat through NuGet (and you have a running SQL Server 2025 instance handy too of course!). Next, let’s start with the inevitable AddPolecat() usage in your Program file:

builder.Services.AddPolecat(options =>
{
    // Connection string to your SQL Server 2025 database
    options.Connection("Server=localhost;Database=myapp;User Id=sa;Password=YourStrong!Password;TrustServerCertificate=True");

    // Optionally change the default schema (default is "dbo")
    options.DatabaseSchemaName = "myschema";
});

Polecat can be used without IHost or IServiceCollection registrations by just directly building a DocumentStore object.

Next, let’s say you’ve got this simplistic document type (entity in Polecat parlance):

public class User
{
    public Guid Id { get; set; }
    public required string FirstName { get; set; }
    public required string LastName { get; set; }
    public bool Internal { get; set; }
}

And now, let’s use Polecat within some Minimal API endpoints to capture and query User documents:

// Store a document
app.MapPost("/user", async (CreateUserRequest create, IDocumentSession session) =>
{
    var user = new User
    {
        FirstName = create.FirstName,
        LastName = create.LastName,
        Internal = create.Internal
    };

    session.Store(user);
    await session.SaveChangesAsync();
});

// Query with LINQ
app.MapGet("/users", async (bool internalOnly, IDocumentSession session, CancellationToken ct) =>
{
    return await session.Query<User>()
        .Where(x => x.Internal == internalOnly)
        .ToListAsync(ct);
});

// Load by ID
app.MapGet("/user/{id:guid}", async (Guid id, IQuerySession session, CancellationToken ct) =>
{
    return await session.LoadAsync<User>(id, ct);
});

For folks used to EF Core, I should point out that Polecat has its own “it just works” database migration subsystem that in the default development mode will happily make sure that all necessary database tables, views, and functions are exactly as they should be at runtime so you don’t have to fiddle with database migrations when all you want to do is just get things done.
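As a sketch of what opting in or out of that behavior might look like (assuming Polecat mirrors Marten's AutoCreate setting here, which you should verify against the Polecat documentation):

```csharp
builder.Services.AddPolecat(options =>
{
    options.Connection(connectionString);

    // In development, let the tool reconcile the database schema at runtime
    options.AutoCreateSchemaObjects = AutoCreate.CreateOrUpdate;

    // In production, you'd likely lock this down and apply exported
    // migration scripts through your deployment process instead:
    // options.AutoCreateSchemaObjects = AutoCreate.None;
});
```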

While I initially thought that we’d mainly focus on the event sourcing support, we were also able to recreate the vast majority of Marten’s document database capabilities (including the “partial update” model, LINQ support, soft deletes, multi-tenancy, and batch updates for starters) in case you’re only interested in that feature set by itself.
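For example, the “partial update” model lets you mutate stored documents without a full load/modify/store round trip. Assuming Polecat mirrors Marten's Patch() syntax here (an assumption on my part, so check the docs), that looks roughly like:

```csharp
// Update a single property on one document by id
session.Patch<User>(userId).Set(x => x.Internal, true);

// Or patch every document matching a predicate in one operation
session.Patch<User>(x => x.LastName == "Baggins").Set(x => x.Internal, false);

await session.SaveChangesAsync();
```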

Moving over to event sourcing instead, let’s say you’re into fantasy books like I am and you want to build a system to model the journeys and adventures of a quest in your favorite fantasy series. You might model some of the events in that system like:

public record QuestStarted(string Name);
public record MembersJoined(string Location, string[] Members);
public record MembersDeparted(string Location, string[] Members);
public record QuestEnded(string Name);

And you model the current state of the quest party like this:

public class QuestParty
{
    public Guid Id { get; set; }
    public string Name { get; set; } = "";
    public List<string> Members { get; set; } = new();

    public void Apply(QuestStarted started)
    {
        Name = started.Name;
    }

    public void Apply(MembersJoined joined)
    {
        Members.AddRange(joined.Members);
    }

    public void Apply(MembersDeparted departed)
    {
        foreach (var member in departed.Members)
            Members.Remove(member);
    }
}

The step above isn’t strictly necessary for event sourcing, but you usually need a projection of some sort sooner or later.
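If Polecat follows Marten's projection registration model (again, an assumption worth verifying against the docs), you could register QuestParty as a self-aggregating snapshot so it stays up to date as events are captured:

```csharp
builder.Services.AddPolecat(options =>
{
    options.Connection(connectionString);

    // Update the QuestParty document inline, in the same transaction
    // as the event capture, so it's immediately queryable
    options.Projections.Snapshot<QuestParty>(SnapshotLifecycle.Inline);
});
```

With an inline snapshot registered, loading the current state is just session.LoadAsync<QuestParty>(questId), with no on-demand event replay needed.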

And finally, we can add events by starting a new event stream:

var store = DocumentStore.For(opts =>
{
    opts.Connection("Server=localhost,1433;Database=myapp;User Id=sa;Password=YourStrong!Password;TrustServerCertificate=True");
});

await using var session = store.LightweightSession();

// Start a new stream with initial events. StartStream returns the
// new stream's metadata, including its Id
var questId = session.Events.StartStream<QuestParty>(
    new QuestStarted("Destroy the Ring"),
    new MembersJoined("Rivendell", ["Frodo", "Sam", "Aragorn", "Gandalf"])
).Id;

await session.SaveChangesAsync();

And even append some new ones to the same stream later:

await using var session = store.LightweightSession();

session.Events.Append(questId,
    new MembersJoined("Moria", ["Gimli", "Legolas"]),
    new MembersDeparted("Moria", ["Gandalf"])
);

await session.SaveChangesAsync();

And derive the current state of our quest:

var party = await session.Events.AggregateStreamAsync<QuestParty>(questId);
// party.Name == "Destroy the Ring"
// party.Members == ["Frodo", "Sam", "Aragorn", "Gimli", "Legolas"]

And there’s much, much more of course, including everything you’d need to build real systems based on our 10 years and counting supporting Marten with PostgreSQL.

How is Polecat Different from Marten?

There are of course some differences besides just the database engine:

  • Polecat is using source generators instead of the runtime code generation that Marten does today
  • Polecat will only support System.Text.Json for now as a serialization engine
  • Polecat only supports the “Quick Append” option from Marten
  • There is no automatic dirty checking
  • No “duplicate fields” support so far, we’re going to reevaluate that though
  • Plenty of other “technical baggage” features that I flat out didn’t want to keep supporting in Marten didn’t make the cut, but I can’t imagine anyone will miss any of that!

Summary

For over a decade people have been telling me that Marten would be more successful and adopted by more .NET shops if it only supported SQL Server in addition to or instead of PostgreSQL. While I’ve never really disagreed with that idea — and it’s impossible to really prove the counterfactual anyway — there have always been real blockers, both in SQL Server’s JSON support lagging far behind PostgreSQL’s and, frankly, in the time commitment on my part to be able to attempt that work in the first place.

So what changed to enable this?

  1. SQL Server 2025 added much better JSON support rivaling PostgreSQL’s JSONB type
  2. We had already invested in pulling the basic event abstractions and projection support out of Marten and into a common library called JasperFx.Events as part of the Marten 8.0 release cycle. That work was always meant to be an enabler for what is now Polecat.
  3. Claude & Opus 4.5/4.6 turned out to be very, very good at grunt work

That second item had, up to this point, felt like a near disaster in my mind because of how much work and time it took compared to the benefits, and it was the single most time consuming part of Polecat development. Let’s just say that I’m very relieved that that effort didn’t turn out to be a very expensive sunk cost for JasperFx!

I have no earthly idea how much traction Polecat will really get, but we’ve already had some interest from folks who have wanted to use Marten, but couldn’t get their .NET shop to adopt PostgreSQL. I’m hopeful!

Critter Stack Roadmap for March 2026

It’s only been a month since I wrote an update on the Critter Stack roadmap, but it’s maybe worth some time on my part to update what I think the roadmap is now. The biggest change is the utter dominance of AI in the software development discourse and the fact that Claude usage has allowed us to chew through a shocking amount of backlog in the past 6 weeks. That’s probably also changed my own thinking about what should be next throughout this year.

First, some updates on what’s been added to the Critter Stack in just the last month. By the time you read this, we may very well have Polecat 1.0 out as well.

Short Term

The short term priority for myself and JasperFx Software is to deliver the CritterWatch MVP in a usable form by the end of March.

Marten, Wolverine, and even Polecat have no major new features planned for the short term and I think they will only get tactical releases for bug fixes and JasperFx client requests for a little while. And let me tell you, it feels *weird* to say that, but we’ve blown through a tremendous amount of the backlog so far in 2026.

Medium Term

  • Enhance CritterWatch until it’s the best in class monitoring tool for asynchronous messaging and event sourcing. Part of that will probably be adding quite a bit more functionality for development time as well.
  • For a JasperFx Software client, we’re doing PoC work on scaling Marten to be able to handle having several hundred billion events in a single system. I’m going to assume that this PoC will probably lead to enhancements in both Marten and Wolverine!
  • We’ll finally add some direct support to Marten for the PostGIS PostgreSQL extension
  • I’m a little curious to try to use the hstore extension with Marten as a possible way to optimize our new DCB support
  • Play with Pgvector and TimescaleDb in combination with Marten as some kind of vague “how can we say that Marten is even more awesome for AI?”
  • There’s going to be a new wave of releases later this year for Marten 9.0, Wolverine 6.0, and Polecat 2.0 that will mostly be about performance optimizations, and especially about finding ways to optimize the cold start time of applications using these tools.
  • Babu and I (really all Babu so far) are going to be building a set of AI skills for using the Critter Stack tools that will be curated in a GitHub repository and available to JasperFx Software clients. I do not know what the full impact of AI tools is really going to be on software development, but I personally want to plan for the worst case, that AI tools plus LLM-friendly documentation drastically reduce the demand for consulting, and try to belatedly pivot JasperFx Software to being at least partially a product company.
  • Build tooling for spec driven development using the Critter Stack. I don’t have any details beyond “hey, wouldn’t that be cool?”. My initial thought is to play with Gherkin specifications that generates “best practices” Critter Stack code with the accompanying automated tests to boot.
  • One way or another, we’ll be building MCP support into the Critter Stack, but again, I don’t know anything more than “hey, wouldn’t that be cool?”

Long Term

Profit?

I’m playing with the idea of completely rebooting Storyteller as a new spec driven development tool. I have the NuGet rights to the “Storyteller” name and graphics from Khalid (a necessary requirement for any successful effort on my part), and I’ve always wanted to go back to it some day.

Re-Sequencer and Global Message Partitioning in Wolverine

Last week I helped a JasperFx Software client with a use case where they get a steady stream of related events from an upstream system into a downstream system where order of processing is important, but the messages might arrive out of order.

Once again referring to the venerable Enterprise Integration Patterns book, that scenario requires a Resequencer:

How can we get a stream of related but out-of-sequence messages back into the correct order?

(Diagram: the EIP Resequencer pattern)

To solve the message ordering challenge, we introduced the new Resequencer Saga feature into Wolverine, and combined that with the existing “Partitioned Sequential Messaging” feature.

For the new built-in re-sequencing, we do need you to implement this interface on any message types in that related stream so that Wolverine “knows” where each message falls within the stream:

public interface SequencedMessage
{
    int? Order { get; }
}

The next step is to use a special kind of new Wolverine Saga called ResequencerSaga<T>, where T is some common type for all the message types that are part of this ordered stream and also implements the SequencedMessage interface shown above. Here’s a simple example I used for the testing:

public record StartMyWorkflow(Guid Id);
public record MySequencedCommand(Guid SagaId, int? Order) : SequencedMessage;

public class MyWorkflowSaga : ResequencerSaga<MySequencedCommand>
{
    public Guid Id { get; set; }

    public static MyWorkflowSaga Start(StartMyWorkflow cmd)
    {
        return new MyWorkflowSaga { Id = cmd.Id };
    }

    public void Handle(MySequencedCommand cmd)
    {
        // This will only be called when messages arrive in the correct order,
        // or when out-of-order messages are replayed after gaps are filled
    }
}

At runtime, when Wolverine gets a message that is handled by that MyWorkflowSaga, there is some middleware that first compares the declared order of that message against the recorded state of the saga so far. In more concrete terms, if…

  • It’s the first message in the sequence, Wolverine just processes it as normal and records in the saga state what the last processed message order was so that it “knows” what message sequence should be next
  • It’s a later message in the sequence compared to the last message sequence processed, the saga state will just store the current message, persist the saga state, and otherwise skip the normal message processing
  • The message is the next in the sequence according to what the saga state says should be processed next, it processes normally. If there are any previously out of order messages that the saga state already knows about that are sequentially next after the current message, Wolverine will re-publish those messages locally — but with the normal Wolverine message sequencing these cascading messages will not go anywhere until the initiating message completes

With this mechanism, Wolverine is able to put the messages arriving from the outside world back into the correct sequential order in its own processing.

Of course though, this processing is very stateful and somewhat likely to be vulnerable to concurrent access problems. Most of the saga storage mechanisms in Wolverine happily support optimistic concurrency around saving saga state, so you could just use some selective retries on concurrency violations. Or better yet, Wolverine users can just about completely side step issues with concurrency by utilizing our newest improvement to partitioned messaging we’re calling “Global Partitioning.”
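As a sketch of the selective retry option (using Wolverine's error handling policies and Marten's ConcurrencyException; treat the exact type names here as assumptions to verify):

```csharp
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Retry the message a few times with a short cooldown when
        // saving saga state hits an optimistic concurrency violation
        opts.OnException<ConcurrencyException>()
            .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds());
    }).StartAsync();
```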

Let’s say that you have a great deal of operations in your system that have to modify a resource of some sort like an entity, a file, a saga in this case, or an event stream that might be a little bit sensitive to concurrent access. Let’s also say that you have a mix of messages that impact these sensitive resources that come from both external, upstream systems and from cascaded messages within your own system.

The syntax for this next feature was added just today in Wolverine 5.21 as I realized the previous syntax was basically unusable in the course of trying to write this blog post. So it goes.

“Global partitioning” allows you to guarantee that messages impacting those resources are processed sequentially within a message group, while still allowing parallel processing between message groups throughout the entire cluster.

Imagine it like this (but know I drew this diagram for someone using Kafka even though the next example is using Rabbit MQ queues):

And with this configuration:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // You'd *also* supply credentials here of course!
        opts.UseRabbitMq();

        // Do something to add Saga storage too!

        opts
            .MessagePartitioning

            // This tells Wolverine to "just" use implied
            // message grouping based on Saga identity among other things
            .UseInferredMessageGrouping()
            .GlobalPartitioned(topology =>
            {
                // Creates 5 sharded RabbitMQ queues named "sequenced1" through "sequenced5"
                // with matching companion local queues for sequential processing
                topology.UseShardedRabbitQueues("sequenced", 5);
                topology.MessagesImplementing<MySequencedCommand>();
            });
    }).StartAsync();

What this does is spread the work out for handling MySequencedCommand messages through five different Rabbit MQ + Local queue pairs, with each pair active on only one single node within your application. Even inside each local queue in this partitioning scheme, Wolverine is parallelizing between message groups.

Now, let’s talk about receiving any message that can be cast to MySequencedCommand. If the message is received at a completely different listener than the “sequenced1/2/3/4/5” queues defined above, like from an external system that knows absolutely nothing about your message partitioning, Wolverine is going to immediately determine the message group identity by inferring it from the saga message handler rules (that’s what the UseInferredMessageGrouping() option does for us), then forward that message to the proper node that is currently handling that group id. If the current node happens to be assigned that message group id, Wolverine forwards the message directly to the right local queue.

Likewise, if you publish a cascading message inside one of your handlers, Wolverine will determine the message group id for that message type, then either route that message locally if that group happens to be assigned to the current node (and it probably would be if you were cascading from your own handlers) or send it remotely to the right messaging endpoint (a Rabbit MQ queue, a Kafka topic, or maybe an AWS SQS queue).

The point being, this guarantees that related messages are processed sequentially across the entire application cluster while allowing parallel processing between unrelated messages.

Summary

These are hopefully two powerful new features that will benefit Wolverine users in the near future. Both of these features were built at the behest of JasperFx Software clients to directly support their current work. I’m very happy to just quietly fold in reasonably sized new features for JasperFx support clients without extra cost when those features likely benefit the community as a whole. Contact us at sales@jasperfx.net to find out what we can do to help your software development efforts be more successful.

And just for bragging rights tonight, I did some poking around (okay, I asked Claude to do it for me) to see if any other asynchronous messaging tools offer anything similar to what our global partitioning option does for Wolverine users. While you can certainly achieve the same goals through actor frameworks like AkkaDotNet or Orleans (I consider actor frameworks to be such a different paradigm that I don’t really think of them as direct competitors to Wolverine), it doesn’t appear that there are any equivalents out there to this feature in the .NET space. MassTransit and NServiceBus both have more limited versions of this capability, but nothing that is as easy or flexible as what Wolverine has at this point. Now, granted, we’re at this point because Marten event stream appends can be sensitive to concurrent access so we’ve had to take concurrency maybe a little more seriously than the pure play asynchronous messaging tools that don’t really have an event sourcing component.

Natural Keys in the Critter Stack

Just to level set everyone, there are two general categories of identifiers we use in software:

  • “Surrogate” keys are data elements like Guid values, database auto numbering or sequences, or snowflake generated identifiers that have no real business meaning and just try to be unique values.
  • “Natural” keys have some kind of business meaning and usually utilize some piece of existing information like email addresses or phone numbers. A natural key could also be an external supplied identifier from your clients. In fact, it’s quite common to have your own tracking identifier (usually a surrogate key) while also having to track a client or user’s own identification for the same business entity.

That very last sentence is where this post takes off. You see, Marten can happily track event streams with either Guid identifiers (surrogate key) or string identifiers, or with strongly typed identifiers that wrap an inner Guid or string (really the same thing in this case, just with more style I guess). Likewise, in combination with Wolverine for our recommended “aggregate handler workflow” approach to building command handlers, we’ve only supported the stream id or key. Until now!

With the Marten 8.23 and Wolverine 5.18 releases last week (we’ve been very busy and there are newer releases now), you are now able to “tag” Marten (or Polecat!) event streams with a natural key in addition to its surrogate stream id and use that natural key in conjunction with Wolverine’s aggregate handler workflow.

Of course, if you use strings as the stream identifier you could already use natural keys, but let’s just focus on the case of Guid identified streams that are also tagged with some kind of natural key that will be supplied by users in the commands sent to the system.

First, to tag streams with natural keys in Marten, you have to have a strongly typed identifier type for the natural key. Next, there’s a little bit of attribute decoration on the targeted document type of a single stream projection, i.e., the “write model” for an event stream. Here’s an example from the Marten documentation:

public record OrderNumber(string Value);
public record InvoiceNumber(string Value);

public class OrderAggregate
{
    public Guid Id { get; set; }

    [NaturalKey]
    public OrderNumber OrderNum { get; set; }

    public decimal TotalAmount { get; set; }
    public string CustomerName { get; set; }
    public bool IsComplete { get; set; }

    [NaturalKeySource]
    public void Apply(OrderCreated e)
    {
        OrderNum = e.OrderNumber;
        CustomerName = e.CustomerName;
    }

    public void Apply(OrderItemAdded e)
    {
        TotalAmount += e.Price;
    }

    [NaturalKeySource]
    public void Apply(OrderNumberChanged e)
    {
        OrderNum = e.NewOrderNumber;
    }

    public void Apply(OrderCompleted e)
    {
        IsComplete = true;
    }
}

In particular, see the usage of [NaturalKey], which should be self-explanatory, and the [NaturalKeySource] attribute that marks the Apply() methods where a natural key value might be set or changed. Marten is starting to use source generators for some projection internals (in place of some nasty, not entirely as efficient as they should have been, Expression-compiled-to-Lambda functions).

And that’s that, really. You’re now able to use the designated natural keys as the input to an “aggregate handler workflow” command handler with Wolverine. See Natural Keys from the Wolverine documentation for more information.

For a little more information:

  • The natural keys are stored in a separate table, and when using FetchForWriting(), Marten is doing an inner join from the tag table for that natural key type to the mt_streams table in the Marten database
  • You can change the natural key that is mapped to a given surrogate key over time
  • We expect this to be most useful when you want to use the Guid surrogate keys for uniqueness in your own system, but you frequently receive a natural key from API users of your system — or at least this has been encountered by a couple different JasperFx Software customers.
  • The natural key storage does have a unique value constraint on the “natural key” part of the storage
  • Really only a curiosity, but this was done in the same wave of development as Marten’s new DCB support

Validation Options in Wolverine

Wolverine — the event-driven messaging and HTTP framework for .NET — provides a rich, layered set of options for validating incoming data. Whether you are building HTTP endpoints or message handlers, Wolverine meets you where you are: from zero-configuration inline checks to full Fluent Validation or Data Annotation middleware support for both command handlers and HTTP endpoints.

Let’s maybe oversimplify validation scenarios and say they’ll fall into two buckets:

  1. Run-of-the-mill field-level validation rules like required fields or value ranges. These rules are the bread and butter of dedicated validation frameworks like Fluent Validation or Microsoft’s Data Annotations markup.
  2. Validation rules that are specific to your business domain and might involve checks against the existing state of your system beyond the command messages themselves.

Let’s first look at Wolverine’s Data Annotations integration that is completely baked into the core WolverineFx NuGet package. To get started, just opt into the Data Annotations middleware for message handlers like this:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Apply the validation middleware
        opts.UseDataAnnotationsValidation();
    }).StartAsync();

In message handlers, this middleware will kick in for any message type that has validation attributes, as in this example:

public record CreateCustomer(
    // you can use the attributes on a record, but you need to
    // add the `property` modifier to the attribute
    [property: Required] string FirstName,
    [property: MinLength(5)] string LastName,
    [property: PostalCodeValidator] string PostalCode
) : IValidatableObject
{
    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        // you can implement `IValidatableObject` for custom
        // validation logic
        yield break;
    }
}

public class PostalCodeValidatorAttribute : ValidationAttribute
{
    public override bool IsValid(object? value)
    {
        // custom attributes are supported
        return true;
    }
}

public static class CreateCustomerHandler
{
    public static void Handle(CreateCustomer customer)
    {
        // do whatever you'd do here, but this won't be called
        // at all if the DataAnnotations Validation rules fail
    }
}

By default for message handlers, any validation errors are logged, then the current execution is stopped through the usage of the HandlerContinuation value we’ll discuss later.

For Wolverine.HTTP integration with Data Annotations, use:

app.MapWolverineEndpoints(opts =>
{
    // Use Data Annotations that are built
    // into the Wolverine.HTTP library
    opts.UseDataAnnotationsValidationProblemDetailMiddleware();
});

Likewise, this middleware will only apply to HTTP endpoints that have a request input model that contains data annotation attributes. In this case though, Wolverine is using the ProblemDetails specification to report validation errors back to the caller with a status code of 400 by default.
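As an illustration only (the exact shape depends on Wolverine's formatting, so don't treat this as the literal wire format), a failed request might come back with a 400 response body along these lines:

```json
{
  "title": "One or more validation errors occurred.",
  "status": 400,
  "errors": {
    "FirstName": ["The FirstName field is required."]
  }
}
```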

Fluent Validation Middleware

Similarly, the Fluent Validation integration works more or less the same way, but requires the WolverineFx.FluentValidation package for message handlers and the WolverineFx.Http.FluentValidation package for HTTP endpoints. There are some Wolverine helpers for discovering and registering Fluent Validation validators in a way that applies some Wolverine-specific performance optimizations, mostly by trying to register validators with a Singleton lifetime so that Wolverine can generate more optimized code.
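To make that concrete, the wiring might look something like this (assuming the current WolverineFx.FluentValidation surface; verify the registration helper names against the docs):

```csharp
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Adds the Fluent Validation middleware and discovers validators
        // in the application assembly, preferring Singleton registrations
        opts.UseFluentValidation();
    }).StartAsync();

// A conventional Fluent Validation validator that the middleware would apply
public class CreateCustomerValidator : AbstractValidator<CreateCustomer>
{
    public CreateCustomerValidator()
    {
        RuleFor(x => x.FirstName).NotEmpty();
        RuleFor(x => x.LastName).MinimumLength(5);
    }
}
```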

It is possible to override how Wolverine handles validation failures, but I’d personally recommend just using the ProblemDetails default in most cases.

I would like to note that the way that Wolverine generates code for the Fluent Validation middleware is generally going to be more efficient at runtime than the typical IoC dependent equivalents you’ll frequently find in the MediatR space.

Explicit Validation

Let’s move on to validation rules that are more specific to your own problem domain, and especially the type of validation rules that would require you to examine the state of your system by exercising some kind of data access. These kinds of rules certainly can be done with custom Fluent Validation validators, but I strongly recommend you put that kind of validation directly into your message handlers or HTTP endpoints to colocate business logic together with the actual message handler or HTTP endpoint happy path.

One of the unique features of Wolverine in comparison to the typical “IHandler of T” application frameworks in .NET is Wolverine’s built in support for a type of low-ceremony Railway Programming, and this turns out to be perfect for one off validation rules.

In message handlers we’ve long had support for returning the HandlerContinuation enum from Validate() or Before() methods as a way to signal to Wolverine to conditionally stop all additional processing:

public static class ShipOrderHandler
{
    // This would be called first
    public static async Task<(HandlerContinuation, Order?, Customer?)> LoadAsync(ShipOrder command, IDocumentSession session)
    {
        var order = await session.LoadAsync<Order>(command.OrderId);
        if (order == null)
        {
            return (HandlerContinuation.Stop, null, null);
        }

        var customer = await session.LoadAsync<Customer>(command.CustomerId);

        return (HandlerContinuation.Continue, order, customer);
    }

    // The main method becomes the "happy path", which also helps simplify it
    public static IEnumerable<object> Handle(ShipOrder command, Order order, Customer customer)
    {
        // use the command data, plus the related Order & Customer data to
        // "decide" what action to take next
        yield return new MailOvernight(order.Id);
    }
}

But of course, with the example above, you could also write that with Wolverine’s declarative persistence like this:

public static class ShipOrderHandler
{
    // The main method becomes the "happy path", which also helps simplify it
    public static IEnumerable<object> Handle(
        ShipOrder command,

        // This is loaded by the OrderId on the ShipOrder command
        [Entity(Required = true)]
        Order order,

        // This is loaded by the CustomerId value on the ShipOrder command
        [Entity(Required = true)]
        Customer customer)
    {
        // use the command data, plus the related Order & Customer data to
        // "decide" what action to take next
        yield return new MailOvernight(order.Id);
    }
}

In the code above, Wolverine would stop the processing if either the Order or Customer entity referenced by the command message is missing. Similarly, if this code were in an HTTP endpoint instead, Wolverine would emit a ProblemDetails with a 400 status code and a message stating the data that is missing.

If you were using the code above with the integration with Marten or Polecat, Wolverine can even emit code that uses Marten or Polecat’s batch querying functionality to make your system more efficient by eliminating database round trips.

Likewise in the HTTP space, you could also return a ProblemDetails object directly from a Validate() method like:

public class ProblemDetailsUsageEndpoint
{
    public ProblemDetails Validate(NumberMessage message)
    {
        if (message.Number > 5)
            return new ProblemDetails
            {
                Detail = "Number is bigger than 5",
                Status = 400
            };

        // All good — continue!
        return WolverineContinue.NoProblems;
    }

    [WolverinePost("/problems")]
    public static string Post(NumberMessage message) => "Ok";
}

Even More Lightweight Validation!

When reviewing client code that uses the HandlerContinuation or ProblemDetails syntax, I definitely noticed the code can become verbose and noisy, especially compared to just embedding throw new InvalidOperationException("something is not right here"); directly in the main methods, which isn’t something I’d like to see people tempted to do.

Instead, Wolverine 5.18 added a more lightweight approach that lets you simply return a collection of strings from a Before() or Validate() method:

    public static IEnumerable<string> Validate(SimpleValidateEnumerableMessage message)
    {
        if (message.Number > 10)
        {
            yield return "Number must be 10 or less";
        }
    }

    // or

    public static string[] Validate(SimpleValidateStringArrayMessage message)
    {
        if (message.Number > 10)
        {
            return ["Number must be 10 or less"];
        }

        return [];
    }

At runtime, Wolverine will stop message processing if any validation messages are returned, or emit a ProblemDetails response in HTTP endpoints.
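The runtime contract here is easy to picture: run the validation step first, and only invoke the handler when it yields no messages. Here’s a tiny conceptual sketch of that contract in TypeScript (purely illustrative, not Wolverine’s generated code; the names are mine):

```typescript
// Conceptual sketch of the "return messages or continue" contract:
// an empty result means continue to the handler, any messages stop it.
type Validate<T> = (message: T) => string[];
type Handle<T> = (message: T) => void;

function withValidation<T>(validate: Validate<T>, handle: Handle<T>) {
    return (message: T): string[] => {
        const problems = validate(message);
        // Any validation messages stop processing (or would become a
        // ProblemDetails response in an HTTP endpoint)
        if (problems.length > 0) return problems;
        handle(message);
        return [];
    };
}
```

The real mechanics are generated ahead of time by Wolverine, but the observable behavior matches this shape.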

Summary

Hopefully Wolverine has you covered no matter which option you choose. A few practical takeaways:

  • Reach for Validate() / ValidateAsync() first whenever IoC services or database queries are involved or the validation logic is just specific to your message handler or HTTP endpoint.
  • Use Data Annotations middleware when your model types are already decorated with attributes and you want zero validator classes.
  • Use Fluent Validation middleware when you want reusable, composable validators shared across multiple handlers or endpoints.

All three strategies generate efficient, ahead-of-time compiled middleware via Wolverine’s code generation engine, keeping the runtime overhead minimal regardless of which path you choose.

SignalR + the Critter Stack

It’s early, so I shouldn’t be too cocky, but JasperFx Software is having success integrating SignalR with both Wolverine and Marten in our forthcoming CritterWatch product. In this post I’ll show you how we’re doing that, from the server side C# code all the way down to the client side TypeScript.

Last week I did a live stream talking about many of the details and a way too early demonstration of CritterWatch, JasperFx Software’s long planned management console for the “Critter Stack” tools (Marten, Wolverine, and soon to be Polecat).

A big technical wrinkle in the CritterWatch approach so far is our utilization of the SignalR messaging support built into Wolverine. Just like with external messaging brokers like Rabbit MQ or Azure Service Bus, Wolverine does a lot of work to remove the technical details of SignalR and lets you focus on just writing your application code.

In some ways, CritterWatch sits as a man in the middle between the intended CritterWatch user interface (Vue.js) and the Wolverine-enabled applications in your system:

Note that Wolverine will be required for CritterWatch, but if today you only use Marten and want CritterWatch to manage just the event sourcing, know that you will be able to use a very minimalistic Wolverine setup just for communication with CritterWatch without having to migrate your entire messaging infrastructure to Wolverine. And for that matter, Wolverine now has a pretty robust HTTP transport for asynchronous messaging that would work fine for CritterWatch integration.

As I said earlier, CritterWatch is going to depend very heavily on two-way WebSocket communication between the user interface and the CritterWatch server, and we’re utilizing Wolverine’s SignalR messaging transport (which was purposefully built for CritterWatch in the first place) to get that done. In the CritterWatch codebase, we have this little bit of Wolverine configuration:

    public static void AddCritterWatchServices(this WolverineOptions opts, NpgsqlDataSource postgresSource)
    {
        // Much more of course...
        opts.Services.AddWolverineHttp();
        
        opts.UseSignalR();
        
        // The publishing rule to route any message type that implements
        // a marker interface to the connected SignalR Hub
        opts.Publish(x =>
        {
            x.MessagesImplementing<ICritterStackWebSocketMessage>();
            x.ToSignalR();
        });


        // Really need this so we can handle messages in order for 
        // a particular service
        opts.MessagePartitioning.UseInferredMessageGrouping();
        opts.Policies.AllListeners(x => x.PartitionProcessingByGroupId(PartitionSlots.Five));
    }

And at the bottom of the ASP.Net Core application hosting CritterWatch, we’ll have this to configure the request pipeline:

builder.Services.AddWolverineHttp();
var app = builder.Build();
// Little bit more in the real code of course...
app.MapWolverineSignalRHub("/api/messages");
return await app.RunJasperFxCommands(args);

As you can infer from the Wolverine publishing rule above, we’re using a marker interface to let Wolverine “know” what messages should always be sent to SignalR:

/// <summary>
/// Marker interface for all messages that are sent to the CritterWatch web client
/// via web sockets
/// </summary>
public interface ICritterStackWebSocketMessage : ICritterWatchMessage, WebSocketMessage;

We also use that marker interface in a homegrown command line integration to generate TypeScript versions of all those messages with NJsonSchema, as well as the message types that go from the user interface to the CritterWatch server. Wolverine’s SignalR integration assumes that all messages sent to or received from SignalR are wrapped in a CloudEvents compliant JSON wrapper, but the only required members are type, which identifies what type of message it is, and data, which holds the actual message body as JSON. To make this easier, when we generate the TypeScript code we also inject a little property like this that we use to identify the message type sent from the client to the Wolverine powered back end:

export class CompactStreamResult implements WebsocketMessage {
    serviceName!: string;
    streamId!: string;
    success!: boolean;
    error!: string | undefined;
    queryId!: string | undefined;

    // THIS method is injected by our custom codegen
    // and helps us communicate with the server as
    // this matches Wolverine's internal identification of
    // this message
    get messageTypeName(): string {
        return "compact_stream_result";
    }

    // other stuff...

    init(_data?: any) {
        if (_data) {
            this.serviceName = _data["serviceName"];
            this.streamId = _data["streamId"];
            this.success = _data["success"];
            this.error = _data["error"];
            this.queryId = _data["queryId"];
        }
    }

    static fromJS(data: any): CompactStreamResult {
        data = typeof data === 'object' ? data : {};
        let result = new CompactStreamResult();
        result.init(data);
        return result;
    }

    toJSON(data?: any) {
        data = typeof data === 'object' ? data : {};
        data["serviceName"] = this.serviceName;
        data["streamId"] = this.streamId;
        data["success"] = this.success;
        data["error"] = this.error;
        data["queryId"] = this.queryId;
        return data;
    }
}

Most of the code above is generated by NJsonSchema, but our custom codegen inserts the get messageTypeName() property, which we use in the client side code below to wrap up messages to send back to our server:

  async function sendMessage(msg: WebsocketMessage) {
    if (conn.state === HubConnectionState.Connected) {
      const payload = 'toJSON' in msg ? (msg as any).toJSON() : msg
      const cloudEvent = JSON.stringify({
        id: crypto.randomUUID(),
        specversion: '1.0',
        type: msg.messageTypeName,
        source: 'Client',
        datacontenttype: 'application/json; charset=utf-8',
        time: new Date().toISOString(),
        data: payload,
      })
      await conn.invoke('ReceiveMessage', cloudEvent)
    }
  }
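For reference, the envelope that sendMessage builds is just a plain object where only type and data carry special meaning to Wolverine. A hedged sketch of the shape plus a parsing helper (the interface and function names here are mine, not part of the generated client code):

```typescript
// Minimal shape of the CloudEvents-style wrapper on the wire.
// Wolverine's SignalR transport only requires `type` and `data`;
// the rest is standard CloudEvents metadata.
interface CloudEventEnvelope {
    id: string;
    specversion: string;
    type: string;        // Wolverine's message type name, e.g. "compact_stream_result"
    source: string;
    datacontenttype: string;
    time: string;
    data: unknown;       // the actual message body as JSON
}

// Illustrative helper: parse a raw string off the socket and verify
// the two members Wolverine actually requires are present
function parseEnvelope(raw: string): CloudEventEnvelope {
    const parsed = JSON.parse(raw);
    if (typeof parsed.type !== 'string' || !('data' in parsed)) {
        throw new Error('Not a valid message envelope');
    }
    return parsed as CloudEventEnvelope;
}
```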

In the reverse direction, we receive the raw message from a connected WebSocket with the SignalR client, interrogate the expected CloudEvents wrapper, figure out what the message type is from there, deserialize the raw JSON to the right TypeScript type, and generally just relay that to a Pinia store where all the normal Vue.js + Pinia reactive user interface magic happens.

export function relayToStore(data: any) {
    const servicesStore = useServicesStore();
    const dlqStore = useDlqStore();
    const metricsStore = useMetricsStore();
    const projectionsStore = useProjectionsStore();
    const durabilityStore = useDurabilityStore();
    const eventsStore = useEventsStore();
    const scheduledMessagesStore = useScheduledMessagesStore();

    const envelope = typeof data === 'string' ? JSON.parse(data) : data;

    switch (envelope.type) {
        case "dead_letter_details":
            dlqStore.handleDeadLetterDetails(DeadLetterDetails.fromJS(envelope.data));
            break;
        case "dead_letter_queue_summary_results":
            dlqStore.handleDeadLetterQueueSummaryResults(DeadLetterQueueSummaryResults.fromJS(envelope.data));
            break;
        case "all_service_summaries": {
            const allSummaries = AllServiceSummaries.fromJS(envelope.data);
            servicesStore.handleAllServiceSummaries(allSummaries);
            if (allSummaries.persistenceCounts) {
                for (const pc of allSummaries.persistenceCounts) {
                    durabilityStore.handlePersistenceCountsChanged(pc);
                }
            }
            if (allSummaries.metricsRollups) {
                metricsStore.handleAllMetricsRollups(allSummaries.metricsRollups);
            }
            break;
        }
        case "summary_updated":
            servicesStore.handleSummaryUpdated(SummaryUpdated.fromJS(envelope.data));
            break;
        case "agent_and_node_state_changed":
            servicesStore.handleAgentAndNodeStateChanged(AgentAndNodeStateChanged.fromJS(envelope.data));
            break;
        case "service_summary_changed":
            servicesStore.handleServiceSummaryChanged(ServiceSummaryChanged.fromJS(envelope.data));
            break;
        case "metrics_rollup":
            metricsStore.handleMetricsRollup(MetricsRollup.fromJS(envelope.data));
            break;
        case "all_metrics_rollups":
            metricsStore.handleAllMetricsRollups(AllMetricsRollups.fromJS(envelope.data));
            break;
        case "shard_states_changed":
            projectionsStore.handleShardStatesChanged(ShardStatesChanged.fromJS(envelope.data));
            break;
        case "persistence_counts_changed":
            durabilityStore.handlePersistenceCountsChanged(PersistenceCountsChanged.fromJS(envelope.data));
            break;
        case "stream_details":
            eventsStore.handleStreamDetails(StreamDetails.fromJS(envelope.data));
            break;
        case "event_query_results":
            eventsStore.handleEventQueryResults(EventQueryResults.fromJS(envelope.data));
            break;
        case "compact_stream_result":
            eventsStore.handleCompactStreamResult(CompactStreamResult.fromJS(envelope.data));
            break;
        // *CASE ABOVE* -- do not remove this comment for the codegen please!
    }
}
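Because the switch above is generated, its verbosity is harmless, but the same routing could also be expressed as a dispatch table keyed on the envelope’s type member. A sketch of that alternative, with illustrative handler registrations standing in for the real store calls:

```typescript
// Illustrative alternative to a generated switch: a dispatch table
// keyed on the CloudEvents `type` member
type EnvelopeHandler = (data: unknown) => void;

const handlers = new Map<string, EnvelopeHandler>();

function register(type: string, handler: EnvelopeHandler): void {
    handlers.set(type, handler);
}

// Returns true if a handler was found for the envelope's type
function relay(envelope: { type: string; data: unknown }): boolean {
    const handler = handlers.get(envelope.type);
    if (!handler) return false; // unknown message types are ignored
    handler(envelope.data);
    return true;
}
```

We stuck with the switch because our codegen can extend it mechanically via that marker comment, but either shape works.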

And that’s really it. I omitted some of our custom codegen code (because it’s hokey), but it doesn’t do much more than find the message types in the .NET code that are marked as going to or coming from the Vue.js client and writes them as matching TypeScript types.

But wait, Marten gets into the act too!

With the Marten + Wolverine integration set up like this:

        opts.Services.AddMarten(m =>
        {
            // Other stuff...
            m.Projections.Add<ServiceSummaryProjection>(ProjectionLifecycle.Async);
        }).IntegrateWithWolverine(w =>
        {
            w.UseWolverineManagedEventSubscriptionDistribution = true;
        });

Marten can also get into the SignalR act through its support for “side effects” in projections. As the ServiceSummary projection is updated with new events in CritterWatch, we can raise messages reflecting the changes in state to notify our clients, with code like this from a SingleStreamProjection:

    public override ValueTask RaiseSideEffects(IDocumentOperations operations, IEventSlice<ServiceSummary> slice)
    {
        var hasShardStates = slice.Events().Any(x => x.Data is ShardStatesUpdated);

        if (hasShardStates)
        {
            var shardEvent = slice.Events().Last(x => x.Data is ShardStatesUpdated).Data as ShardStatesUpdated;
            slice.PublishMessage(new ShardStatesChanged(slice.Snapshot.Id, shardEvent!.States));
        }

        if (slice.Events().All(x => x.Data is IImpactsAgentOrNodes || x.Data is ShardStatesUpdated))
        {
            if (!hasShardStates)
            {
                slice.PublishMessage(new AgentAndNodeStateChanged(slice.Snapshot.Id, slice.Snapshot.Nodes, slice.Snapshot.Agents));
            }
        }
        else
        {
            slice.PublishMessage(new ServiceSummaryChanged(slice.Snapshot));
        }

        return new ValueTask();
    }

The Marten projection itself knows absolutely nothing about where those messages will go or how; Wolverine kicks in to help its Critter Stack sibling and deals with all the message delivery. The message types above all implement the ICritterStackWebSocketMessage interface, so they will get routed by Wolverine to SignalR. To rewind, the workflow here is:

  1. CritterWatch constantly receives messages from Wolverine applications with changes in state like new messaging endpoints being used, agents being reassigned, or nodes being started or shut down
  2. CritterWatch saves any changes in state as events to Marten (or later to SQL Server backed Polecat)
  3. The Marten async daemon processes those events to update CritterWatch’s ServiceSummary projection
  4. As pages of events are applied to individual services, Marten calls that RaiseSideEffects() method to relay some state changes to Wolverine, which will..
  5. Send those messages to SignalR based on Wolverine’s routing rules and on to the client side code which…
  6. Relays the incoming messages to the proper Pinia store

Summary

I won’t say that using Wolverine for processing and sending messages via SignalR is justified in every application, but it more than pays off if you have a highly interactive application that sends any number of messages between the user interface and the server.

Sometime last week I said online that no project is truly a failure if you learned something valuable from the effort that could help a later project succeed. When I wrote that, I was absolutely thinking about the work shown above and a failed effort of mine called Storyteller (Redux + early React.js + roll-your-own WebSockets support on the server). Storyteller went nowhere in the end, but it taught me a lot of valuable lessons about using WebSockets in a highly interactive application that have directly informed my work on CritterWatch.

Big Critter Stack Releases

The Critter Stack had a big day today with releases for both Marten and Wolverine.

First up, we have Marten 8.22 that included:

  • Lots of bug fixes, including several old LINQ related bugs and issues related to full text search that finally got addressed
  • Some improvements for the newer Composite Projections feature as users start to use it in real project work. Hat tip to Anne Erdtsieck on this one (and a JasperFx client needing an addition to it as well)
  • Some optimizations, including a potentially big one as Marten can now use a source generator to build some of the projection code that before depended on not perfectly efficient Expression compilation. This will impact “self aggregating” snapshot projections that use the Apply / Create / ShouldDelete conventions

Next, a giant Wolverine 5.16 release that brings:

  • Many, many bug fixes
  • Several small feature requests for our HTTP support
  • Improved resiliency for Kafka especially but also for any usage of external message brokers with Wolverine. See Sending Error Handling. Plus better error handling for durable listener endpoints when the transactional inbox database is unavailable
  • Wait, what? Wolverine has experimental support for CosmosDb as a transactional inbox/outbox and all of Wolverine’s declarative persistence helpers?
  • The ability to mark some message handlers or HTTP endpoints as opting out of automatic transactional middleware (for a JasperFx client). See this, but it applies to all persistence options.
  • Modular monolith usage improvements for a pair of JasperFx clients who are helping us stretch Wolverine to yet more use cases.
  • More to come on this, but we’ve recently slipped in Sqlite and Oracle support for Wolverine

Building a Greenfield System with the Critter Stack

JasperFx Software works hand in hand with our clients to improve their outcomes on software projects using the “Critter Stack” (Marten and Wolverine). Based on our engagements with client projects as well as the greater Critter Stack user base, we’ve built up quite a few optional usages and settings in the two frameworks to solve specific technical challenges.

The unfortunate reality of managing a long lived application framework like Wolverine or a complicated library like Marten is the need to continuously improve the tools while trying really hard not to introduce regressions for our clients when they upgrade. To that end, we’ve had to make several potentially helpful features “opt in”, meaning that users have to explicitly turn on feature-flag type settings for these features. A common cause of this is any change that introduces database schema changes, as we try really hard to only do that in major version releases (Wolverine 5.0 added some new tables to SQL Server and PostgreSQL storage, for example).

And yes, we’ve still introduced regression bugs in Marten or Wolverine far more times than I’d like, even with trying to be careful. In the end, I think the only guaranteed way to constantly and safely improve tools like the Critter Stack is to just be responsive to whatever problems slip through your quality gates and try to fix those problems quickly to regain trust.

With all that being said, let’s pretend we’re starting a greenfield project with the Critter Stack and we want to build in the best performing system possible with some added options for improved resiliency as well. To jump to the end state, this is what I’m proposing for a new optimized greenfield setup for users:

var builder = Host.CreateApplicationBuilder();

builder.Services.AddMarten(m =>
{
    // Much more coming...
    m.Connection(builder.Configuration.GetConnectionString("marten"));

    // 50% improvement in throughput, less "event skipping"
    m.Events.AppendMode = EventAppendMode.Quick;
    // or if you care about the timestamps -->
    m.Events.AppendMode = EventAppendMode.QuickWithServerTimestamps;

    // 100% do this, but be aggressive about taking advantage of it
    m.Events.UseArchivedStreamPartitioning = true;

    // These cause some database changes, so can't be defaults,
    // but these might help "heal" systems that have problems
    // later
    m.Events.EnableAdvancedAsyncTracking = true;

    // Enables you to mark events as just plain bad so they are skipped
    // in projections from here on out.
    m.Events.EnableEventSkippingInProjectionsOrSubscriptions = true;

    // If you do this, you pretty well have to use FetchForWriting()
    // in your command handlers, but you should be doing that anyway.
    // This will optimize the usage of Inline projections, but will force
    // you to treat your aggregate projection "write models" as being
    // immutable in your command handler code.
    // You'll want to use the "Decider Pattern" / "Aggregate Handler Workflow"
    // style for your commands rather than a self-mutating "AggregateRoot"
    m.Events.UseIdentityMapForAggregates = true;

    // Future proofing a bit. Will help with some future
    // projection rebuild optimizations
    m.Events.UseMandatoryStreamTypeDeclaration = true;

    // This is just annoying anyway
    m.DisableNpgsqlLogging = true;
})
// This will remove some runtime overhead from Marten
.UseLightweightSessions()

.IntegrateWithWolverine(x =>
{
    // Let Wolverine do the load distribution better than
    // what Marten by itself can do
    x.UseWolverineManagedEventSubscriptionDistribution = true;
});

builder.Services.AddWolverine(opts =>
{
    // This *should* have some performance improvements, but would
    // require downtime to enable in existing systems
    opts.Durability.EnableInboxPartitioning = true;

    // Extra resiliency for unexpected problems, but can't be
    // defaults because this causes database changes
    opts.Durability.InboxStaleTime = 10.Minutes();
    opts.Durability.OutboxStaleTime = 10.Minutes();

    // Just annoying
    opts.EnableAutomaticFailureAcks = false;

    // Relatively new behavior that will store "unknown" messages
    // in the dead letter queue for possible recovery later
    opts.UnknownMessageBehavior = UnknownMessageBehavior.DeadLetterQueue;
});

using var host = builder.Build();

return await host.RunJasperFxCommands(args);

Now, let’s talk more about some of these settings…

Lightweight Sessions with Marten

The first option we’re going to explicitly add is to use “lightweight” sessions in Marten:

var builder = Host.CreateApplicationBuilder();

builder.Services.AddMarten(m =>
{
    // Elided configuration...
})
// This will remove some runtime overhead from Marten
.UseLightweightSessions()

By default, Marten uses a heavier version of IDocumentSession that incorporates an identity map internally to track documents (entities) already loaded by that session. When you request to load an entity by its identity, Marten’s session will happily check whether it has already loaded that entity and hand you the same object back without making another database call.

The identity map usage is mostly helpful when you have unclear or deeply nested call stacks where different parts of the code might try to load the same data within the same HTTP request or command handling. If you follow what we call Critter Stack best practices, especially for Wolverine usage, you’ll know that we very strongly recommend against deep call stacks and excessive layering.

Moreover, I would argue that you should never need the identity map behavior if you were building a system with an idiomatic Critter Stack approach, so the default session type is actually harmful in that it adds extra runtime overhead. The “lightweight” sessions run leaner by completely eliminating all the dictionary storage and lookups.

Why, you ask, is the identity map behavior the default?

  1. We were originally designing Marten as a near drop-in replacement for RavenDb in a big system, so we had to mimic that behavior right off the bat to make the replacement in a timely fashion
  2. If we changed the default behavior, it could easily break code in existing systems that upgrade, in ways that are very hard to predict and unfortunately hard to diagnose. And of course, this is most likely a problem in the exact kind of codebases that are hard to reason about. How do I know this, and why am I so very certain, you ask? Scar tissue.

Wolverine Idioms for MediatR Users

The Wolverine community fields a lot of questions from people who are moving to Wolverine from their previous MediatR usage. A quite natural response is to try to use Wolverine as a pure drop-in replacement for MediatR and even to keep the existing MediatR idioms they’re already used to. However, Wolverine comes from a different philosophy than MediatR and most of the other “mediator” tools it has inspired, and using Wolverine with its own idioms can lead to much simpler code and more efficient execution. Inspired by a conversation I had online today, let’s jump into an example that I think shows quite a bit of contrast between the tools.

We’ve tried to lay out some of the differences between the tools in our Wolverine for MediatR Users guide, including the section this post is taken from.

Here’s an example of MediatR usage I borrowed from this blog post that shows the usage of MediatR within a shopping cart subsystem:

public class AddToCartRequest : IRequest<Result>
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

public class AddToCartHandler : IRequestHandler<AddToCartRequest, Result>
{
    private readonly ICartService _cartService;

    public AddToCartHandler(ICartService cartService)
    {
        _cartService = cartService;
    }

    public async Task<Result> Handle(AddToCartRequest request, CancellationToken cancellationToken)
    {
        // Logic to add the product to the cart using the cart service
        bool addToCartResult = await _cartService.AddToCart(request.ProductId, request.Quantity);
        bool isAddToCartSuccessful = addToCartResult; // Check if adding the product to the cart was successful.
        return Result.SuccessIf(isAddToCartSuccessful, "Failed to add the product to the cart."); // Return failure if adding to cart fails.
    }
}

public class CartController : ControllerBase
{
    private readonly IMediator _mediator;

    public CartController(IMediator mediator)
    {
        _mediator = mediator;
    }

    [HttpPost]
    public async Task<IActionResult> AddToCart([FromBody] AddToCartRequest request)
    {
        var result = await _mediator.Send(request);
        if (result.IsSuccess)
        {
            return Ok("Product added to the cart successfully.");
        }
        else
        {
            return BadRequest(result.ErrorMessage);
        }
    }
}

Note the usage of the custom Result type returned from the message handler. Folks using MediatR love these custom Result types when passing information between logical layers because they avoid throwing exceptions and communicate failure cases more clearly.

See Andrew Lock on Working with the result pattern for more information about the Result pattern.

Wolverine is all about reducing code ceremony and we always strive to write application code as synchronous pure functions whenever possible, so let’s just write the exact same functionality as above using Wolverine idioms to shrink down the code:

public static class AddToCartRequestEndpoint
{
    // Remember, we can do validation in middleware, or
    // even do a custom Validate() : ProblemDetails method
    // to act as a filter so the main method is the happy path
    [WolverinePost("/api/cart/add"), EmptyResponse]
    public static Update<Cart> Post(
        AddToCartRequest request,

        // This usage will return a 400 status code if the Cart
        // cannot be found
        [Entity(OnMissing = OnMissing.ProblemDetailsWith400)] Cart cart)
    {
        return cart.TryAddRequest(request) ? Storage.Update(cart) : Storage.Nothing(cart);
    }
}

There’s a lot going on above, so let’s dive into some of the details:

I used Wolverine.HTTP to write the HTTP endpoint so we only have one piece of code for our “vertical slice” instead of both a Controller method and a matching message handler for the same logical command. Wolverine.HTTP embraces our Railway Programming model and directly supports the ProblemDetails specification as a means of stopping the HTTP request early, so validation pre-conditions can be checked by middleware and the main endpoint method is really the “happy path”.

The code above is using Wolverine’s “declarative data access” helpers you see in the [Entity] usage. We realized early on that a lot of message handlers or HTTP endpoints need to work on a single domain entity or a handful of entities loaded by identity values riding on either command messages, HTTP requests, or HTTP routes. At runtime, if the Cart isn’t found by loading it from your configured application persistence (which could be EF Core, Marten, or RavenDb at this time), the whole HTTP request would stop with status code 400 and a message communicated through ProblemDetails that the requested Cart cannot be found.

The key point I’m trying to prove is that idiomatic Wolverine results in potentially less repetitive code, less code ceremony, and less layering than MediatR idioms. Sure, it’s going to take a bit to get used to Wolverine idioms, but the potential payoff is code that’s easier to reason about and much easier to unit test — especially if you’ll buy into our A-Frame Architecture approach for organizing code within your slices.

Validation Middleware

As another example, just to show how Wolverine’s runtime differs from MediatR’s, let’s consider the very common case of using Fluent Validation (or now Data Annotations too!) middleware in front of message handlers or HTTP requests. With MediatR, you might use an IPipelineBehavior<T> implementation like this that will wrap all requests:

    public class ValidationBehaviour<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse> where TRequest : IRequest<TResponse>
    {
        private readonly IEnumerable<IValidator<TRequest>> _validators;
        public ValidationBehaviour(IEnumerable<IValidator<TRequest>> validators)
        {
            _validators = validators;
        }
      
        public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next)
        {
            if (_validators.Any())
            {
                var context = new ValidationContext<TRequest>(request);
                var validationResults = await Task.WhenAll(_validators.Select(v => v.ValidateAsync(context, cancellationToken)));
                var failures = validationResults.SelectMany(r => r.Errors).Where(f => f != null).ToList();
                if (failures.Count != 0)
                    throw new ValidationException(failures);
            }
          
            return await next();
        }
    }

I’ve seen plenty of alternatives out there with slightly different implementations. In some cases folks will use service location to probe the application’s IoC container for any possible IValidator<T> implementations for the current request. In all cases though, the implementations use runtime logic on every possible request to check whether there is any validation logic. The Wolverine version of Fluent Validation middleware does things a bit differently, with less runtime overhead, and will also produce cleaner exception stack traces when things go wrong. Don’t laugh, we really did design Wolverine quite purposely to avoid the really nasty kind of exception stack traces you get from many other middleware or “behavior” using frameworks, like Wolverine’s predecessor tool FubuMVC did 😦

Let’s say that you have a Wolverine.HTTP endpoint like so:

    public record CreateCustomer
    (
        string FirstName,
        string LastName,
        string PostalCode
    )
    {
        public class CreateCustomerValidator : AbstractValidator<CreateCustomer>
        {
            public CreateCustomerValidator()
            {
                RuleFor(x => x.FirstName).NotNull();
                RuleFor(x => x.LastName).NotNull();
                RuleFor(x => x.PostalCode).NotNull();
            }
        }
    }

    public static class CreateCustomerEndpoint
    {
        [WolverinePost("/validate/customer")]
        public static string Post(CreateCustomer customer)
        {
            return "Got a new customer";
        }

        [WolverinePost("/validate/customer2")]
        public static string Post2([FromQuery] CreateCustomer customer)
        {
            return "Got a new customer";
        }
    }

In the application bootstrapping, I’ve added this option:

    app.MapWolverineEndpoints(opts =>
    {
        // more configuration for HTTP...

        // Opting into the Fluent Validation middleware from
        // Wolverine.Http.FluentValidation
        opts.UseFluentValidationProblemDetailMiddleware();
    });

Just like with MediatR, you would need to register the Fluent Validation validator types in your IoC container as part of application bootstrapping. Now, here’s how Wolverine’s model is very different from MediatR’s pipeline behaviors. While MediatR applies that ValidationBehaviour to each and every message handler in your application whether or not that message type actually has any registered validators, Wolverine is able to peek into the IoC configuration and “know” whether there are registered validators for any given message type. If there are any registered validators, Wolverine will utilize them in the code it generates to execute the HTTP endpoint method shown above for creating a customer. If there is only one validator, and that validator is registered with Singleton scope in the IoC container, Wolverine generates this code:

        public class POST_validate_customer : Wolverine.Http.HttpHandler
        {
            private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
            private readonly Wolverine.Http.FluentValidation.IProblemDetailSource<WolverineWebApi.Validation.CreateCustomer> _problemDetailSource;
            private readonly FluentValidation.IValidator<WolverineWebApi.Validation.CreateCustomer> _validator;
    
            public POST_validate_customer(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Http.FluentValidation.IProblemDetailSource<WolverineWebApi.Validation.CreateCustomer> problemDetailSource, FluentValidation.IValidator<WolverineWebApi.Validation.CreateCustomer> validator) : base(wolverineHttpOptions)
            {
                _wolverineHttpOptions = wolverineHttpOptions;
                _problemDetailSource = problemDetailSource;
                _validator = validator;
            }
    
    
    
            public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
            {
                // Reading the request body via JSON deserialization
                var (customer, jsonContinue) = await ReadJsonAsync<WolverineWebApi.Validation.CreateCustomer>(httpContext);
                if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;
                
                // Execute FluentValidation validators
                var result1 = await Wolverine.Http.FluentValidation.Internals.FluentValidationHttpExecutor.ExecuteOne<WolverineWebApi.Validation.CreateCustomer>(_validator, _problemDetailSource, customer).ConfigureAwait(false);
    
                // Evaluate whether or not the execution should be stopped based on the IResult value
                if (result1 != null && !(result1 is Wolverine.Http.WolverineContinue))
                {
                    await result1.ExecuteAsync(httpContext).ConfigureAwait(false);
                    return;
                }
    
    
                
                // The actual HTTP request handler execution
                var result_of_Post = WolverineWebApi.Validation.ValidatedEndpoint.Post(customer);
    
                await WriteString(httpContext, result_of_Post);
            }
    
        }

    I should note that Wolverine’s Fluent Validation middleware adds no code at all to an HTTP endpoint when there are no known Fluent Validation validators for the endpoint’s request model. Moreover, Wolverine generates slightly different code for multiple validators versus a single validator, as a way of wringing a little more efficiency out of the common case of having only a single validator registered for the request type.

    The point here is that Wolverine is trying to generate the most efficient code possible based on what it can glean from the IoC container registrations and the signature of the HTTP endpoint or message handler methods, while the MediatR model effectively has to rely on runtime wrappers and conditional logic.

    Marten’s Aggregation Projection Subsystem

    Marten has very rich support for projecting events into read, write, or query models. While there are other capabilities as well, the most common usage is probably to aggregate related events into a single view. Marten projections can be executed Live, meaning that Marten builds the view on the fly by loading the target events into memory. Projections can also be executed Inline, meaning that the projected views are persisted as part of the same transaction that captures the events applying to that projection. For this post though, I’m mostly talking about projections running asynchronously in the background as events are captured into the database (think eventual consistency).

    Aggregate Projections in Marten combine some sort of grouping of events and process them to create a single aggregated document representing the state of those events. These projections come in two flavors:

    Single Stream Projections create a rolled up view of all or a segment of the events within a single event stream. These projections are done either by using the SingleStreamProjection<TDoc, TId> base type or by creating a “self aggregating” Snapshot approach with conventional Create/Apply/ShouldDelete methods that mutate or evolve the snapshot based on new events.

    Multi Stream Projections create a rolled up view of a user-defined grouping of events across streams. These projections are done by sub-classing the MultiStreamProjection<TDoc, TId> class and are further described in Multi-Stream Projections. An example of a multi-stream projection might be a “query model” within an accounting system of some sort that rolls up the value of all unpaid invoices by active client.

    You can also use a MultiStreamProjection to create views that are a segment of a single stream over time or version. Imagine that you have a system that models the activity of a bank account with event sourcing. You could use a MultiStreamProjection to create a view that summarizes the activity of a single bank account within a calendar month.
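    To make that bank account idea concrete, here's a small self-contained sketch in plain LINQ (not Marten's actual grouping API) that groups hypothetical AccountTransacted events by a composite account-plus-month key, which is essentially the custom grouping such a MultiStreamProjection would be configured to perform:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical event shape for the bank account example. In Marten, the
// stream id and timestamp would come from event metadata instead.
public record AccountTransacted(Guid AccountId, DateOnly Date, decimal Amount);

public static class MonthlySlicer
{
    // Group a range of events by (account, year, month) -- the same kind of
    // custom "slicing" a multi-stream projection performs before evolving
    // each monthly summary document
    public static Dictionary<(Guid Account, int Year, int Month), List<AccountTransacted>> Slice(
        IEnumerable<AccountTransacted> events) =>
        events
            .GroupBy(e => (e.AccountId, e.Date.Year, e.Date.Month))
            .ToDictionary(g => g.Key, g => g.ToList());
}
```

    The takeaway is just that "a segment of a single stream over time" is still a grouping operation, with a key that happens to combine the stream identity with a time bucket.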

    The ability to use explicit code to define projections was hugely improved in the Marten 8.0 release.

    Within your aggregation projection, you can express the logic about how Marten combines events into a view through either conventional methods (original, old school Marten) or through completely explicit code.
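    As a sketch of what the explicit-code style tends to look like, an "evolve" function is typically a switch expression from (current snapshot, event) to the next snapshot. The event and snapshot types below are simplified stand-ins invented for this example, not Marten's own API:

```csharp
using System;
using System.Linq;

// Simplified stand-in types invented for this sketch
public record QuestStarted(Guid QuestId);
public record MembersAdded(string[] Members);
public record QuestSnapshot(Guid Id, string[] Members);

public static class QuestEvolver
{
    // Explicit aggregation code: take the current snapshot (null if none
    // exists yet) and one event, and return the next snapshot
    public static QuestSnapshot? Evolve(QuestSnapshot? snapshot, object @event) =>
        @event switch
        {
            QuestStarted s => new QuestSnapshot(s.QuestId, Array.Empty<string>()),
            MembersAdded j when snapshot is not null =>
                snapshot with { Members = snapshot.Members.Union(j.Members).ToArray() },
            _ => snapshot // unrecognized events leave the snapshot untouched
        };
}
```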

    Within an aggregation, you also have a number of more advanced options.

    Simple Example

    The most common usage is to create a “write model” that projects the current state for a single stream, so on that note, let’s jump into a simple example.

    I’m huge into epic fantasy book series, hence the silly original problem domain in the very oldest code samples. Hilariously, Marten has fielded and accepted pull requests that corrected our modeling of the timeline of the Lord of the Rings in sample code.

    Martens on a Quest

    Let’s say that we’re building a system to track the progress of a traveling party on a quest within an epic fantasy series like “The Lord of the Rings” or the “Wheel of Time” and we’re using event sourcing to capture state changes when the “quest party” adds or subtracts members. We might very well need a “write model” for the current state of the quest for our command handlers like this one:

    public sealed record QuestParty(Guid Id, List<string> Members)
    {
        // These methods take in events and update the QuestParty
        public static QuestParty Create(QuestStarted started) => new(started.QuestId, []);

        public static QuestParty Apply(MembersJoined joined, QuestParty party) =>
            party with
            {
                Members = party.Members.Union(joined.Members).ToList()
            };

        public static QuestParty Apply(MembersDeparted departed, QuestParty party) =>
            party with
            {
                Members = party.Members.Where(x => !departed.Members.Contains(x)).ToList()
            };

        public static QuestParty Apply(MembersEscaped escaped, QuestParty party) =>
            party with
            {
                Members = party.Members.Where(x => !escaped.Members.Contains(x)).ToList()
            };
    }

    For a little more context, the QuestParty above might be consumed in a command handler like this:

    public record AddMembers(Guid Id, int Day, string Location, string[] Members);

    public static class AddMembersHandler
    {
        public static async Task HandleAsync(AddMembers command, IDocumentSession session)
        {
            // Fetch the current state of the quest
            var quest = await session.Events.FetchForWriting<QuestParty>(command.Id);
            if (quest.Aggregate == null)
            {
                // Bad quest id, do nothing in this sample case
                return;
            }

            var newMembers = command.Members.Where(x => !quest.Aggregate.Members.Contains(x)).ToArray();
            if (!newMembers.Any())
            {
                return;
            }

            quest.AppendOne(new MembersJoined(command.Id, command.Day, command.Location, newMembers));
            await session.SaveChangesAsync();
        }
    }

    How Aggregation Works

    Just to understand a little bit more about the capabilities of Marten’s aggregation projections, let’s look at the diagram below that tries to visualize the runtime workflow of aggregation projections inside of the Async Daemon background process:

    How Aggregation Works
    1. The Daemon is constantly pushing a range of events at a time to an aggregation projection. For example, events 1,000 to 2,000 by sequence number.
    2. The aggregation “slices” the incoming range of events into a group of EventSlice objects that establishes a relationship between the identity of an aggregated document and the events that should be applied during this batch of updates for that identity. To be more concrete, a single stream projection for QuestParty would be creating an EventSlice for each quest id it sees in the current range of events. Multi-stream projections will have some kind of custom “slicing” or grouping. For example, maybe in our Quest tracking system we have a multi-stream projection that tries to track how many monsters of each type are defeated. That projection might “slice” by looking for all MonsterDefeated events across all streams and group or slice incoming events by the type of monster. The “slicing” logic is automatic for single stream projections, but will require explicit configuration or explicitly written logic for multi stream projections.
    3. Once the projection has a known list of all the aggregate documents that will be updated by the current range of events, the projection will fetch each persisted document, first from any active aggregate cache in memory, then by making a single batched request to the Marten document storage for any missing documents and adding these to any active cache (see Optimizing Performance for more information about the potential caching).
    4. The projection will execute any event enrichment against the now known group of EventSlice. This process gives you a hook to efficiently “enrich” the raw event data with extra data lookups from Marten document storage or even other sources.
    5. Most of the work as a developer is in the application or “Evolve” step of the diagram above. After the “slicing”, the aggregation has turned the range of raw event data into EventSlice objects that contain the current snapshot of a projected document by its identity (if one exists), the identity itself, and the events from within that original range that should be applied on top of the current snapshot to “evolve” it to reflect those events. This can be coded either with the conventional Apply/Create/ShouldDelete methods or using explicit code, which almost inevitably means a switch statement. Using the QuestParty example again, the aggregation projection would get an EventSlice that contains the identity of an active quest, the snapshot of the current QuestParty document that is persisted by Marten, and the new MembersJoined et al events that should be applied to the existing QuestParty object to derive the new version of QuestParty.
    6. Just before Marten persists all the changes from the application / evolve step, you have the RaiseSideEffects() hook to potentially raise “side effects” like appending additional events based on the now updated state of the projected aggregates or publishing the new state of an aggregate through messaging (Wolverine has first class support for Marten projection side effects through its Marten integration into the full “Critter Stack”)
    7. For the current event range and event slices, Marten will send all aggregate document updates or deletions, new event appending operations, and even outboxed, outgoing messages sent via side effects (if you’re using the Wolverine integration) in batches to the underlying PostgreSQL database. I’m calling this out because we’ve constantly found in Marten development that command batching to PostgreSQL is a huge factor in system performance and the async daemon has been designed to try to minimize the number of network round trips between your application and PostgreSQL at every turn.
    8. Assuming the transaction succeeds for the current event range and the operation batch in the previous step, Marten will call “after commit” observers. This notification for example will release any messages raised as a side effect and actually send those messages via whatever is doing the actual publishing (probably Wolverine).
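    The slicing and evolving steps above can be sketched in a few lines of plain C#. This uses an in-memory dictionary in place of Marten's document storage and invented event/document shapes, so it only illustrates the shape of the daemon's workflow, not Marten's real internals:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Invented stand-ins for a stored event and its projected document
public record StoredEvent(Guid StreamId, long Sequence, string MemberName);

public static class MiniDaemon
{
    // Step 2: "slice" the incoming range of events by stream identity
    public static Dictionary<Guid, List<StoredEvent>> Slice(IEnumerable<StoredEvent> range) =>
        range.GroupBy(e => e.StreamId)
             .ToDictionary(g => g.Key, g => g.OrderBy(e => e.Sequence).ToList());

    // Steps 3 and 5: fetch each existing snapshot, evolve it with its slice,
    // then queue the updated snapshot to be written back in one batch
    public static void ApplySlices(
        Dictionary<Guid, List<string>> store,
        Dictionary<Guid, List<StoredEvent>> slices)
    {
        foreach (var (streamId, events) in slices)
        {
            var snapshot = store.TryGetValue(streamId, out var existing)
                ? existing
                : new List<string>();

            foreach (var e in events) snapshot.Add(e.MemberName);
            store[streamId] = snapshot;
        }
    }
}
```

    The real daemon adds caching, enrichment, side effects, and batched database commands around this core loop, but the fundamental motion is the same: group the event range by document identity, then fold each group onto its snapshot.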

    Marten happily supports both immutable and mutable data types for the aggregate documents produced by projections, though the application code looks a little different between the two.

    Starting with Marten 8.0, we’ve tried somewhat to conform to the terminology used by the Functional Event Sourcing Decider paper by Jeremie Chassaing. To that end, the API now refers to a “snapshot” that really just means a version of the projection and “evolve” as the step of applying new events to an existing “snapshot” to calculate a new “snapshot.”