Natural Keys in the Critter Stack

Just to level set everyone, there are two general categories of identifiers we use in software:

  • “Surrogate” keys are data elements like Guid values, database auto numbering or sequences, or snowflake generated identifiers that have no real business meaning and just try to be unique values.
  • “Natural” keys have some kind of business meaning and usually utilize some piece of existing information like email addresses or phone numbers. A natural key could also be an externally supplied identifier from your clients. In fact, it’s quite common to have your own tracking identifier (usually a surrogate key) while also having to track a client or user’s own identification for the same business entity.

That very last sentence is where this post takes off. You see, Marten can happily track event streams with either Guid identifiers (surrogate key) or string identifiers — or strongly typed identifiers that wrap an inner Guid or string, but in this case that’s really the same thing, just with more style I guess. Likewise, in combination with Wolverine for our recommended “aggregate handler workflow” approach to building command handlers, we’ve only supported the stream id or key. Until now!

With the Marten 8.23 and Wolverine 5.18 releases last week (we’ve been very busy and there are newer releases now), you are now able to “tag” Marten (or Polecat!) event streams with a natural key in addition to the surrogate stream id and use that natural key in conjunction with Wolverine’s aggregate handler workflow.

Of course, if you use strings as the stream identifier you could already use natural keys, but let’s just focus on the case of Guid identified streams that are also tagged with some kind of natural key that will be supplied by users in the commands sent to the system.

First, to tag streams with natural keys in Marten, you have to have a strongly typed identifier type for the natural key. Next, there’s a little bit of attribute decoration in the targeted document type of a single stream projection, i.e., the “write model” for an event stream. Here’s an example from the Marten documentation:

public record OrderNumber(string Value);
public record InvoiceNumber(string Value);

public class OrderAggregate
{
    public Guid Id { get; set; }

    [NaturalKey]
    public OrderNumber OrderNum { get; set; }

    public decimal TotalAmount { get; set; }
    public string CustomerName { get; set; }
    public bool IsComplete { get; set; }

    [NaturalKeySource]
    public void Apply(OrderCreated e)
    {
        OrderNum = e.OrderNumber;
        CustomerName = e.CustomerName;
    }

    public void Apply(OrderItemAdded e)
    {
        TotalAmount += e.Price;
    }

    [NaturalKeySource]
    public void Apply(OrderNumberChanged e)
    {
        OrderNum = e.NewOrderNumber;
    }

    public void Apply(OrderCompleted e)
    {
        IsComplete = true;
    }
}

In particular, see the usage of [NaturalKey], which should be self-explanatory. Also see the [NaturalKeySource] attribute that we’re using to mark the Apply() methods where the natural key value is set or might change. Marten is starting to use source generators for some projection internals (in place of some nasty, not entirely as efficient as it should have been, Expression-compiled-to-Lambda functions).

And that’s that, really. You’re now able to use the designated natural keys as the input to an “aggregate handler workflow” command handler with Wolverine. See Natural Keys from the Wolverine documentation for more information.

For a little more information:

  • The natural keys are stored in a separate table, and when using FetchForWriting(), Marten is doing an inner join from the tag table for that natural key type to the mt_streams table in the Marten database
  • You can change the natural key for a stream over time while keeping the same surrogate stream id
  • We expect this to be most useful when you want to use the Guid surrogate keys for uniqueness in your own system, but you frequently receive a natural key from API users of your system — or at least this has been encountered by a couple different JasperFx Software customers.
  • The natural key storage does have a unique value constraint on the “natural key” part of the storage
  • Really only a curiosity, but this was done in the same wave of development as Marten’s new DCB support
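
Conceptually, the FetchForWriting() lookup described in the first bullet above resolves the natural key to a stream with a join. As a purely hypothetical illustration — the actual table and column names are internal Marten details and will differ — the query shape is something like:

```sql
-- Hypothetical sketch only; real table/column names are Marten internals
select s.*
from natural_key_order_number k
inner join mt_streams s on s.id = k.stream_id
where k.value = 'ORD-1001';
```

The unique constraint mentioned above would live on the value column of that tag table.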

Validation Options in Wolverine

Wolverine — the event-driven messaging and HTTP framework for .NET — provides a rich, layered set of options for validating incoming data. Whether you are building HTTP endpoints or message handlers, Wolverine meets you where you are: from zero-configuration inline checks to full Fluent Validation or Data Annotation middleware support for both command handlers and HTTP endpoints.

Let’s maybe oversimplify validation scenarios and say they’ll fall into two buckets:

  1. Run-of-the-mill field-level validation rules like required fields or value ranges. These rules are the bread and butter of dedicated validation frameworks like Fluent Validation or Microsoft’s Data Annotations markup.
  2. Validation rules that are specific to your business domain and might involve checks against the existing state of your system beyond the command message itself.

Let’s first look at Wolverine’s Data Annotation integration that is completely baked into the core WolverineFx NuGet package. To get started, just opt into the Data Annotations middleware for message handlers like this:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Apply the validation middleware
        opts.UseDataAnnotationsValidation();
    }).StartAsync();

In message handlers, this middleware will kick in for any message type that has any validation attributes, as in this example:

public record CreateCustomer(
    // you can use the attributes on a record, but you need to
    // add the `property` modifier to the attribute
    [property: Required] string FirstName,
    [property: MinLength(5)] string LastName,
    [property: PostalCodeValidator] string PostalCode
) : IValidatableObject
{
    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        // you can implement `IValidatableObject` for custom
        // validation logic
        yield break;
    }
}

public class PostalCodeValidatorAttribute : ValidationAttribute
{
    public override bool IsValid(object? value)
    {
        // custom attributes are supported
        return true;
    }
}

public static class CreateCustomerHandler
{
    public static void Handle(CreateCustomer customer)
    {
        // do whatever you'd do here, but this won't be called
        // at all if the DataAnnotations Validation rules fail
    }
}

By default for message handlers, any validation errors are logged, then the current execution is stopped through the usage of the HandlerContinuation value we’ll discuss later.

For Wolverine.HTTP integration with Data Annotations, use:

app.MapWolverineEndpoints(opts =>
{
// Use Data Annotations that are built
// into the Wolverine.HTTP library
opts.UseDataAnnotationsValidationProblemDetailMiddleware();
});

Likewise, this middleware will only apply to HTTP endpoints that have a request input model that contains data annotation attributes. In this case though, Wolverine is using the ProblemDetails specification to report validation errors back to the caller with a status code of 400 by default.
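
For reference, a failed request comes back with a ProblemDetails-style body roughly like the following sample (the exact field contents will vary by model and rule):

```json
{
  "title": "One or more validation errors occurred.",
  "status": 400,
  "errors": {
    "FirstName": ["The FirstName field is required."]
  }
}
```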

Fluent Validation Middleware

Similarly, the Fluent Validation integration works more or less the same way, but requires the WolverineFx.FluentValidation package for message handlers and the WolverineFx.Http.FluentValidation package for HTTP endpoints. Wolverine also ships helpers for discovering and registering Fluent Validation validators that apply some Wolverine-specific performance optimizations: most validators are registered with a Singleton lifetime just to allow Wolverine to generate more optimized code.

It is possible to override how Wolverine handles validation failures, but I’d personally recommend just using the ProblemDetails default in most cases.

I would like to note that the way that Wolverine generates code for the Fluent Validation middleware is generally going to be more efficient at runtime than the typical IoC dependent equivalents you’ll frequently find in the MediatR space.
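
To sketch what that looks like with the CreateCustomer record from the earlier example — the validator class name here is my own invention, and the rules simply mirror the attribute rules from the Data Annotations sample — a Fluent Validation validator is just a class that the Wolverine helpers can discover from your application assembly:

```csharp
using FluentValidation;

// Hypothetical validator for the earlier CreateCustomer record;
// the rules mirror the [Required] and [MinLength(5)] attributes
public class CreateCustomerValidator : AbstractValidator<CreateCustomer>
{
    public CreateCustomerValidator()
    {
        RuleFor(x => x.FirstName).NotEmpty();
        RuleFor(x => x.LastName).MinimumLength(5);
    }
}
```

With the middleware opted in, the validator runs before the message handler or HTTP endpoint, and failures short-circuit execution just like the Data Annotations middleware does.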

Explicit Validation

Let’s move on to validation rules that are more specific to your own problem domain, and especially the type of validation rules that would require you to examine the state of your system by exercising some kind of data access. These kinds of rules certainly can be done with custom Fluent Validation validators, but I strongly recommend you put that kind of validation directly into your message handlers or HTTP endpoints to colocate the business logic with the actual message handler or HTTP endpoint happy path.

One of the unique features of Wolverine in comparison to the typical “IHandler of T” application frameworks in .NET is Wolverine’s built in support for a low-ceremony style of Railway Programming, and this turns out to be perfect for one off validation rules.

In message handlers we’ve long had support for returning the HandlerContinuation enum from Validate() or Before() methods as a way to signal to Wolverine to conditionally stop all additional processing:

public static class ShipOrderHandler
{
    // This would be called first
    public static async Task<(HandlerContinuation, Order?, Customer?)> LoadAsync(ShipOrder command, IDocumentSession session)
    {
        var order = await session.LoadAsync<Order>(command.OrderId);
        if (order == null)
        {
            return (HandlerContinuation.Stop, null, null);
        }

        var customer = await session.LoadAsync<Customer>(command.CustomerId);
        return (HandlerContinuation.Continue, order, customer);
    }

    // The main method becomes the "happy path", which also helps simplify it
    public static IEnumerable<object> Handle(ShipOrder command, Order order, Customer customer)
    {
        // use the command data, plus the related Order & Customer data to
        // "decide" what action to take next
        yield return new MailOvernight(order.Id);
    }
}

But of course, with the example above, you could also write that with Wolverine’s declarative persistence like this:

public static class ShipOrderHandler
{
    // The main method becomes the "happy path", which also helps simplify it
    public static IEnumerable<object> Handle(
        ShipOrder command,

        // This is loaded by the OrderId on the ShipOrder command
        [Entity(Required = true)]
        Order order,

        // This is loaded by the CustomerId value on the ShipOrder command
        [Entity(Required = true)]
        Customer customer)
    {
        // use the command data, plus the related Order & Customer data to
        // "decide" what action to take next
        yield return new MailOvernight(order.Id);
    }
}

In the code above, Wolverine would stop the processing if either the Order or Customer entity referenced by the command message is missing. Similarly, if this code were in an HTTP endpoint instead, Wolverine would emit a ProblemDetails with a 400 status code and a message stating the data that is missing.

If you were using the code above with the integration with Marten or Polecat, Wolverine can even emit code that uses Marten or Polecat’s batch querying functionality to make your system more efficient by eliminating database round trips.
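
As a rough sketch of the idea — this isn’t the literal generated code, just the Marten batch query API that it leans on — both entities come back in a single database round trip:

```csharp
// Both loads are queued up into one batched query, so there is
// a single round trip to the database instead of two
var batch = session.CreateBatchQuery();
var orderTask = batch.Load<Order>(command.OrderId);
var customerTask = batch.Load<Customer>(command.CustomerId);
await batch.Execute();

var order = await orderTask;
var customer = await customerTask;
```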

Likewise in the HTTP space, you could also return a ProblemDetails object directly from a Validate() method like:

public class ProblemDetailsUsageEndpoint
{
    public ProblemDetails Validate(NumberMessage message)
    {
        if (message.Number > 5)
        {
            return new ProblemDetails
            {
                Detail = "Number is bigger than 5",
                Status = 400
            };
        }

        // All good — continue!
        return WolverineContinue.NoProblems;
    }

    [WolverinePost("/problems")]
    public static string Post(NumberMessage message) => "Ok";
}

Even More Lightweight Validation!

When reviewing client code that uses the HandlerContinuation or ProblemDetails syntax, I definitely noticed the code can become verbose and noisy, especially compared to just embedding throw new InvalidOperationException("something is not right here"); code directly in the main methods — which isn’t something I’d like to see people tempted to do.

Instead, Wolverine 5.18 added a more lightweight approach that allows you to just return a collection of strings from a Before() or Validate() method:

    public static IEnumerable<string> Validate(SimpleValidateEnumerableMessage message)
    {
        if (message.Number > 10)
        {
            yield return "Number must be 10 or less";
        }
    }

    // or

    public static string[] Validate(SimpleValidateStringArrayMessage message)
    {
        if (message.Number > 10)
        {
            return ["Number must be 10 or less"];
        }

        return [];
    }

At runtime, Wolverine will stop the handler if any messages are returned, or emit a ProblemDetails response in HTTP endpoints.

Summary

Hopefully, Wolverine has options to cover you no matter what. A few practical takeaways:

  • Reach for Validate() / ValidateAsync() first whenever IoC services or database queries are involved or the validation logic is just specific to your message handler or HTTP endpoint.
  • Use Data Annotations middleware when your model types are already decorated with attributes and you want zero validator classes.
  • Use Fluent Validation middleware when you want reusable, composable validators shared across multiple handlers or endpoints.

All three strategies generate efficient, ahead-of-time compiled middleware via Wolverine’s code generation engine, keeping the runtime overhead minimal regardless of which path you choose.

SignalR + the Critter Stack

It’s early so I shouldn’t be too cocky, but JasperFx Software is having success integrating SignalR with both Wolverine and Marten in our forthcoming CritterWatch product. In this post I’ll show you how we’re doing that, from the server side C# code all the way down to the client side TypeScript.

Last week I did a live stream talking about many of the details and giving a way too early demonstration of CritterWatch, JasperFx Software’s long-planned management console for the “Critter Stack” tools (Marten, Wolverine, and soon to be Polecat).

A big technical wrinkle in the CritterWatch approach so far is our utilization of the SignalR messaging support built into Wolverine. Just like with external messaging brokers like RabbitMQ or Azure Service Bus, Wolverine does a lot of work to remove the technical details of SignalR and lets you focus on just writing your application code.

In some ways, CritterWatch is kind of a man in the middle between the intended CritterWatch user interface (Vue.js) and the Wolverine enabled applications in your system:

Note that Wolverine will be required for CritterWatch, but if today you only use Marten and want CritterWatch to manage just the event sourcing, know that you will be able to use a very minimalistic Wolverine setup just for communication with CritterWatch without having to migrate your entire messaging infrastructure to Wolverine. And for that matter, Wolverine now has a pretty robust HTTP transport for asynchronous messaging that would work fine for CritterWatch integration.

As I said earlier, CritterWatch is going to depend very heavily on two way WebSockets communication between the user interface and the CritterWatch server, and we’re utilizing Wolverine’s SignalR messaging transport (which was purposefully built for CritterWatch in the first place) to get that done. In the CritterWatch codebase, we have this little bit of Wolverine configuration:

    public static void AddCritterWatchServices(this WolverineOptions opts, NpgsqlDataSource postgresSource)
    {
        // Much more of course...
        opts.Services.AddWolverineHttp();
        
        opts.UseSignalR();
        
        // The publishing rule to route any message type that implements
        // a marker interface to the connected SignalR Hub
        opts.Publish(x =>
        {
            x.MessagesImplementing<ICritterStackWebSocketMessage>();
            x.ToSignalR();
        });


        // Really need this so we can handle messages in order for 
        // a particular service
        opts.MessagePartitioning.UseInferredMessageGrouping();
        opts.Policies.AllListeners(x => x.PartitionProcessingByGroupId(PartitionSlots.Five));
    }

And at the bottom of the ASP.NET Core application hosting CritterWatch, we’ll have this to configure the request pipeline:

builder.Services.AddWolverineHttp();
var app = builder.Build();
// Little bit more in the real code of course...
app.MapWolverineSignalRHub("/api/messages");
return await app.RunJasperFxCommands(args);

As you can infer from the Wolverine publishing rule above, we’re using a marker interface to let Wolverine “know” what messages should always be sent to SignalR:

/// <summary>
/// Marker interface for all messages that are sent to the CritterWatch web client
/// via web sockets
/// </summary>
public interface ICritterStackWebSocketMessage : ICritterWatchMessage, WebSocketMessage;

We also use that marker interface in a homegrown command line integration that generates TypeScript versions of all those messages with NJsonSchema, as well as the message types that go from the user interface to the CritterWatch server. Wolverine’s SignalR integration assumes that all messages sent to or received from SignalR are wrapped in a CloudEvents compliant JSON wrapper, but the only required members are type, which identifies what type of message it is, and data, which holds the actual message body as JSON. To make this easier, when we generate the TypeScript code we also insert a little method like this that we can use to identify the message type sent from the client to the Wolverine powered back end:

export class CompactStreamResult implements WebsocketMessage {
    serviceName!: string;
    streamId!: string;
    success!: boolean;
    error!: string | undefined;
    queryId!: string | undefined;

    // THIS method is injected by our custom codegen
    // and helps us communicate with the server as
    // this matches Wolverine's internal identification of
    // this message
    get messageTypeName(): string {
        return "compact_stream_result";
    }

    // other stuff...
    init(_data?: any) {
        if (_data) {
            this.serviceName = _data["serviceName"];
            this.streamId = _data["streamId"];
            this.success = _data["success"];
            this.error = _data["error"];
            this.queryId = _data["queryId"];
        }
    }

    static fromJS(data: any): CompactStreamResult {
        data = typeof data === 'object' ? data : {};
        let result = new CompactStreamResult();
        result.init(data);
        return result;
    }

    toJSON(data?: any) {
        data = typeof data === 'object' ? data : {};
        data["serviceName"] = this.serviceName;
        data["streamId"] = this.streamId;
        data["success"] = this.success;
        data["error"] = this.error;
        data["queryId"] = this.queryId;
        return data;
    }
}

Most of the code above is generated by NJsonSchema, but our custom codegen inserts the get messageTypeName() method, which we use in the client side code below to wrap up messages to send back up to our server:

  async function sendMessage(msg: WebsocketMessage) {
    if (conn.state === HubConnectionState.Connected) {
      const payload = 'toJSON' in msg ? (msg as any).toJSON() : msg
      const cloudEvent = JSON.stringify({
        id: crypto.randomUUID(),
        specversion: '1.0',
        type: msg.messageTypeName,
        source: 'Client',
        datacontenttype: 'application/json; charset=utf-8',
        time: new Date().toISOString(),
        data: payload,
      })
      await conn.invoke('ReceiveMessage', cloudEvent)
    }
  }

In the reverse direction, we receive the raw message from a connected WebSocket with the SignalR client, interrogate the expected CloudEvents wrapper, figure out what the message type is from there, deserialize the raw JSON to the right TypeScript type, and generally just relay that to a Pinia store where all the normal Vue.js + Pinia reactive user interface magic happens.

export function relayToStore(data: any) {
    const servicesStore = useServicesStore();
    const dlqStore = useDlqStore();
    const metricsStore = useMetricsStore();
    const projectionsStore = useProjectionsStore();
    const durabilityStore = useDurabilityStore();
    const eventsStore = useEventsStore();
    const scheduledMessagesStore = useScheduledMessagesStore();

    const envelope = typeof data === 'string' ? JSON.parse(data) : data;

    switch (envelope.type) {
        case "dead_letter_details":
            dlqStore.handleDeadLetterDetails(DeadLetterDetails.fromJS(envelope.data));
            break;
        case "dead_letter_queue_summary_results":
            dlqStore.handleDeadLetterQueueSummaryResults(DeadLetterQueueSummaryResults.fromJS(envelope.data));
            break;
        case "all_service_summaries": {
            const allSummaries = AllServiceSummaries.fromJS(envelope.data);
            servicesStore.handleAllServiceSummaries(allSummaries);
            if (allSummaries.persistenceCounts) {
                for (const pc of allSummaries.persistenceCounts) {
                    durabilityStore.handlePersistenceCountsChanged(pc);
                }
            }
            if (allSummaries.metricsRollups) {
                metricsStore.handleAllMetricsRollups(allSummaries.metricsRollups);
            }
            break;
        }
        case "summary_updated":
            servicesStore.handleSummaryUpdated(SummaryUpdated.fromJS(envelope.data));
            break;
        case "agent_and_node_state_changed":
            servicesStore.handleAgentAndNodeStateChanged(AgentAndNodeStateChanged.fromJS(envelope.data));
            break;
        case "service_summary_changed":
            servicesStore.handleServiceSummaryChanged(ServiceSummaryChanged.fromJS(envelope.data));
            break;
        case "metrics_rollup":
            metricsStore.handleMetricsRollup(MetricsRollup.fromJS(envelope.data));
            break;
        case "all_metrics_rollups":
            metricsStore.handleAllMetricsRollups(AllMetricsRollups.fromJS(envelope.data));
            break;
        case "shard_states_changed":
            projectionsStore.handleShardStatesChanged(ShardStatesChanged.fromJS(envelope.data));
            break;
        case "persistence_counts_changed":
            durabilityStore.handlePersistenceCountsChanged(PersistenceCountsChanged.fromJS(envelope.data));
            break;
        case "stream_details":
            eventsStore.handleStreamDetails(StreamDetails.fromJS(envelope.data));
            break;
        case "event_query_results":
            eventsStore.handleEventQueryResults(EventQueryResults.fromJS(envelope.data));
            break;
        case "compact_stream_result":
            eventsStore.handleCompactStreamResult(CompactStreamResult.fromJS(envelope.data));
            break;
        // *CASE ABOVE* -- do not remove this comment for the codegen please!
    }
}

And that’s really it. I omitted some of our custom codegen code (because it’s hokey), but it doesn’t do much more than find the message types in the .NET code that are marked as going to or coming from the Vue.js client and writes them as matching TypeScript types.

But wait, Marten gets into the act too!

With the Marten + Wolverine integration configured through this:

        opts.Services.AddMarten(m =>
        {
            // Other stuff...
            m.Projections.Add<ServiceSummaryProjection>(ProjectionLifecycle.Async);
        }).IntegrateWithWolverine(w =>
        {
            w.UseWolverineManagedEventSubscriptionDistribution = true;
        });

Marten can also get into the SignalR act through its support for “Side Effects” in projections. As the ServiceSummary projection is updated with new events in CritterWatch, we can raise messages reflecting the new changes in state to notify our clients, with code like this from a SingleStreamProjection:

    public override ValueTask RaiseSideEffects(IDocumentOperations operations, IEventSlice<ServiceSummary> slice)
    {
        var hasShardStates = slice.Events().Any(x => x.Data is ShardStatesUpdated);

        if (hasShardStates)
        {
            var shardEvent = slice.Events().Last(x => x.Data is ShardStatesUpdated).Data as ShardStatesUpdated;
            slice.PublishMessage(new ShardStatesChanged(slice.Snapshot.Id, shardEvent!.States));
        }

        if (slice.Events().All(x => x.Data is IImpactsAgentOrNodes || x.Data is ShardStatesUpdated))
        {
            if (!hasShardStates)
            {
                slice.PublishMessage(new AgentAndNodeStateChanged(slice.Snapshot.Id, slice.Snapshot.Nodes, slice.Snapshot.Agents));
            }
        }
        else
        {
            slice.PublishMessage(new ServiceSummaryChanged(slice.Snapshot));
        }

        return new ValueTask();
    }

The Marten projection itself knows absolutely nothing about where those messages will go or how, but Wolverine kicks in to help its Critter Stack sibling and deals with all the message delivery. The message types above all implement the ICritterStackWebSocketMessage interface, so they will get routed by Wolverine to SignalR. To rewind, the workflow here is:

  1. CritterWatch constantly receives messages from Wolverine applications with changes in state like new messaging endpoints being used, agents being reassigned, or nodes being started or shut down
  2. CritterWatch saves any changes in state as events to Marten (or later to SQL Server backed Polecat)
  3. The Marten async daemon processes those events to update CritterWatch’s ServiceSummary projection
  4. As pages of events are applied to individual services, Marten calls that RaiseSideEffects() method to relay some state changes to Wolverine, which will..
  5. Send those messages to SignalR based on Wolverine’s routing rules and on to the client side code which…
  6. Relays the incoming messages to the proper Pinia store

Summary

I won’t say that using Wolverine for processing and sending messages via SignalR is justified in every application, but it more than pays off if you have a highly interactive application that sends a large number of messages between the user interface and the server.

Sometime last week I said online that no project is truly a failure if you learned something valuable from that effort that could help a later project succeed. When I wrote that I was absolutely thinking about the work shown above and relating that to a failed effort of mine called Storyteller (Redux + early React.js + roll your own WebSockets support on the server) that went nowhere in the end, but taught me a lot of valuable lessons about using WebSockets in a highly interactive application that has directly informed my work on CritterWatch.

Big Critter Stack Releases

The Critter Stack had a big day today with releases for both Marten and Wolverine.

First up, we have Marten 8.22 that included:

  • Lots of bug fixes, including several old LINQ related bugs and issues related to full text search that finally got addressed
  • Some improvements for the newer Composite Projections feature as users start to use it in real project work. Hat tip to Anne Erdtsieck on this one (and a JasperFx client needing an addition to it as well)
  • Some optimizations, including a potentially big one as Marten can now use a source generator to build some of the projection code that before depended on not perfectly efficient Expression compilation. This will impact “self aggregating” snapshot projections that use the Apply / Create / ShouldDelete conventions

Next, a giant Wolverine 5.16 release that brings:

  • Many, many bug fixes
  • Several small feature requests for our HTTP support
  • Improved resiliency for Kafka especially but also for any usage of external message brokers with Wolverine. See Sending Error Handling. Plus better error handling for durable listener endpoints when the transactional inbox database is unavailable
  • Wait, what? Wolverine has experimental support for CosmosDb as a transactional inbox/outbox and all of Wolverine’s declarative persistence helpers?
  • The ability to mark some message handlers or HTTP endpoints as opting out of automatic transactional middleware (for a JasperFx client). See this, but it applies to all persistence options.
  • Modular monolith usage improvements for a pair of JasperFx clients who are helping us stretch Wolverine to yet more use cases.
  • More to come on this, but we’ve recently slipped in Sqlite and Oracle support for Wolverine

Building a Greenfield System with the Critter Stack

JasperFx Software works hand in hand with our clients to improve their outcomes on software projects using the “Critter Stack” (Marten and Wolverine). Based on our engagements with client projects as well as the greater Critter Stack user base, we’ve built up quite a few optional usages and settings in the two frameworks to solve specific technical challenges.

The unfortunate reality of managing a long lived application framework such as Wolverine or a complicated library like Marten is the need both to continuously improve the tools and to try really hard not to introduce regression errors for our clients when they upgrade. To that end, we’ve had to make several potentially helpful features “opt in,” meaning that users have to explicitly turn on feature-flag type settings for these features. A common cause of this is any change that introduces database schema changes, as we try really hard to only do that in major version releases (Wolverine 5.0 added some new tables to SQL Server or PostgreSQL storage, for example).

And yes, we’ve still introduced regression bugs in Marten or Wolverine far more times than I’d like, even with trying to be careful. In the end, I think the only guaranteed way to constantly and safely improve tools like the Critter Stack is to just be responsive to whatever problems slip through your quality gates and try to fix those problems quickly to regain trust.

With all that being said, let’s pretend we’re starting a greenfield project with the Critter Stack and we want to build in the best performing system possible with some added options for improved resiliency as well. To jump to the end state, this is what I’m proposing for a new optimized greenfield setup for users:

 var builder = Host.CreateApplicationBuilder();

builder.Services.AddMarten(m =>
{
    // Much more coming...
    m.Connection(builder.Configuration.GetConnectionString("marten"));

    // 50% improvement in throughput, less "event skipping"
    m.Events.AppendMode = EventAppendMode.Quick;
    // or if you care about the timestamps -->
    m.Events.AppendMode = EventAppendMode.QuickWithServerTimestamps;

    // 100% do this, but be aggressive about taking advantage of it
    m.Events.UseArchivedStreamPartitioning = true;

    // These cause some database changes, so can't be defaults,
    // but these might help "heal" systems that have problems
    // later
    m.Events.EnableAdvancedAsyncTracking = true;

    // Enables you to mark events as just plain bad so they are skipped
    // in projections from here on out.
    m.Events.EnableEventSkippingInProjectionsOrSubscriptions = true;

    // If you do this, just now you pretty well have to use FetchForWriting
    // in your commands
    // But also, you should use FetchForWriting() for command handlers 
    // any way
    // This will optimize the usage of Inline projections, but will force
    // you to treat your aggregate projection "write models" as being 
    // immutable in your command handler code
    // You'll want to use the "Decider Pattern" / "Aggregate Handler Workflow"
    // style for your commands rather than a self-mutating "AggregateRoot"
    m.Events.UseIdentityMapForAggregates = true;

    // Future proofing a bit. Will help with some future optimizations
    // for rebuild optimizations
    m.Events.UseMandatoryStreamTypeDeclaration = true;

    // This is just annoying anyway
    m.DisableNpgsqlLogging = true;
})
// This will remove some runtime overhead from Marten
.UseLightweightSessions()

.IntegrateWithWolverine(x =>
{
    // Let Wolverine do the load distribution better than
    // what Marten by itself can do
    x.UseWolverineManagedEventSubscriptionDistribution = true;
});

builder.Services.AddWolverine(opts =>
{
    // This *should* have some performance improvements, but would
    // require downtime to enable in existing systems
    opts.Durability.EnableInboxPartitioning = true;

    // Extra resiliency for unexpected problems, but can't be
    // defaults because this causes database changes
    opts.Durability.InboxStaleTime = 10.Minutes();
    opts.Durability.OutboxStaleTime = 10.Minutes();

    // Just annoying
    opts.EnableAutomaticFailureAcks = false;

    // Relatively new behavior that will store "unknown" messages
    // in the dead letter queue for possible recovery later
    opts.UnknownMessageBehavior = UnknownMessageBehavior.DeadLetterQueue;
});

using var host = builder.Build();

return await host.RunJasperFxCommands(args);

Now, let’s talk more about some of these settings…

Lightweight Sessions with Marten

The first option we’re going to explicitly add is to use “lightweight” sessions in Marten:

var builder = Host.CreateApplicationBuilder();

builder.Services.AddMarten(m =>
{
    // Elided configuration...
})
// This will remove some runtime overhead from Marten
.UseLightweightSessions()

By default, Marten will use a heavier version of IDocumentSession that incorporates an identity map internally to track documents (entities) already loaded by that session. When you request to load an entity by its identity, Marten’s session will happily check whether it has already loaded that entity and give you back the same object without making another database call.

The identity map usage is mostly helpful when you have unclear or deeply nested call stacks where different pieces of code might try to load the same data as part of the same HTTP request or command handling. If you follow what we consider “Critter Stack” best practices, especially for Wolverine usage, you’ll know that we very strongly recommend against deep call stacks and excessive layering.

Moreover, I would argue that you should never need the identity map behavior if you’re building a system with an idiomatic Critter Stack approach, so the default session type is actually harmful in that it adds extra runtime overhead. The “lightweight” sessions run leaner by completely eliminating all of the dictionary storage and lookups.
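To make the difference concrete, here’s a quick sketch contrasting the two session types. This assumes a hypothetical `User` document type and an existing `DocumentStore`; the session factory methods are Marten’s own:

```csharp
// A sketch contrasting identity-mapped and lightweight sessions.
// "User" and "id" are hypothetical stand-ins for your own document type.
await using var identity = store.IdentitySession();
var a = await identity.LoadAsync<User>(id);
var b = await identity.LoadAsync<User>(id); // served from the identity map, no second database call
// a and b are the *same* object instance here

await using var lightweight = store.LightweightSession();
var c = await lightweight.LoadAsync<User>(id);
var d = await lightweight.LoadAsync<User>(id); // hits the database again
// c and d are separate instances, and no per-session dictionary is maintained
```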

Why, you ask, is the identity map behavior the default?

  1. We were originally designing Marten as a near drop-in replacement for RavenDb in a big system, so we had to mimic that behavior right off the bat to be able to make the replacement in a timely fashion.
  2. If we changed the default behavior, it could easily break code in existing systems that upgrade, in ways that are very hard to predict and unfortunately hard to diagnose. And of course, this is most likely a problem in the exact kind of codebases that are hard to reason about. How do I know this, and why am I so very certain, you ask? Scar tissue.

Wolverine Idioms for MediatR Users

The Wolverine community fields a lot of questions from people who are moving to Wolverine from their previous MediatR usage. A quite natural response is to try to use Wolverine as a pure drop-in replacement for MediatR and even to keep the existing MediatR idioms they’re already used to. However, Wolverine comes from a different philosophy than MediatR and most of the other “mediator” tools it has inspired, and using Wolverine with its own idioms can lead to much simpler code and more efficient execution. Inspired by a conversation I had online today, let’s jump into an example that I think shows quite a bit of contrast between the tools.

We’ve tried to lay out some of the differences between the tools in our Wolverine for MediatR Users guide, including the section this post is taken from.

Here’s an example of MediatR usage I borrowed from this blog post that shows the usage of MediatR within a shopping cart subsystem:

public class AddToCartRequest : IRequest<Result>
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

public class AddToCartHandler : IRequestHandler<AddToCartRequest, Result>
{
    private readonly ICartService _cartService;

    public AddToCartHandler(ICartService cartService)
    {
        _cartService = cartService;
    }

    public async Task<Result> Handle(AddToCartRequest request, CancellationToken cancellationToken)
    {
        // Logic to add the product to the cart using the cart service
        bool addToCartResult = await _cartService.AddToCart(request.ProductId, request.Quantity);
        bool isAddToCartSuccessful = addToCartResult; // Check if adding the product to the cart was successful.
        return Result.SuccessIf(isAddToCartSuccessful, "Failed to add the product to the cart."); // Return failure if adding to cart fails.
    }
}

public class CartController : ControllerBase
{
    private readonly IMediator _mediator;

    public CartController(IMediator mediator)
    {
        _mediator = mediator;
    }

    [HttpPost]
    public async Task<IActionResult> AddToCart([FromBody] AddToCartRequest request)
    {
        var result = await _mediator.Send(request);
        if (result.IsSuccess)
        {
            return Ok("Product added to the cart successfully.");
        }
        else
        {
            return BadRequest(result.ErrorMessage);
        }
    }
}

Note the usage of the custom Result type returned from the message handler. Folks using MediatR love these custom Result types for passing information between logical layers because they avoid throwing exceptions and communicate failure cases more clearly.

See Andrew Lock on Working with the result pattern for more information about the Result pattern.
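The Result type in the sample above comes from a library (FluentResults and its kin all look roughly similar). Just so the sample reads as self-contained, a minimal hand-rolled equivalent might look something like this sketch (this is my own hypothetical version, not the library’s actual type):

```csharp
// A hypothetical, minimal version of the Result type used in the sample above
public class Result
{
    public bool IsSuccess { get; private init; }
    public string ErrorMessage { get; private init; } = string.Empty;

    // Succeeds when the condition holds, otherwise carries the error message
    public static Result SuccessIf(bool condition, string errorMessage) =>
        condition ? new Result { IsSuccess = true } : new Result { ErrorMessage = errorMessage };
}
```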

Wolverine is all about reducing code ceremony and we always strive to write application code as synchronous pure functions whenever possible, so let’s just write the exact same functionality as above using Wolverine idioms to shrink down the code:

public static class AddToCartRequestEndpoint
{
    // Remember, we can do validation in middleware, or
    // even do a custom Validate() : ProblemDetails method
    // to act as a filter so the main method is the happy path
    [WolverinePost("/api/cart/add"), EmptyResponse]
    public static IStorageAction<Cart> Post(
        AddToCartRequest request,

        // This usage will return a 400 status code if the Cart
        // cannot be found
        [Entity(OnMissing = OnMissing.ProblemDetailsWith400)] Cart cart)
    {
        return cart.TryAddRequest(request) ? Storage.Update(cart) : Storage.Nothing(cart);
    }
}

There’s a lot going on above, so let’s dive into some of the details:

I used Wolverine.HTTP to write the HTTP endpoint so we only have one piece of code for our “vertical slice” instead of having both the Controller method and a matching message handler for the same logical command. Wolverine.HTTP embraces our Railway Programming model and direct support for the ProblemDetails specification as a means of stopping the HTTP request early: validation pre-conditions can be checked by middleware so that the main endpoint method is really the “happy path”.

The code above is using Wolverine’s “declarative data access” helpers you see in the [Entity] usage. We realized early on that a lot of message handlers or HTTP endpoints need to work on a single domain entity or a handful of entities loaded by identity values riding on either command messages, HTTP requests, or HTTP routes. At runtime, if the Cart isn’t found by loading it from your configured application persistence (which could be EF Core, Marten, or RavenDb at this time), the whole HTTP request would stop with status code 400 and a message communicated through ProblemDetails that the requested Cart cannot be found.

The key point I’m trying to prove is that idiomatic Wolverine results in potentially less repetitive code, less code ceremony, and less layering than MediatR idioms. Sure, it’s going to take a bit to get used to Wolverine idioms, but the potential payoff is code that’s easier to reason about and much easier to unit test — especially if you’ll buy into our A-Frame Architecture approach for organizing code within your slices.
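For example, because the endpoint above is a synchronous pure function, a unit test needs no host, no container, and no mocks. This sketch assumes xUnit plus the Cart and AddToCartRequest types from the earlier samples:

```csharp
public class AddToCartRequestEndpointTests
{
    [Fact]
    public void adds_new_item_and_asks_wolverine_to_update_the_cart()
    {
        // Cart and AddToCartRequest are the types from the samples above
        var cart = new Cart();
        var request = new AddToCartRequest { ProductId = 1, Quantity = 2 };

        // Just call the method! No IMediator, no database, no HttpContext
        var action = AddToCartRequestEndpoint.Post(request, cart);

        // Wolverine itself would apply this storage action after the handler runs
        Assert.IsType<Update<Cart>>(action);
    }
}
```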

Validation Middleware

As another example just to show how Wolverine’s runtime is different than MediatR’s, let’s consider the very common case of using Fluent Validation (or now DataAnnotations too!) middleware in front of message handlers or HTTP requests. With MediatR, you might use an IPipelineBehavior<T> implementation like this that will wrap all requests:

    public class ValidationBehaviour<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse> where TRequest : IRequest<TResponse>
    {
        private readonly IEnumerable<IValidator<TRequest>> _validators;
        public ValidationBehaviour(IEnumerable<IValidator<TRequest>> validators)
        {
            _validators = validators;
        }
      
        public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next)
        {
            if (_validators.Any())
            {
                var context = new ValidationContext<TRequest>(request);
                var validationResults = await Task.WhenAll(_validators.Select(v => v.ValidateAsync(context, cancellationToken)));
                var failures = validationResults.SelectMany(r => r.Errors).Where(f => f != null).ToList();
                if (failures.Count != 0)
                    throw new ValidationException(failures);
            }
          
            return await next();
        }
    }

I’ve seen plenty of alternatives out there with slightly different implementations. In some cases folks will use service location to probe the application’s IoC container for any possible IValidator<T> implementations for the current request. In all cases though, the implementations use runtime logic on every possible request to check whether there is any validation logic at all. The Wolverine version of Fluent Validation middleware does things a bit differently, with less runtime overhead, and that will also result in cleaner exception stack traces when things go wrong. Don’t laugh, we really did design Wolverine quite purposely to avoid the really nasty kind of exception stack traces you get from many other middleware or “behavior” heavy frameworks, like Wolverine’s predecessor tool FubuMVC did 😦

Let’s say that you have a Wolverine.HTTP endpoint like so:

public record CreateCustomer
(
    string FirstName,
    string LastName,
    string PostalCode
)
{
    public class CreateCustomerValidator : AbstractValidator<CreateCustomer>
    {
        public CreateCustomerValidator()
        {
            RuleFor(x => x.FirstName).NotNull();
            RuleFor(x => x.LastName).NotNull();
            RuleFor(x => x.PostalCode).NotNull();
        }
    }
}

public static class CreateCustomerEndpoint
{
    [WolverinePost("/validate/customer")]
    public static string Post(CreateCustomer customer)
    {
        return "Got a new customer";
    }

    [WolverinePost("/validate/customer2")]
    public static string Post2([FromQuery] CreateCustomer customer)
    {
        return "Got a new customer";
    }
}
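Wolverine discovers Fluent Validation validators through the application’s IoC container, so the validator type has to be registered, either by hand as sketched below or through an assembly-scanning helper (the explicit registration here is my own illustration):

```csharp
// Hypothetical manual registration; a Singleton lifetime lets Wolverine
// inline the validator into its generated code as a constructor dependency
builder.Services.AddSingleton<IValidator<CreateCustomer>, CreateCustomer.CreateCustomerValidator>();
```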

In the application bootstrapping, I’ve added this option:

app.MapWolverineEndpoints(opts =>
{
    // more configuration for HTTP...

    // Opting into the Fluent Validation middleware from
    // Wolverine.Http.FluentValidation
    opts.UseFluentValidationProblemDetailMiddleware();
});

Just like with MediatR, you would need to register the Fluent Validation validator types in your IoC container as part of application bootstrapping. Now, here’s how Wolverine’s model is very different from MediatR’s pipeline behaviors. While MediatR applies that ValidationBehaviour to each and every message handler in your application whether or not that message type actually has any registered validators, Wolverine is able to peek into the IoC configuration and “know” whether there are registered validators for any given message type. If there are any registered validators, Wolverine will utilize them in the code it generates to execute the HTTP endpoint method shown above for creating a customer. If there is only one validator, and that validator is registered with Singleton scope in the IoC container, Wolverine generates this code:

        public class POST_validate_customer : Wolverine.Http.HttpHandler
        {
            private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
            private readonly Wolverine.Http.FluentValidation.IProblemDetailSource<WolverineWebApi.Validation.CreateCustomer> _problemDetailSource;
            private readonly FluentValidation.IValidator<WolverineWebApi.Validation.CreateCustomer> _validator;
    
            public POST_validate_customer(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Http.FluentValidation.IProblemDetailSource<WolverineWebApi.Validation.CreateCustomer> problemDetailSource, FluentValidation.IValidator<WolverineWebApi.Validation.CreateCustomer> validator) : base(wolverineHttpOptions)
            {
                _wolverineHttpOptions = wolverineHttpOptions;
                _problemDetailSource = problemDetailSource;
                _validator = validator;
            }
    
    
    
            public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
            {
                // Reading the request body via JSON deserialization
                var (customer, jsonContinue) = await ReadJsonAsync<WolverineWebApi.Validation.CreateCustomer>(httpContext);
                if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;
                
                // Execute FluentValidation validators
                var result1 = await Wolverine.Http.FluentValidation.Internals.FluentValidationHttpExecutor.ExecuteOne<WolverineWebApi.Validation.CreateCustomer>(_validator, _problemDetailSource, customer).ConfigureAwait(false);
    
                // Evaluate whether or not the execution should be stopped based on the IResult value
                if (result1 != null && !(result1 is Wolverine.Http.WolverineContinue))
                {
                    await result1.ExecuteAsync(httpContext).ConfigureAwait(false);
                    return;
                }
    
    
                
                // The actual HTTP request handler execution
                var result_of_Post = WolverineWebApi.Validation.ValidatedEndpoint.Post(customer);
    
                await WriteString(httpContext, result_of_Post);
            }
    
        }

I should note that Wolverine’s Fluent Validation middleware will not generate any code for an HTTP endpoint where there are no known Fluent Validation validators for the endpoint’s request model. Moreover, Wolverine can even generate slightly different code for multiple validators versus a single validator as a way of wringing out a little more efficiency in the common case of having only a single validator registered for the request type.

The point here is that Wolverine tries to generate the most efficient code possible based on what it can glean from the IoC container registrations and the signature of the HTTP endpoint or message handler methods, while the MediatR model effectively has to use wrappers and conditional logic at runtime.

Marten’s Aggregation Projection Subsystem

Marten has very rich support for projecting events into read, write, or query models. While there are other capabilities as well, the most common usage is probably to aggregate related events into a singular view. Marten projections can be executed Live, meaning that Marten builds the view on the fly by loading the target events into memory. Projections can also be executed Inline, meaning that the projected views are persisted as part of the same transaction that captures the events that apply to that projection. For this post though, I’m mostly talking about projections running asynchronously in the background as events are captured into the database (think eventual consistency).

Aggregate Projections in Marten combine some sort of grouping of events and process them to create a single aggregated document representing the state of those events. These projections come in two flavors:

Single Stream Projections create a rolled up view of all or a segment of the events within a single event stream. These projections are built either by using the SingleStreamProjection<TDoc, TId> base type or by creating a “self aggregating” Snapshot with conventional Create/Apply/ShouldDelete methods that mutate or evolve the snapshot based on new events.

Multi Stream Projections create a rolled up view of a user-defined grouping of events across streams. These projections are built by sub-classing the MultiStreamProjection<TDoc, TId> class and are further described in Multi-Stream Projections. An example of a multi-stream projection might be a “query model” within an accounting system of some sort that rolls up the value of all unpaid invoices by active client.

You can also use a MultiStreamProjection to create views that are a segment of a single stream over time or version. Imagine that you have a system that models the activity of a bank account with event sourcing. You could use a MultiStreamProjection to create a view that summarizes the activity of a single bank account within a calendar month.

The ability to use explicit code to define projections was hugely improved in the Marten 8.0 release.

Within your aggregation projection, you can express the logic for how Marten combines events into a view through either conventional methods (original, old school Marten) or through completely explicit code.

Within an aggregation, you also have a number of more advanced options available.

Simple Example

The most common usage is to create a “write model” that projects the current state for a single stream, so on that note, let’s jump into a simple example.

I’m huge into epic fantasy book series, hence the silly original problem domain in the very oldest code samples. Hilariously, Marten has fielded and accepted pull requests that corrected our modeling of the timeline of the Lord of the Rings in sample code.

Martens on a Quest

Let’s say that we’re building a system to track the progress of a traveling party on a quest within an epic fantasy series like “The Lord of the Rings” or the “Wheel of Time” and we’re using event sourcing to capture state changes when the “quest party” adds or subtracts members. We might very well need a “write model” for the current state of the quest for our command handlers like this one:

public sealed record QuestParty(Guid Id, List<string> Members)
{
    // These methods take in events and update the QuestParty
    public static QuestParty Create(QuestStarted started) => new(started.QuestId, []);

    public static QuestParty Apply(MembersJoined joined, QuestParty party) =>
        party with
        {
            Members = party.Members.Union(joined.Members).ToList()
        };

    public static QuestParty Apply(MembersDeparted departed, QuestParty party) =>
        party with
        {
            Members = party.Members.Where(x => !departed.Members.Contains(x)).ToList()
        };

    public static QuestParty Apply(MembersEscaped escaped, QuestParty party) =>
        party with
        {
            Members = party.Members.Where(x => !escaped.Members.Contains(x)).ToList()
        };
}
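The same projection could also be written in the completely explicit style mentioned above. Here’s a sketch using Marten’s SingleStreamProjection base type and the same event types, with the conventional Apply methods replaced by one switch expression:

```csharp
// A sketch of the explicit-code style for the same QuestParty aggregation
public class QuestPartyProjection : SingleStreamProjection<QuestParty, Guid>
{
    public override QuestParty Evolve(QuestParty snapshot, Guid id, IEvent e)
    {
        // The switch replaces the Create/Apply conventions; snapshot is null
        // until the first event for a stream has been applied
        return e.Data switch
        {
            QuestStarted => new QuestParty(id, []),
            MembersJoined joined => snapshot with { Members = snapshot.Members.Union(joined.Members).ToList() },
            MembersDeparted departed => snapshot with { Members = snapshot.Members.Where(x => !departed.Members.Contains(x)).ToList() },
            MembersEscaped escaped => snapshot with { Members = snapshot.Members.Where(x => !escaped.Members.Contains(x)).ToList() },
            _ => snapshot
        };
    }
}
```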

For a little more context, the QuestParty above might be consumed in a command handler like this:

public record AddMembers(Guid Id, int Day, string Location, string[] Members);

public static class AddMembersHandler
{
    public static async Task HandleAsync(AddMembers command, IDocumentSession session)
    {
        // Fetch the current state of the quest
        var quest = await session.Events.FetchForWriting<QuestParty>(command.Id);
        if (quest.Aggregate == null)
        {
            // Bad quest id, do nothing in this sample case
            return;
        }

        var newMembers = command.Members.Where(x => !quest.Aggregate.Members.Contains(x)).ToArray();
        if (!newMembers.Any())
        {
            return;
        }

        quest.AppendOne(new MembersJoined(command.Id, command.Day, command.Location, newMembers));
        await session.SaveChangesAsync();
    }
}

How Aggregation Works

Just to understand a little bit more about the capabilities of Marten’s aggregation projections, let’s look at the diagram below that tries to visualize the runtime workflow of aggregation projections inside of the Async Daemon background process:

How Aggregation Works

  1. The Daemon is constantly pushing a range of events at a time to an aggregation projection. For example, Events 1,000 to 2,000 by sequence number.
  2. The aggregation “slices” the incoming range of events into a group of EventSlice objects that establish a relationship between the identity of an aggregated document and the events that should be applied during this batch of updates for that identity. To be more concrete, a single stream projection for QuestParty would create an EventSlice for each quest id it sees in the current range of events. Multi-stream projections will have some kind of custom “slicing” or grouping. For example, maybe in our Quest tracking system we have a multi-stream projection that tries to track how many monsters of each type are defeated. That projection might “slice” by looking for all MonsterDefeated events across all streams and group incoming events by the type of monster. The “slicing” logic is automatic for single stream projections, but requires explicit configuration or explicitly written logic for multi stream projections.
  3. Once the projection has a known list of all the aggregate documents that will be updated by the current range of events, the projection will fetch each persisted document, first from any active aggregate cache in memory, then by making a single batched request to the Marten document storage for any missing documents and adding these to any active cache (see Optimizing Performance for more information about the potential caching).
  4. The projection will execute any event enrichment against the now known group of EventSlice objects. This process gives you a hook to efficiently “enrich” the raw event data with extra data lookups from Marten document storage or even other sources.
  5. Most of the work as a developer is in the application or “Evolve” step of the diagram above. After the “slicing”, the aggregation has turned the range of raw event data into EventSlice objects that contain the current snapshot of a projected document by its identity (if one exists), the identity itself, and the events from within that original range that should be applied on top of the current snapshot to “evolve” it to reflect those events. This can be coded either with the conventional Apply/Create/ShouldDelete methods or using explicit code, which almost inevitably means a switch statement. Using the QuestParty example again, the aggregation projection would get an EventSlice that contains the identity of an active quest, the snapshot of the current QuestParty document that is persisted by Marten, and the new MembersJoined et al events that should be applied to the existing QuestParty object to derive the new version of QuestParty.
  6. Just before Marten persists all the changes from the application / evolve step, you have the RaiseSideEffects() hook to potentially raise “side effects” like appending additional events based on the now updated state of the projected aggregates or publishing the new state of an aggregate through messaging (Wolverine has first class support for Marten projection side effects through its Marten integration into the full “Critter Stack”).
  7. For the current event range and event slices, Marten will send all aggregate document updates or deletions, new event appending operations, and even outboxed, outgoing messages sent via side effects (if you’re using the Wolverine integration) in batches to the underlying PostgreSQL database. I’m calling this out because we’ve constantly found in Marten development that command batching to PostgreSQL is a huge factor in system performance, and the async daemon has been designed to minimize the number of network round trips between your application and PostgreSQL at every turn.
  8. Assuming the transaction succeeds for the current event range and the operation batch in the previous step, Marten will call “after commit” observers. This notification, for example, will release any messages raised as a side effect and actually send those messages via whatever is doing the actual publishing (probably Wolverine).
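To make the “slicing” in step 2 concrete, the hypothetical monster-tallying projection could group MonsterDefeated events by the monster’s type. Both MonsterDefeated and MonsterTally here are assumed illustration types, not anything from the Marten samples:

```csharp
// Assumed illustration types for a cross-stream tally
public record MonsterDefeated(string MonsterType);

public class MonsterTally
{
    public string Id { get; set; } // the monster type is the document identity
    public int Defeated { get; set; }
}

// Sketch of multi-stream grouping: events from any stream are sliced by monster type
public class MonsterTallyProjection : MultiStreamProjection<MonsterTally, string>
{
    public MonsterTallyProjection()
    {
        // Slice MonsterDefeated events across all streams by their monster type
        Identity<MonsterDefeated>(x => x.MonsterType);
    }

    public void Apply(MonsterDefeated defeated, MonsterTally tally) => tally.Defeated++;
}
```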

Marten happily supports immutable data types for the aggregate documents produced by projections, but also happily supports mutable types as well. The usage of the application code is a little different though.
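As a quick illustration of that difference, a mutable version of the QuestParty write model could use void Apply methods that modify the object in place, rather than the record-returning static methods shown earlier. This is just a sketch of the mutable style, reusing the event types from the earlier sample:

```csharp
// Hypothetical mutable variant of the write model; the Apply methods
// mutate the same instance instead of returning a new record
public class MutableQuestParty
{
    public Guid Id { get; set; }
    public List<string> Members { get; set; } = [];

    public void Apply(MembersJoined joined) =>
        Members = Members.Union(joined.Members).ToList();

    public void Apply(MembersDeparted departed) =>
        Members = Members.Where(x => !departed.Members.Contains(x)).ToList();
}
```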

Starting with Marten 8.0, we’ve tried somewhat to conform to the terminology used by the Functional Event Sourcing Decider paper by Jeremie Chassaing. To that end, the API now refers to a “snapshot” that really just means a version of the projection and “evolve” as the step of applying new events to an existing “snapshot” to calculate a new “snapshot.”

Catching Up with Recent Wolverine Releases

Wolverine has had a very frequent release cadence the past couple of months as community contributions, requests from JasperFx Software clients, and yes, sigh, bug reports have flowed in. Right now I think I can justifiably claim that Wolverine is innovating much faster than any of the other comparable tools in the .NET ecosystem.

Some folks clearly don’t like that level of change of course, and I’ve always had to field some online criticism for our frequency of releases. I don’t think that pace continues forever of course.

I thought that now would be a good time to write a little bit about the new features and improvements just because so much of it happened over the holiday season, starting somewhat arbitrarily with the first of December and running up to now.

Inferred Message Grouping in Wolverine 5.5

A massively important new feature in Wolverine 5 was our “Partitioned Sequential Messaging” that seeks to effectively head off problems with concurrent message processing by segregating message processing by some kind of business entity identity. Long story short, this feature can almost completely eliminate issues with concurrent access to data without eliminating parallel processing across unrelated messages.

In Wolverine 5.5 we added the now obvious capability to let Wolverine automatically infer the message group id for messages handled by a Saga (the saga identity) or with the Aggregate Handler Workflow (the stream id of the primary event stream being altered in the handler):

// Telling Wolverine how to assign a GroupId to a message, that we'll use
// to predictably sort into "slots" in the processing
opts.MessagePartitioning

    // This tells Wolverine to use the Saga identity as the group id for any
    // message that impacts a Saga, or the stream id of any command that is
    // part of the "aggregate handler workflow" integration with Marten
    .UseInferredMessageGrouping()

    .PublishToPartitionedLocalMessaging("letters", 4, topology =>
    {
        topology.MessagesImplementing<ILetterMessage>();
        topology.MaxDegreeOfParallelism = PartitionSlots.Five;
        topology.ConfigureQueues(queue =>
        {
            queue.BufferedInMemory();
        });
    });

“Classic” .NET Domain Events with EF Core in Wolverine 5.6

Wolverine is attracting a lot of new users lately who might honestly have only become interested because of another tool’s recent licensing changes, and those users tend to come with a more typical .NET approach to application architecture than Wolverine’s idiomatic vertical slice architecture approach. These new users are also a lot more likely to be using EF Core than Marten, so we’ve had to invest more in EF Core integration.

Wolverine 5.6 brought the ability to cleanly and effectively connect a traditional .NET approach for “Domain Event” publishing through EF Core to Wolverine’s messaging.

I wrote about that at the time in “Classic” .NET Domain Events with Wolverine and EF Core.

Wolverine 5.7 Knocked Out Bugs

There weren’t many new features of note, but Wolverine 5.7, released less than a week after 5.6, had five contributors and knocked out a dozen issues. The open issue count in Wolverine crested in December in the low 70s and is down to the low 30s right now.

Client Requests in Wolverine 5.8

Wolverine 5.8 gave us some bug fixes, but also a couple of new features requested by JasperFx clients.

The Community Went Into High Gear with Wolverine 5.9

Wolverine 5.9 dropped the week before Christmas with contributions from 7 different people.

The highlights are:

  • Sandeep Desai has been absolutely on fire as a contributor to Wolverine and made the HTTP Messaging Transport finally usable in this release, with several other pull requests in later versions that further improved the feature. This enables Wolverine to use HTTP itself as a messaging transport. I’ve long wanted this feature as a prerequisite for CritterWatch.
  • Lodewijk Sioen added Wolverine middleware support for using Data Annotations with Wolverine.HTTP
  • The Rabbit MQ integration got more robust about reconnecting on errors

Wolverine 5.10 Kicked off 2026 with a Bang!

Wolverine 5.10 came out last week with contributions from eleven different folks. The highlights include plenty of bug fixes and contributions that built up over the holidays, plus several small requests from JasperFx clients, because that’s something we do to support our clients.

Wolverine 5.11 Adds More Idempotency Options

Wolverine 5.11 dropped this week with more bug fixes and new capabilities from five contributors. The big new feature was an improved option for enforcing message idempotency on non-transactional handlers, a request from a JasperFx support client.

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Durability.Mode = DurabilityMode.Solo;

        opts.Services.AddDbContextWithWolverineIntegration<CleanDbContext>(x =>
            x.UseSqlServer(Servers.SqlServerConnectionString));

        opts.Services.AddResourceSetupOnStartup(StartupAction.ResetState);

        opts.Policies.AutoApplyTransactions(IdempotencyStyle.Eager);

        opts.PersistMessagesWithSqlServer(Servers.SqlServerConnectionString, "idempotency");
        opts.UseEntityFrameworkCoreTransactions();

        // THIS RIGHT HERE
        opts.Policies.AutoApplyIdempotencyOnNonTransactionalHandlers();
    }).StartAsync();

That release also included several bug fixes and an effort from me to fill in some gaps in the documentation website. That release got us down to the lowest open issue count in years.

Summary

The Wolverine community has been very busy (and it really is a community of developers from all over the world), and we’re improving fast.

I do think that the release cadence will slow down somewhat though, as this has been an unusual burst of activity.

How JasperFx Supports our Customers

Reach out anytime to sales@jasperfx.net to ask us about how we could potentially help your shop with software development using the Critter Stack.

It’s a New Year and hopefully we all get to start on some great new software initiatives. If you happen to be starting something this year that’s going to get you into Event Driven Architecture or Event Sourcing, the Critter Stack (Marten and Wolverine) is a great toolset to get you where you’re going. And of course, JasperFx Software is around to help our clients get the most out of the Critter Stack and support you through architectural decisions, business modeling, and test automation as well.

A JasperFx support plan is more than just a throat to choke when things go wrong. We build in consulting time, and mostly interact with our clients through IM tools like Discord or Slack and occasional Zoom calls when that’s appropriate. And GitHub issues of course for tracking problems or feature requests.

Just thinking about the past week or so, JasperFx has helped clients with:

• Troubleshooting a couple of production and development issues
• Modeling events, event streams, and strategies for projections
• A deep dive into the multi-tenancy support in Marten and Wolverine: the implications of the different options, the performance optimizations that probably have to be done upfront versus those that can be done later, and how those options fit the client's problem domain and business
• Laying out several options for a greenfield Marten project to optimize future performance and scalability through opt-in features, along with the potential drawbacks of those features (like event archiving or stream compacting)
• Working with a couple of clients on how best to configure Wolverine when multiple applications, or multiple modules within the same application, target the same database
• Working with a client on configuring Wolverine for a modular monolith approach that uses completely separate databases and a mix of database-per-tenant and database-per-module
• Explaining how authorization and authentication can be integrated into Wolverine.HTTP, which basically boils down to "the same as MVC Core"
• Many conversations about how to protect your system against concurrency issues and which features in both Marten and Wolverine will help you be more resilient
• Talking through the many configuration possibilities for message sequencing and parallelism in Wolverine and how to match them to different needs
• Fielding several small feature requests to improve Wolverine's usage within modular monolith applications where the same message might need to be handled independently by separate modules
• Pushing a new Wolverine release that included some small requests from a client for their particular usage
• Conferring with a current client on some very large, forthcoming Marten features that will hopefully improve its usability for applications that require complex dashboard screens displaying very rich data. The feature isn't directly part of the client's support agreement per se, but we absolutely pay attention to our clients' use cases within our own internal roadmap for the Critter Stack tools.
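To illustrate the Wolverine.HTTP authorization point in the list above, here's a minimal sketch (the route, role name, and endpoint class are hypothetical, invented just for this example): securing a Wolverine.HTTP endpoint really does use the same ASP.NET Core `[Authorize]` attribute you would put on an MVC Core controller action.

```csharp
// Sketch only: the route and role below are made up for illustration.
// Wolverine.HTTP endpoints honor the standard ASP.NET Core
// authorization attributes, just like MVC Core controller actions do.
using Microsoft.AspNetCore.Authorization;
using Wolverine.Http;

public static class AdminStatusEndpoint
{
    // The familiar [Authorize] attribute from ASP.NET Core,
    // applied directly to a Wolverine.HTTP endpoint method
    [Authorize(Roles = "admin")]
    [WolverineGet("/admin/status")]
    public static string Get() => "All systems nominal";
}

// In Program.cs you'd wire up authentication and authorization
// exactly as in any other ASP.NET Core application, then map the
// Wolverine endpoints:
//
//   builder.Services.AddAuthentication( /* your scheme here */ );
//   builder.Services.AddAuthorization();
//   ...
//   app.UseAuthentication();
//   app.UseAuthorization();
//   app.MapWolverineEndpoints();
```

The point being that there's no parallel security model to learn; the existing ASP.NET Core middleware and attributes apply as-is.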

    But again, that’s only the past couple weeks. If you’re interested in learning more, or want JasperFx to be helping your shop, drop us an email at sales@jasperfx.net or you can DM me just about anywhere.

    Critter Stack Roadmap for 2026

    I normally write this out in January, but I’m feeling like now is a good time to get this out as some of it is in flight. So with plenty of feedback from the other Critter Stack Core team members and a lot of experience seeing where JasperFx Software clients have hit friction in the past couple years, here’s my current thinking about where the Critter Stack development goes for 2026.

    As I’m sure you can guess, every time I’ve written this yearly post, it’s been absurdly off the mark of what actually gets done through the year.

    Critter Watch

    For the love of all that’s good in this world, JasperFx Software needs to get an MVP out the door that’s usable for early adopters who are already clamoring for it. The “Critter Watch” tool, in a nutshell, should be able to tell you everything you need to know about how or why a Critter Stack application is unhealthy and then also give you the tools you need to heal your systems when anything does go wrong.

    The MVP is still shaping up as:

    • A visualization and explanation of the configuration of your Critter Stack application
    • Performance metrics integration from both Marten and Wolverine
    • Event Store monitoring and management of projections and subscriptions
    • Wolverine node visualization and monitoring
    • Dead Letter Queue querying and management
    • Alerting – but I don’t have a huge amount of detail yet. I’m paying close attention to the issues JasperFx clients see in production applications though, and using that to inform what information Critter Watch will surface through its user interface and push notifications

    This work is heavily in flight, and will hopefully accelerate over the holidays and January as JasperFx Software clients tend to be much quieter. I will be publishing a separate vision document soon for users to review.

    The Entire “Critter Stack”

    • We’re standing up the new docs.jasperfx.net (Babu is already working on this) to hold documentation on supporting libraries and more tutorials and sample projects that cross Marten & Wolverine. This will finally add some documentation for Weasel (database utilities and migration support), our command line support, the stateful resource model, the code generation model, and everything to do with DevOps recipes.
    • Play the “Cold Start Optimization” epic across both Marten and Wolverine (and possibly Lamar). I don’t think that true AOT support is feasible, but maybe we can get a lot closer. Have an optimized start mode of some sort that eliminates all or at least most of:
      • Reflection usage in bootstrapping
      • Reflection usage at runtime, which today is really just occasional calls to object.GetType()
      • Assembly scanning of any kind, which we know can be very expensive for some systems with very large dependency trees.
    • Increased and improved integration with EF Core across the stack

    Marten

The biggest set of complaints I'm hearing lately is all about views that span multiple entity types, or projections that involve multiple stream types or multiple entity types. I've also gotten feedback from multiple past clients about Marten's limitations as a data source underneath UI grids, which isn't exactly new feedback. In general, there also appears to be a massive opportunity to improve Marten's usability for many users with more robust, out-of-the-box support for projecting event data to flat, denormalized tables.

I think I'd like to prioritize a body of work in 2026 to alleviate the complicated-view problem:

• The "Composite Projections" epic, where you could use the build products of upstream projections to create multi-stream projection views. I've gotten positive feedback from a couple of JasperFx clients about this idea, and it's also a big opportunity to increase the throughput and scalability of the Async Daemon by making fewer database requests
• Revisit GroupJoin() in the LINQ support, even though that's going to be absolutely miserable to build. GroupJoin() might end up being much easier to use than all of our Include() functionality.
• A first-class model for projecting Marten event data with EF Core. In this proposed model, you'd use an EF Core DbContext to do all the actual writes to the database.

Other than that, some ideas that have been kicking around for a while are:

    • Improve the documentation and sample projects, especially around the usage of projections
    • Take a better look at the full text search features in Marten
    • Finally support the PostGIS extension in Marten. I think that could be something flashy and quick to build, but I’d strongly prefer to do this in the context of an actual client use case.
• Continue to improve our story around multi-stream operations. I'm not enthusiastic about "Dynamic Consistency Boundaries" (DCB) with regard to Marten though, so I'm not sure what this actually means yet. It might end up centering much more on the integration with Wolverine's "aggregate handler workflow," which is already perfectly happy to support strong consistency, even for operations that touch more than one event stream.

    Wolverine

Wolverine is far and away the busiest part of the Critter Stack in terms of active development right now, but I think that will slow down soon. To be honest, most of the work at this point is us reacting tactically to the needs of JasperFx clients and users. In terms of general, strategic themes, I think 2026 will involve:

    • In conjunction with “CritterWatch”, improving Wolverine’s management story around dead letter queueing
    • I would love to expand Wolverine’s database support beyond “just” SQL Server and PostgreSQL
    • Improving the Kafka integration. That’s not our most widely used messaging broker, but that seems to be the leading source of enhancement requests right now

    New Critters?

    We’ve done a lot of preliminary work to potentially build new Critter Stack event store alternatives based on different database engines. I’ve always believed that SQL Server would be the logical next database engine, but we’ve gotten fewer and fewer requests for this as PostgreSQL has become a much more popular database choice in the .NET ecosystem.

    I’m not sure this will be a high priority in 2026, but you never know…