Wolverine Idioms for MediatR Users

The Wolverine community fields a lot of questions from people who are moving to Wolverine from their previous MediatR usage. A natural response is to try to use Wolverine as a pure drop-in replacement for MediatR and even to carry over the existing MediatR idioms they’re already used to. However, Wolverine comes from a different philosophy than MediatR and most of the other “mediator” tools it was inspired by, and using Wolverine with its own idioms can lead to much simpler code and more efficient execution. Inspired by a conversation I had online today, let’s jump into an example that I think shows quite a bit of contrast between the tools.

We’ve tried to lay out some of the differences between the tools in our Wolverine for MediatR Users guide, including the section this post is taken from.

Here’s an example of MediatR usage I borrowed from this blog post that shows the usage of MediatR within a shopping cart subsystem:

public class AddToCartRequest : IRequest<Result>
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

public class AddToCartHandler : IRequestHandler<AddToCartRequest, Result>
{
    private readonly ICartService _cartService;

    public AddToCartHandler(ICartService cartService)
    {
        _cartService = cartService;
    }

    public async Task<Result> Handle(AddToCartRequest request, CancellationToken cancellationToken)
    {
        // Logic to add the product to the cart using the cart service
        bool addToCartResult = await _cartService.AddToCart(request.ProductId, request.Quantity);
        bool isAddToCartSuccessful = addToCartResult; // Check if adding the product to the cart was successful.
        return Result.SuccessIf(isAddToCartSuccessful, "Failed to add the product to the cart."); // Return failure if adding to cart fails.
    }
}

public class CartController : ControllerBase
{
    private readonly IMediator _mediator;

    public CartController(IMediator mediator)
    {
        _mediator = mediator;
    }

    [HttpPost]
    public async Task<IActionResult> AddToCart([FromBody] AddToCartRequest request)
    {
        var result = await _mediator.Send(request);
        if (result.IsSuccess)
        {
            return Ok("Product added to the cart successfully.");
        }
        else
        {
            return BadRequest(result.ErrorMessage);
        }
    }
}

Note the usage of the custom Result type returned from the message handler. Folks using MediatR love these custom Result types for passing information between logical layers because they avoid throwing exceptions and communicate failure cases more clearly.

See Andrew Lock on Working with the result pattern for more information about the Result pattern.
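The Result type in the sample is never shown, but based on its usage it behaves roughly like this minimal sketch (a real system would more likely use a library such as FluentResults or CSharpFunctionalExtensions):

public class Result
{
    private Result(bool isSuccess, string errorMessage)
    {
        IsSuccess = isSuccess;
        ErrorMessage = errorMessage;
    }

    public bool IsSuccess { get; }
    public string ErrorMessage { get; }

    // Succeeds when the condition is true, otherwise carries the error message
    public static Result SuccessIf(bool condition, string errorMessage)
        => new(condition, condition ? null : errorMessage);
}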

Wolverine is all about reducing code ceremony and we always strive to write application code as synchronous pure functions whenever possible, so let’s just write the exact same functionality as above using Wolverine idioms to shrink down the code:

public static class AddToCartRequestEndpoint
{
    // Remember, we can do validation in middleware, or
    // even do a custom Validate() : ProblemDetails method
    // to act as a filter so the main method is the happy path
    [WolverinePost("/api/cart/add"), EmptyResponse]
    public static IStorageAction<Cart> Post(
        AddToCartRequest request,

        // This usage will return a 400 status code if the Cart
        // cannot be found
        [Entity(OnMissing = OnMissing.ProblemDetailsWith400)] Cart cart)
    {
        return cart.TryAddRequest(request) ? Storage.Update(cart) : Storage.Nothing(cart);
    }
}
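As an aside, the Validate() convention mentioned in the comment above might look something like the following sketch. The ProblemDetails filter convention and the WolverineContinue.NoProblems sentinel come from Wolverine.HTTP, but the specific quantity rule here is made up for illustration:

public static class AddToCartRequestEndpoint
{
    // Wolverine.HTTP runs this before Post() and stops the request
    // with a ProblemDetails response unless "no problems" is returned
    public static ProblemDetails Validate(AddToCartRequest request)
    {
        return request.Quantity > 0
            ? WolverineContinue.NoProblems
            : new ProblemDetails { Detail = "Quantity must be positive", Status = 400 };
    }

    // the Post() method from above...
}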

There’s a lot going on above, so let’s dive into some of the details:

I used Wolverine.HTTP to write the HTTP endpoint so we only have one piece of code for our “vertical slice” instead of both a Controller method and a matching message handler for the same logical command. Wolverine.HTTP embraces our Railway Programming model and directly supports the ProblemDetails specification as a means of stopping an HTTP request, so validation pre-conditions can be checked by middleware and the main endpoint method is really just the “happy path”.

The code above is using Wolverine’s “declarative data access” helpers you see in the [Entity] usage. We realized early on that a lot of message handlers or HTTP endpoints need to work on a single domain entity or a handful of entities loaded by identity values riding on either command messages, HTTP requests, or HTTP routes. At runtime, if the Cart isn’t found by loading it from your configured application persistence (which could be EF Core, Marten, or RavenDb at this time), the whole HTTP request would stop with status code 400 and a message communicated through ProblemDetails that the requested Cart cannot be found.

The key point I’m trying to prove is that idiomatic Wolverine results in potentially less repetitive code, less code ceremony, and less layering than MediatR idioms. Sure, it’s going to take a bit to get used to Wolverine idioms, but the potential payoff is code that’s easier to reason about and much easier to unit test — especially if you’ll buy into our A-Frame Architecture approach for organizing code within your slices.
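To back up that unit testing claim, a test against the endpoint method above is a synchronous, infrastructure-free test. This is only a sketch, since it assumes hypothetical Cart behavior that isn’t shown in this post:

public class AddToCartRequestEndpointTests
{
    [Fact]
    public void updates_the_cart_when_the_request_can_be_added()
    {
        // Assuming a Cart whose TryAddRequest() will succeed here
        var cart = new Cart();
        var request = new AddToCartRequest { ProductId = 5, Quantity = 2 };

        var action = AddToCartRequestEndpoint.Post(request, cart);

        // Storage.Update(cart) signals Wolverine to persist the changed Cart
        Assert.IsType<Update<Cart>>(action);
    }
}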

Validation Middleware

As another example, just to show how Wolverine’s runtime differs from MediatR’s, let’s consider the very common case of using Fluent Validation (or now DataAnnotations too!) middleware in front of message handlers or HTTP requests. With MediatR, you might use an IPipelineBehavior<T> implementation like this that will wrap all requests:

    public class ValidationBehaviour<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse> where TRequest : IRequest<TResponse>
    {
        private readonly IEnumerable<IValidator<TRequest>> _validators;
        public ValidationBehaviour(IEnumerable<IValidator<TRequest>> validators)
        {
            _validators = validators;
        }
      
        public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next)
        {
            if (_validators.Any())
            {
                var context = new ValidationContext<TRequest>(request);
                var validationResults = await Task.WhenAll(_validators.Select(v => v.ValidateAsync(context, cancellationToken)));
                var failures = validationResults.SelectMany(r => r.Errors).Where(f => f != null).ToList();
                if (failures.Count != 0)
                    throw new ValidationException(failures);
            }
          
            return await next();
        }
    }

    I’ve seen plenty of alternatives out there with slightly different implementations. In some cases folks use service location to probe the application’s IoC container for any possible IValidator<T> implementations for the current request. In all cases though, the implementations run runtime logic on every single request just to check whether there is any validation to apply. The Wolverine version of the Fluent Validation middleware does things a bit differently, with less runtime overhead and cleaner Exception stack traces when things go wrong. Don’t laugh, we really did design Wolverine quite purposely to avoid the really nasty kind of Exception stack traces you get from frameworks that lean heavily on middleware or “behavior” wrappers, as Wolverine’s predecessor tool FubuMVC did 😦

    Let’s say that you have a Wolverine.HTTP endpoint like so:

    public record CreateCustomer
    (
        string FirstName,
        string LastName,
        string PostalCode
    )
    {
        public class CreateCustomerValidator : AbstractValidator<CreateCustomer>
        {
            public CreateCustomerValidator()
            {
                RuleFor(x => x.FirstName).NotNull();
                RuleFor(x => x.LastName).NotNull();
                RuleFor(x => x.PostalCode).NotNull();
            }
        }
    }

    public static class CreateCustomerEndpoint
    {
        [WolverinePost("/validate/customer")]
        public static string Post(CreateCustomer customer)
        {
            return "Got a new customer";
        }

        [WolverinePost("/validate/customer2")]
        public static string Post2([FromQuery] CreateCustomer customer)
        {
            return "Got a new customer";
        }
    }

    In the application bootstrapping, I’ve added this option:

    app.MapWolverineEndpoints(opts =>
    {
        // more configuration for HTTP...

        // Opting into the Fluent Validation middleware from
        // Wolverine.Http.FluentValidation
        opts.UseFluentValidationProblemDetailMiddleware();
    });

    Just like with MediatR, you would need to register the Fluent Validation validator types in your IoC container as part of application bootstrapping. Now, here’s how Wolverine’s model is very different from MediatR’s pipeline behaviors. While MediatR is applying that ValidationBehaviour to each and every message handler in your application whether or not that message type actually has any registered validators, Wolverine is able to peek into the IoC configuration and “know” whether there are registered validators for any given message type. If there are any registered validators, Wolverine will utilize them in the code it generates to execute the HTTP endpoint method shown above for creating a customer. If there is only one validator, and that validator is registered as a Singleton scope in the IoC container, Wolverine generates this code:

        public class POST_validate_customer : Wolverine.Http.HttpHandler
        {
            private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
            private readonly Wolverine.Http.FluentValidation.IProblemDetailSource<WolverineWebApi.Validation.CreateCustomer> _problemDetailSource;
            private readonly FluentValidation.IValidator<WolverineWebApi.Validation.CreateCustomer> _validator;
    
            public POST_validate_customer(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Http.FluentValidation.IProblemDetailSource<WolverineWebApi.Validation.CreateCustomer> problemDetailSource, FluentValidation.IValidator<WolverineWebApi.Validation.CreateCustomer> validator) : base(wolverineHttpOptions)
            {
                _wolverineHttpOptions = wolverineHttpOptions;
                _problemDetailSource = problemDetailSource;
                _validator = validator;
            }
    
    
    
            public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
            {
                // Reading the request body via JSON deserialization
                var (customer, jsonContinue) = await ReadJsonAsync<WolverineWebApi.Validation.CreateCustomer>(httpContext);
                if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;
                
                // Execute FluentValidation validators
                var result1 = await Wolverine.Http.FluentValidation.Internals.FluentValidationHttpExecutor.ExecuteOne<WolverineWebApi.Validation.CreateCustomer>(_validator, _problemDetailSource, customer).ConfigureAwait(false);
    
                // Evaluate whether or not the execution should be stopped based on the IResult value
                if (result1 != null && !(result1 is Wolverine.Http.WolverineContinue))
                {
                    await result1.ExecuteAsync(httpContext).ConfigureAwait(false);
                    return;
                }
    
    
                
                // The actual HTTP request handler execution
                var result_of_Post = WolverineWebApi.Validation.ValidatedEndpoint.Post(customer);
    
                await WriteString(httpContext, result_of_Post);
            }
    
        }

    I should note that Wolverine’s Fluent Validation middleware will not generate any code for an HTTP endpoint when there are no known Fluent Validation validators for the endpoint’s request model. Moreover, Wolverine generates slightly different code for multiple validators versus a single validator, wringing out a little more efficiency in the common case of only one validator being registered for the request type.

    The point here is that Wolverine is trying to generate the most efficient code possible based on what it can glean from the IoC container registrations and the signature of the HTTP endpoint or message handler methods while the MediatR model has to effectively use runtime wrappers and conditional logic at runtime.

    Marten’s Aggregation Projection Subsystem

    Marten has very rich support for projecting events into read, write, or query models. While there are other capabilities as well, the most common usage is probably to aggregate related events into a singular view. Marten projections can be executed Live, meaning that Marten does the creation of the view by loading the target events into memory and building the view on the fly. Projections can also be executed Inline, meaning that the projected views are persisted as part of the same transaction that captures the events that apply to that projection. For this post though, I’m mostly talking about projections running asynchronously in the background as events are captured into the database (think eventual consistency).

    Aggregate Projections in Marten combine some sort of grouping of events and process them to create a single aggregated document representing the state of those events. These projections come in two flavors:

    Single Stream Projections create a rolled up view of all or a segment of the events within a single event stream. These projections are done either by using the SingleStreamProjection<TDoc, TId> base type or by creating a “self aggregating” Snapshot approach with conventional Create/Apply/ShouldDelete methods that mutate or evolve the snapshot based on new events.

    Multi Stream Projections create a rolled up view of a user-defined grouping of events across streams. These projections are done by sub-classing the MultiStreamProjection<TDoc, TId> class and are further described in Multi-Stream Projections. An example of a multi-stream projection might be a “query model” within an accounting system of some sort that rolls up the value of all unpaid invoices by active client.

    You can also use a MultiStreamProjection to create views that are a segment of a single stream over time or version. Imagine that you have a system that models the activity of a bank account with event sourcing. You could use a MultiStreamProjection to create a view that summarizes the activity of a single bank account within a calendar month.
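    To make that concrete, here’s a hedged sketch of such a projection. The event and view types are made up for illustration, but the Identity() grouping calls follow Marten’s multi-stream projection API:

    public record AccountDebited(Guid AccountId, decimal Amount, DateTimeOffset Timestamp);
    public record AccountCredited(Guid AccountId, decimal Amount, DateTimeOffset Timestamp);

    public class MonthlyAccountActivity
    {
        public string Id { get; set; }
        public decimal TotalDebits { get; set; }
        public decimal TotalCredits { get; set; }
    }

    public class MonthlyAccountActivityProjection
        : MultiStreamProjection<MonthlyAccountActivity, string>
    {
        public MonthlyAccountActivityProjection()
        {
            // "Slice" events from a single account stream into one
            // view document per account per calendar month
            Identity<AccountDebited>(e => $"{e.AccountId}:{e.Timestamp:yyyy-MM}");
            Identity<AccountCredited>(e => $"{e.AccountId}:{e.Timestamp:yyyy-MM}");
        }

        public void Apply(AccountDebited e, MonthlyAccountActivity view)
            => view.TotalDebits += e.Amount;

        public void Apply(AccountCredited e, MonthlyAccountActivity view)
            => view.TotalCredits += e.Amount;
    }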

    The ability to use explicit code to define projections was hugely improved in the Marten 8.0 release.

    Within your aggregation projection, you can express the logic about how Marten combines events into a view through either conventional methods (original, old school Marten) or through completely explicit code.

    Within an aggregation, you also have a number of more advanced options.

    Simple Example

    The most common usage is to create a “write model” that projects the current state for a single stream, so on that note, let’s jump into a simple example.

    I’m huge into epic fantasy book series, hence the silly original problem domain in the very oldest code samples. Hilariously, Marten has fielded and accepted pull requests that corrected our modeling of the timeline of the Lord of the Rings in sample code.

    Martens on a Quest

    Let’s say that we’re building a system to track the progress of a traveling party on a quest within an epic fantasy series like “The Lord of the Rings” or the “Wheel of Time” and we’re using event sourcing to capture state changes when the “quest party” adds or subtracts members. We might very well need a “write model” for the current state of the quest for our command handlers like this one:

    public sealed record QuestParty(Guid Id, List<string> Members)
    {
        // These methods take in events and update the QuestParty
        public static QuestParty Create(QuestStarted started) => new(started.QuestId, []);

        public static QuestParty Apply(MembersJoined joined, QuestParty party) =>
            party with
            {
                Members = party.Members.Union(joined.Members).ToList()
            };

        public static QuestParty Apply(MembersDeparted departed, QuestParty party) =>
            party with
            {
                Members = party.Members.Where(x => !departed.Members.Contains(x)).ToList()
            };

        public static QuestParty Apply(MembersEscaped escaped, QuestParty party) =>
            party with
            {
                Members = party.Members.Where(x => !escaped.Members.Contains(x)).ToList()
            };
    }

    For a little more context, the QuestParty above might be consumed in a command handler like this:

    public record AddMembers(Guid Id, int Day, string Location, string[] Members);

    public static class AddMembersHandler
    {
        public static async Task HandleAsync(AddMembers command, IDocumentSession session)
        {
            // Fetch the current state of the quest
            var quest = await session.Events.FetchForWriting<QuestParty>(command.Id);
            if (quest.Aggregate == null)
            {
                // Bad quest id, do nothing in this sample case
                return;
            }

            var newMembers = command.Members.Where(x => !quest.Aggregate.Members.Contains(x)).ToArray();
            if (!newMembers.Any())
            {
                return;
            }

            quest.AppendOne(new MembersJoined(command.Id, command.Day, command.Location, newMembers));
            await session.SaveChangesAsync();
        }
    }
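    For completeness, the self-aggregating QuestParty snapshot would be registered with Marten along these lines (a sketch; the Inline lifecycle choice and connection string are assumptions):

    builder.Services.AddMarten(opts =>
    {
        opts.Connection(connectionString);

        // Update the QuestParty snapshot in the same transaction
        // that appends its events
        opts.Projections.Snapshot<QuestParty>(SnapshotLifecycle.Inline);
    });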

    How Aggregation Works

    Just to understand a little bit more about the capabilities of Marten’s aggregation projections, let’s look at the diagram below that tries to visualize the runtime workflow of aggregation projections inside of the Async Daemon background process:

    (Diagram: the runtime workflow of aggregation projections inside the Async Daemon)
    1. The Daemon is constantly pushing a range of events at a time to an aggregation projection. For example, Events 1,000 to 2,000 by sequence number
    2. The aggregation “slices” the incoming range of events into a group of EventSlice objects that establishes a relationship between the identity of an aggregated document and the events that should be applied during this batch of updates for that identity. To be more concrete, a single stream projection for QuestParty would be creating an EventSlice for each quest id it sees in the current range of events. Multi-stream projections will have some kind of custom “slicing” or grouping. For example, maybe in our Quest tracking system we have a multi-stream projection that tries to track how many monsters of each type are defeated. That projection might “slice” by looking for all MonsterDefeated events across all streams and group or slice incoming events by the type of monster. The “slicing” logic is automatic for single stream projections, but will require explicit configuration or explicitly written logic for multi stream projections.
    3. Once the projection has a known list of all the aggregate documents that will be updated by the current range of events, the projection will fetch each persisted document, first from any active aggregate cache in memory, then by making a single batched request to the Marten document storage for any missing documents and adding these to any active cache (see Optimizing Performance for more information about the potential caching).
    4. The projection will execute any event enrichment against the now known group of EventSlice. This process gives you a hook to efficiently “enrich” the raw event data with extra data lookups from Marten document storage or even other sources.
    5. Most of the work as a developer is in the application or “Evolve” step of the diagram above. After the “slicing”, the aggregation has turned the range of raw event data into EventSlice objects that contain the current snapshot of a projected document by its identity (if one exists), the identity itself, and the events from within that original range that should be applied on top of the current snapshot to “evolve” it to reflect those events. This can be coded either with the conventional Apply/Create/ShouldDelete methods or using explicit code — which almost inevitably means a switch statement (sketched after this list). Using the QuestParty example again, the aggregation projection would get an EventSlice that contains the identity of an active quest, the snapshot of the current QuestParty document that is persisted by Marten, and the new MembersJoined et al events that should be applied to the existing QuestParty object to derive the new version of QuestParty.
    6. Just before Marten persists all the changes from the application / evolve step, you have the RaiseSideEffects() hook to potentially raise “side effects” like appending additional events based on the now updated state of the projected aggregates or publishing the new state of an aggregate through messaging (Wolverine has first class support for Marten projection side effects through its Marten integration into the full “Critter Stack”)
    7. For the current event range and event slices, Marten will send all aggregate document updates or deletions, new event appending operations, and even outboxed, outgoing messages sent via side effects (if you’re using the Wolverine integration) in batches to the underlying PostgreSQL database. I’m calling this out because we’ve constantly found in Marten development that command batching to PostgreSQL is a huge factor in system performance and the async daemon has been designed to try to minimize the number of network round trips between your application and PostgreSQL at every turn.
    8. Assuming the transaction succeeds for the current event range and the operation batch in the previous step, Marten will call “after commit” observers. This notification for example will release any messages raised as a side effect and actually send those messages via whatever is doing the actual publishing (probably Wolverine).
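    As promised in point 5, here’s what the explicit-code flavor of the QuestParty projection might look like. Treat this as a hedged sketch: the Evolve() override follows my reading of the Marten 8 explicit projection API, so double check the Marten documentation before copying it.

    public class QuestPartyProjection : SingleStreamProjection<QuestParty, Guid>
    {
        // The explicit, switch-based alternative to the conventional
        // Create/Apply methods shown earlier in this post
        public override QuestParty Evolve(QuestParty snapshot, Guid id, IEvent e)
        {
            switch (e.Data)
            {
                case QuestStarted:
                    return new QuestParty(id, []);

                case MembersJoined joined:
                    return snapshot with
                    {
                        Members = snapshot.Members.Union(joined.Members).ToList()
                    };

                case MembersDeparted departed:
                    return snapshot with
                    {
                        Members = snapshot.Members.Where(x => !departed.Members.Contains(x)).ToList()
                    };

                case MembersEscaped escaped:
                    return snapshot with
                    {
                        Members = snapshot.Members.Where(x => !escaped.Members.Contains(x)).ToList()
                    };

                default:
                    return snapshot;
            }
        }
    }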

    Marten happily supports immutable data types for the aggregate documents produced by projections, but also happily supports mutable types. The usage in application code is a little different between the two though.

    Starting with Marten 8.0, we’ve tried somewhat to conform to the terminology used by the Functional Event Sourcing Decider paper by Jeremie Chassaing. To that end, the API now refers to a “snapshot” that really just means a version of the projection and “evolve” as the step of applying new events to an existing “snapshot” to calculate a new “snapshot.”

    Catching Up with Recent Wolverine Releases

    Wolverine has had a very frequent release cadence the past couple months as community contributions, requests from JasperFx Software clients, and yes, sigh, bug reports have flowed in. Right now I think I can justifiably claim that Wolverine is innovating much faster than any of the other comparable tools in the .NET ecosystem.

    Some folks clearly don’t like that level of change, and I’ve always had to field some online criticism for our frequency of releases. I don’t expect that pace to continue forever, of course.

    I thought that now would be a good time to write a little bit about the new features and improvements, just because so much of it happened over the holiday season, starting somewhat arbitrarily from the first of December and running to now.

    Inferred Message Grouping in Wolverine 5.5

    A massively important new feature in Wolverine 5 was our “Partitioned Sequential Messaging” that seeks to effectively head off problems with concurrent message processing by segregating message processing by some kind of business entity identity. Long story short, this feature can almost completely eliminate issues with concurrent access to data without eliminating parallel processing across unrelated messages.

    In Wolverine 5.5 we added the now obvious capability to let Wolverine automatically infer the messaging group id for messages handled by a Saga (the saga identity) or with the Aggregate Handler Workflow (the stream id of the primary event stream being altered in the handler):

    // Telling Wolverine how to assign a GroupId to a message, that we'll use
    // to predictably sort into "slots" in the processing
    opts.MessagePartitioning

        // This tells Wolverine to use the Saga identity as the group id for any message
        // that impacts a Saga or the stream id of any command that is part of the "aggregate handler workflow"
        // integration with Marten
        .UseInferredMessageGrouping()

        .PublishToPartitionedLocalMessaging("letters", 4, topology =>
        {
            topology.MessagesImplementing<ILetterMessage>();
            topology.MaxDegreeOfParallelism = PartitionSlots.Five;
            topology.ConfigureQueues(queue =>
            {
                queue.BufferedInMemory();
            });
        });

    “Classic” .NET Domain Events with EF Core in Wolverine 5.6

    Wolverine is attracting a lot of new users lately who honestly might only have become interested because of another tool’s recent licensing changes, and those users tend to come with a more typical .NET approach to application architecture than Wolverine’s idiomatic vertical slice architecture approach. These new users are also a lot more likely to be using EF Core than Marten, so we’ve had to invest more in EF Core integration.

    Wolverine 5.6 brought an ability to cleanly and effectively utilize a traditional .NET approach for “Domain Event” publishing through EF Core to Wolverine’s messaging.

    I wrote about that at the time in “Classic” .NET Domain Events with Wolverine and EF Core.

    Wolverine 5.7 Knocked Out Bugs

    There weren’t many new features of note, but Wolverine 5.7, released less than a week after 5.6, had five contributors and knocked out a dozen issues. The open issue count in Wolverine crested in December in the low 70s and is down to the low 30s right now.

    Client Requests in Wolverine 5.8

    Wolverine 5.8 gave us some bug fixes, but also a couple of new features requested by JasperFx clients.

    The Community Went Into High Gear with Wolverine 5.9

    Wolverine 5.9 dropped the week before Christmas with contributions from 7 different people.

    The highlights are:

    • Sandeep Desai has been absolutely on fire as a contributor to Wolverine and he made the HTTP Messaging Transport finally usable in this release with several other pull requests in later versions that also improved that feature. This is enabling Wolverine to use HTTP as a messaging transport. I’ve long wanted this feature as a prerequisite for CritterWatch.
    • Lodewijk Sioen added Wolverine middleware support for using Data Annotations with Wolverine.HTTP
    • The Rabbit MQ integration got more robust about reconnecting on errors

    Wolverine 5.10 Kicked off 2026 with a Bang!

    Wolverine 5.10 came out last week with contributions from eleven different folks: plenty of bug fixes that built up over the holidays, plus several random requests from JasperFx clients, because that’s something we do to support our clients.

    Wolverine 5.11 adds More Idempotency Options

    Wolverine 5.11 dropped this week with more bug fixes and new capabilities from five contributors. The big new feature was an improved option for enforcing message idempotency on non-transactional handlers as a request from a JasperFx support client.

    using var host = await Host.CreateDefaultBuilder()
        .UseWolverine(opts =>
        {
            opts.Durability.Mode = DurabilityMode.Solo;

            opts.Services.AddDbContextWithWolverineIntegration<CleanDbContext>(x =>
                x.UseSqlServer(Servers.SqlServerConnectionString));

            opts.Services.AddResourceSetupOnStartup(StartupAction.ResetState);
            opts.Policies.AutoApplyTransactions(IdempotencyStyle.Eager);
            opts.PersistMessagesWithSqlServer(Servers.SqlServerConnectionString, "idempotency");
            opts.UseEntityFrameworkCoreTransactions();

            // THIS RIGHT HERE
            opts.Policies.AutoApplyIdempotencyOnNonTransactionalHandlers();
        }).StartAsync();

    That release also included several bug fixes and an effort from me to go fill in some gaps in the documentation website. That release got us down to the lowest open issue count in years.

    Summary

    The Wolverine community has been very busy. It is genuinely a community of developers from all over the world, and we’re improving fast.

    I do think that the release cadence will slow down somewhat though as this has been an unusual burst of activity.

    How JasperFx Supports our Customers

    Reach out anytime to sales@jasperfx.net to ask us about how we could potentially help your shop with software development using the Critter Stack.

    It’s a New Year and hopefully we all get to start on some great new software initiatives. If you happen to be starting something this year that’s going to get you into Event Driven Architecture or Event Sourcing, the Critter Stack (Marten and Wolverine) is a great toolset to get you where you’re going. And of course, JasperFx Software is around to help our clients get the most out of the Critter Stack and support you through architectural decisions, business modeling, and test automation as well.

    A JasperFx support plan is more than just a throat to choke when things go wrong. We build in consulting time, and mostly interact with our clients through IM tools like Discord or Slack and occasional Zoom calls when that’s appropriate. And GitHub issues of course for tracking problems or feature requests.

    Just thinking about the past week or so, JasperFx has helped clients with:

    • Troubleshooting a couple of production and development issues with clients
    • Modeling events, event streams, and strategies for projections
    • A deep dive into the multi-tenancy support in Marten and Wolverine, the implications of different options, possible performance optimizations that probably have to be done upfront as well as performance optimizations that could be done later, and how these options fit our client’s problem domain and business
    • Laying out several options with Marten for a greenfield project to optimize future performance and scalability through several opt-in features and, of course, the potential drawbacks of those features (like event archiving or stream compacting)
    • Working with a couple of clients on how best to configure Wolverine when multiple applications or multiple modules within the same application are targeting the same database
    • Working with a client on how to configure Wolverine to enable a modular monolith approach with completely separate databases and a mix and match of database per tenant with separate databases per module
    • How authorization and authentication can be integrated into Wolverine.HTTP — which boils down to “basically the same as MVC Core”
    • A lot of conversations about how to protect your system against concurrency issues and what features in both Marten and Wolverine will help you be more resilient
    • Talking through many of the configuration possibilities for message sequencing or parallelism in Wolverine and how to match that to different needs
    • Fielding several small feature requests to improve Wolverine’s usage within modular monolith applications where the same message might need to be handled independently by separate modules
    • Pushing a new Wolverine release that included some small requests from a client for their particular usage
    • Conferring with a current client on some very large, forthcoming features in Marten that will hopefully improve its usability for applications that require complex dashboard screens displaying very rich data. The feature isn’t directly part of the client’s support agreement per se, but we absolutely pay attention to our clients’ use cases within our own internal roadmap for the Critter Stack tools.

    But again, that’s only the past couple weeks. If you’re interested in learning more, or want JasperFx to be helping your shop, drop us an email at sales@jasperfx.net or you can DM me just about anywhere.

    Critter Stack Roadmap for 2026

    I normally write this out in January, but I’m feeling like now is a good time to get this out as some of it is in flight. So with plenty of feedback from the other Critter Stack Core team members and a lot of experience seeing where JasperFx Software clients have hit friction in the past couple years, here’s my current thinking about where the Critter Stack development goes for 2026.

    As I’m sure you can guess, every time I’ve written this yearly post, it’s been absurdly off the mark of what actually gets done through the year.

    Critter Watch

    For the love of all that’s good in this world, JasperFx Software needs to get an MVP out the door that’s usable for early adopters who are already clamoring for it. The “Critter Watch” tool, in a nutshell, should be able to tell you everything you need to know about how or why a Critter Stack application is unhealthy and then also give you the tools you need to heal your systems when anything does go wrong.

    The MVP is still shaping up as:

    • A visualization and explanation of the configuration of your Critter Stack application
    • Performance metrics integration from both Marten and Wolverine
    • Event Store monitoring and management of projections and subscriptions
    • Wolverine node visualization and monitoring
    • Dead Letter Queue querying and management
    • Alerting – but I don’t have a huge amount of detail yet. I’m paying close attention to the issues JasperFx clients see in production applications though, and using that to inform what information Critter Watch will surface through its user interface and push notifications

    This work is heavily in flight, and will hopefully accelerate over the holidays and January as JasperFx Software clients tend to be much quieter. I will be publishing a separate vision document soon for users to review.

    The Entire “Critter Stack”

    • We’re standing up the new docs.jasperfx.net (Babu is already working on this) to hold documentation on supporting libraries and more tutorials and sample projects that cross Marten & Wolverine. This will finally add some documentation for Weasel (database utilities and migration support), our command line support, the stateful resource model, the code generation model, and everything to do with DevOps recipes.
    • Play the “Cold Start Optimization” epic across both Marten and Wolverine (and possibly Lamar). I don’t think that true AOT support is feasible, but maybe we can get a lot closer. Have an optimized start mode of some sort that eliminates all or at least most of:
      • Reflection usage in bootstrapping
      • Reflection usage at runtime, which today is really just occasional calls to object.GetType()
      • Assembly scanning of any kind, which we know can be very expensive for some systems with very large dependency trees.
    • Increased and improved integration with EF Core across the stack

    Marten

    The biggest set of complaints I’m hearing lately is all around views between multiple entity types or projections involving multiple stream types or multiple entity types. I also got some feedback from multiple past clients about the limitation of Marten as a data source underneath UI grids, which isn’t particularly a new bit of feedback. In general, there also appears to be a massive opportunity to improve Marten’s usability for many users by having more robust support in the box for projecting event data to flat, denormalized tables.

    I think I’d like to prioritize a series of work in 2026 to alleviate the complicated view problem:

    • The “Composite Projections” Epic, where you might use the build products of upstream projections to create multi-stream projection views. I’ve gotten positive feedback from a couple of JasperFx clients about this, and it’s also a big opportunity to increase the throughput and scalability of the Async Daemon by making fewer database requests
    • Revisit GroupJoin in the LINQ support, even though that’s going to be absolutely miserable to build. GroupJoin() might end up being a much easier usage than all our Include() functionality.
    • A first class model to project Marten event data with EF Core. In this proposed model, you’d use an EF Core DbContext to do all the actual writes to a database. 

    Other than that, some other ideas that have kicked around for awhile are:

    • Improve the documentation and sample projects, especially around the usage of projections
    • Take a better look at the full text search features in Marten
    • Finally support the PostGIS extension in Marten. I think that could be something flashy and quick to build, but I’d strongly prefer to do this in the context of an actual client use case.
    • Continue to improve our story around multi-stream operations. I’m not enthusiastic about “Dynamic Consistency Boundaries” (DCB) in regard to Marten though, so I’m not sure what this actually means yet. This might end up centering much more on the integration with Wolverine’s “aggregate handler workflow” which is already perfectly happy to support strong consistency models even with operations that touch more than one event stream.

    Wolverine

    Wolverine is far and away the busiest part of the Critter Stack in terms of active development right now, but I think that slows down soon. To be honest, most work at this point is us reacting tactically to JasperFx client or user needs. In terms of general, strategic themes, I think that 2026 will involve:

    • In conjunction with “CritterWatch”, improving Wolverine’s management story around dead letter queueing
    • I would love to expand Wolverine’s database support beyond “just” SQL Server and PostgreSQL
    • Improving the Kafka integration. That’s not our most widely used messaging broker, but that seems to be the leading source of enhancement requests right now

    New Critters?

    We’ve done a lot of preliminary work to potentially build new Critter Stack event store alternatives based on different database engines. I’ve always believed that SQL Server would be the logical next database engine, but we’ve gotten fewer and fewer requests for this as PostgreSQL has become a much more popular database choice in the .NET ecosystem.

    I’m not sure this will be a high priority in 2026, but you never know…

    “Classic” .NET Domain Events with Wolverine and EF Core

    I was helping a new JasperFx Software client this week with how best to integrate a Domain Events strategy into their new Wolverine codebase. This client wanted to use the common model of an EF Core DbContext harvesting domain events raised by different entities and relaying those to Wolverine messaging with proper Wolverine transactional outbox support for system durability. As part of that assistance — and also to have some content for other Wolverine users trying the same thing later — I promised to write a blog post showing how I’d do this kind of integration myself with Wolverine and EF Core, or at least consider a few options. To try to more permanently head off this usage problem for other users, I went into mad scientist mode this evening and rolled out a new Wolverine 5.6 with some important improvements that make this Domain Events pattern much easier to use in combination with EF Core.

    Let’s start with some context about the general kind of approach I’m referring to with…

    Typical .NET Approach with EF Core and MediatR

    I’m largely basing all the samples in this post on Camron Frenzel’s Simple Domain Events with EFCore and MediatR. In his example there was a domain entity like this:

        // Base class that establishes the pattern for publishing
        // domain events within an entity
        public abstract class Entity : IEntity
        {     
            [NotMapped]
            private readonly ConcurrentQueue<IDomainEvent> _domainEvents = new ConcurrentQueue<IDomainEvent>();
    
            [NotMapped]
            public IProducerConsumerCollection<IDomainEvent> DomainEvents => _domainEvents;
    
            protected void PublishEvent(IDomainEvent @event)
            {
                _domainEvents.Enqueue(@event);
            }
    
            protected Guid NewIdGuid()
            {
                return MassTransit.NewId.NextGuid();
            }
        }
    
        public class BacklogItem : Entity
        {
            public Guid Id { get; private set; }
    
            [MaxLength(255)]
            public string Description { get; private set; }
            public virtual Sprint Sprint { get; private set; }
            public DateTime CreatedAtUtc { get; private set; } = DateTime.UtcNow;
    
            private BacklogItem() { }
    
            public BacklogItem(string desc)
            {
                this.Id = NewIdGuid();
                this.Description = desc;
            }
        
            public void CommitTo(Sprint s)
            {
                this.Sprint = s;
                this.PublishEvent(new BacklogItemCommitted(this, s));
            }
        }
    

    Note the CommitTo() method that publishes a BacklogItemCommitted event. In his sample, that event is relayed to MediatR through some customization of an EF Core DbContext like this, taken from the referenced post with some comments that I added:

    public override async Task<int> SaveChangesAsync(CancellationToken cancellationToken = default(CancellationToken))
    {
        await _preSaveChanges();
        var res = await base.SaveChangesAsync(cancellationToken);
        return res;
    }
    
    private async Task _preSaveChanges()
    {
        await _dispatchDomainEvents();
    }
    
    private async Task _dispatchDomainEvents()
    {
        // Find any entity objects that were changed in any way
        // by the current DbContext, and relay them to MediatR
        var domainEventEntities = ChangeTracker.Entries<IEntity>()
            .Select(po => po.Entity)
            .Where(po => po.DomainEvents.Any())
            .ToArray();
    
        foreach (var entity in domainEventEntities)
        {
            // _dispatcher was an abstraction in his post
            // that was a light wrapper around MediatR
            IDomainEvent dev;
            while (entity.DomainEvents.TryTake(out dev))
                await _dispatcher.Dispatch(dev);
        }
    }
    

    The goal of this approach is to make DDD style entity types the entry point and governing “decider” of all business behavior and workflow, and to give these domain model types a way to publish event messages to the rest of the system for side effects outside the state of the entity itself. For example, maybe the backlog system has to publish a message to a Slack room about the backlog item being added to the sprint. You sure as hell don’t want your domain entity to have to know about the infrastructure you use to talk to Slack or web services or whatever.

    Mechanically, I’ve seen this typically done with some kind of Entity base class that either exposes a collection of published domain events like the sample above, or puts some kind of interface like this directly into the Entity objects:

    // Just assume that this little abstraction
    // eventually relays the event messages to Wolverine
    // or whatever messaging tool you're using
    public interface IEventPublisher
    {
        void Publish<T>(T @event);
    }
    
    // Using a Nullo just so you don't have potential
    // NullReferenceExceptions
    public class NulloEventPublisher : IEventPublisher
    {
        public void Publish<T>(T @event)
        {
            // Do nothing.
        }
    }
    
    public abstract class Entity
    {
        public IEventPublisher Publisher { get; set; } = new NulloEventPublisher();
    }
    
    public class BacklogItem : Entity
    {
        public Guid Id { get; private set; } = Guid.CreateVersion7();
    
        public string Description { get; private set; }
        
        // ZOMG, I forgot how annoying ORMs are. Use a document database
        // and stop worrying about making things virtual just for lazy loading
        public virtual Sprint Sprint { get; private set; }
    
        public void CommitTo(Sprint sprint)
        {
            Sprint = sprint;
            Publisher.Publish(new BacklogItemCommitted(Id, sprint.Id));
        }
    }
    

    In the approach of using the abstraction directly inside of your entity classes, you incur the extra overhead of connecting the Entity objects loaded out of EF Core with the implementation of your IEventPublisher interface at runtime. I’ll do a few thought experiments later in this post and try out a couple different alternatives.

    Before going back to EF Core integration ideas, let me deviate into…

    Idiomatic Critter Stack Usage

    Forget EF Core for a second; let’s examine a possible usage with the full “Critter Stack” and use Marten for Event Sourcing instead. In this case, a command handler to add a backlog item to a sprint could look something like this (folks, I didn’t spend much time thinking about how a backlog system would be built here):

    public record BacklogItemCommitted(Guid SprintId);
    public record CommitToSprint(Guid BacklogItemId, Guid SprintId);
    
    // This is utilizing Wolverine's "Aggregate Handler Workflow" 
    // which is the Critter Stack's flavor of the "Decider" pattern
    public static class CommitToSprintHandler
    {
        public static Events Handle(
            // The actual command
            CommitToSprint command,
    
            // Current state of the back log item, 
            // and we may decide to make the commitment here
            [WriteAggregate] BacklogItem item,
    
            // Assuming that Sprint is event sourced, 
            // this is just a read only view of that stream
            [ReadAggregate] Sprint sprint)
        {
            // Use the item & sprint to "decide" if 
            // the system can proceed with the commitment
            return [new BacklogItemCommitted(command.SprintId)];
        }
    }
    

    In the code above we’re appending the BacklogItemCommitted event returned from the method to Marten. If you need to carry out side effects outside the scope of this handler using that event as a message input, you have a couple of options for having Wolverine relay it through its messaging: event forwarding (faster, but unordered) or event subscriptions (strictly ordered, but that always means slower).
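    For reference, the event forwarding option is opted into during bootstrapping through the Wolverine.Marten integration, something like this sketch (the connection string is assumed):

    builder.Services.AddMarten(opts =>
        {
            opts.Connection(connectionString);
        })
        // Enlist Marten in Wolverine's transactional outbox
        .IntegrateWithWolverine()
        // Relay newly captured events on to Wolverine messaging
        // once the transaction commits (fast, but unordered)
        .EventForwardingToWolverine();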

    I should also say that if the events returned from the function above are also being forwarded as messages and not just being appended to the Marten event store, that messaging is completely integrated with Wolverine’s transactional outbox support. That’s a key differentiation all by itself from a similar MediatR based approach that doesn’t come with outbox support.

    That’s it, that’s the whole handler, but here are some things I would want you to take away from that code sample above:

    • Yes, the business logic is embedded directly in the handler method instead of being buried in the BacklogItem or Sprint aggregates. We are very purposely going down a Functional Programming (adjacent? curious?) approach where the logic is primarily in pure “Decider” functions
    • I think the code above clearly shows the relationship between the system input (the CommitToSprint command message) and the potential side effects and changes in state of the system. This relative ease of reasoning about the code is of the utmost importance for system maintainability. We can look at the handler code and know that executing that message will potentially lead to events or event messages being published. I’m going to hit this point again from some of the other potential approaches because I think this is a vital point.
    • Testability of the business logic is easy with the pure function approach (see the test sketch after this list)
    • There are no marker interfaces, Entity base classes, or jumping through layers. There’s no repository or factory
    • Yes, there is absolutely a little bit of “magic” up above, but you can get Wolverine to show you the exact generated code around your handler to explain what it’s doing
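    To put some weight behind the testability claim above, here’s what a unit test for that handler might look like. This is a sketch that assumes test-friendly ways to construct the BacklogItem and Sprint aggregates, which aren’t shown in this post:

    public class CommitToSprintHandlerTests
    {
        [Fact]
        public void decides_to_commit_the_item_to_the_sprint()
        {
            var command = new CommitToSprint(Guid.NewGuid(), Guid.NewGuid());
            var item = new BacklogItem();
            var sprint = new Sprint();

            // Pure function: inputs in, events out, no mocks required
            var events = CommitToSprintHandler.Handle(command, item, sprint);

            var e = Assert.IsType<BacklogItemCommitted>(Assert.Single(events));
            Assert.Equal(command.SprintId, e.SprintId);
        }
    }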

    So enough of that, let’s start with some possible alternatives for Wolverine integration of domain events from domain entity objects with EF Core.

    Relay Events from Your Entity Subclass to Wolverine

    Switching back to EF Core integration, let’s look at a possible approach to teach Wolverine how to scrape domain events for publishing from your own custom Entity layer supertype, like this one that we’ll put behind our BacklogItem type:

    // Of course, if you're into DDD, you'll probably 
    // use many more marker interfaces than I do here, 
    // but you do you and I'll do me in throwaway sample code
    public abstract class Entity
    {
        public List<object> Events { get; } = new();
    
        public void Publish(object @event)
        {
            Events.Add(@event);
        }
    }
    
    public class BacklogItem : Entity
    {
        public Guid Id { get; private set; }
    
        public string Description { get; private set; }
        public virtual Sprint Sprint { get; private set; }
        public DateTime CreatedAtUtc { get; private set; } = DateTime.UtcNow;
        
        public void CommitTo(Sprint sprint)
        {
            Sprint = sprint;
            Publish(new BacklogItemCommitted(Id, sprint.Id));
        }
    }
    

    Let’s utilize this a little bit within a Wolverine handler, first with explicit code:

    public static class CommitToSprintHandler
    {
        public static async Task HandleAsync(
            CommitToSprint command,
            ItemsDbContext dbContext)
        {
            var item = await dbContext.BacklogItems.FindAsync(command.BacklogItemId);
            var sprint = await dbContext.Sprints.FindAsync(command.SprintId);
            
            // This method would cause an event to be published within
            // the BacklogItem object here that we need to gather up and
            // relay to Wolverine later
            item.CommitTo(sprint);
            
            // Wolverine's transactional middleware handles 
            // everything around SaveChangesAsync() and transactions
        }
    }
    

    Or a little bit cleaner with some Wolverine “magic” with Wolverine’s declarative persistence support if you’re so inclined:

    public static class CommitToSprintHandler
    {
        public static IStorageAction<BacklogItem> Handle(
            CommitToSprint command,
            
            // There's a naming convention here about how
            // Wolverine "knows" the id for the BacklogItem
            // from the incoming command
            [Entity] BacklogItem item,
            [Entity] Sprint sprint
            )
        {
            // This method would cause an event to be published within
            // the BacklogItem object here that we need to gather up and
            // relay to Wolverine later
            item.CommitTo(sprint);
    
            // This is necessary to "tell" Wolverine to put transactional middleware around the handler.
            // Just taking in the right DbContext type as a dependency
            // would work just as well if you don't like the Wolverine
            // magic
            return Storage.Update(item);
        }
    }
    

    Now, let’s add some Wolverine configuration to just make this pattern work:

    builder.Host.UseWolverine(opts =>
    {
        // Setting up Sql Server-backed message storage
        // This requires a reference to Wolverine.SqlServer
        opts.PersistMessagesWithSqlServer(connectionString, "wolverine");
    
        // Set up Entity Framework Core as the support
        // for Wolverine's transactional middleware
        opts.UseEntityFrameworkCoreTransactions();
        
        // THIS IS A NEW API IN Wolverine 5.6!
        opts.PublishDomainEventsFromEntityFrameworkCore<Entity>(x => x.Events);
    
        // Enrolling all local queues into the
        // durable inbox/outbox processing
        opts.Policies.UseDurableLocalQueues();
    });
    

    In the Wolverine configuration above, the EF Core transactional middleware now “knows” how to scrape out possible domain events from the active DbContext.ChangeTracker and publish them through Wolverine. Moreover, the EF Core transactional middleware is doing all the operation ordering for you so that the events are enqueued as outgoing messages as part of the transaction and potentially persisted to the transactional inbox or outbox (depending on configuration) before the transaction is committed.

    To make this as clear as possible, this approach is completely reliant on the EF Core transactional middleware.

    Oh, and also note that this domain event “scraping” is also supported and tested with the IDbContextOutbox<T> service if you want to use this in application code outside of Wolverine message handlers or HTTP endpoints.

    In the future this approach could also support the thread-safe collection usage shown in the sample from the first section, but I’m dubious that that’s necessary.

    If I were building a system that embeds domain event publishing directly in domain model entity classes, I would prefer this approach. But, let’s talk about another option that will not require any changes to Wolverine…

    Relay Events from Entity to Wolverine Cascading Messages

    In this approach, which I grant some people won’t like at all, we’ll simply pipe the event messages from the domain entity right to Wolverine and utilize Wolverine’s cascading message feature.

    This time I’m going to change the BacklogItem entity class to something like this:

    public class BacklogItem 
    {
        public Guid Id { get; private set; }
    
        public string Description { get; private set; }
        public virtual Sprint Sprint { get; private set; }
        public DateTime CreatedAtUtc { get; private set; } = DateTime.UtcNow;
        
        // The exact return type isn't hugely important here
        public object[] CommitTo(Sprint sprint)
        {
            Sprint = sprint;
            return [new BacklogItemCommitted(Id, sprint.Id)];
        }
    }
    

    With the handler signature:

    public static class CommitToSprintHandler
    {
        public static object[] Handle(
            CommitToSprint command,
            
            // There's a naming convention here about how
            // Wolverine "knows" the id for the BacklogItem
            // from the incoming command
            [Entity] BacklogItem item,
            [Entity] Sprint sprint
            )
        {
            return item.CommitTo(sprint);
        }
    }
    

    The approach above lets you make the handler a single pure function, which is always great for unit testing; eliminates the need to do any customization of the DbContext type; makes it unnecessary to bother with any kind of IEventPublisher interface; and lets you keep the logic for what event messages should be raised completely in your domain model entity types.

    I’d also argue that this approach makes it clearer to later developers that “hey, additional messages may be published as part of handling the CommitToSprint command,” and I think that’s invaluable. I’ll harp on this more later, but I think the traditional, MediatR-flavored approach to domain events from the first example at the top makes application code harder to reason about and therefore more buggy over time.
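    And since the handler is a pure function, a test can assert exactly which messages will cascade. Again a sketch, assuming the entities can be constructed in a test:

    [Fact]
    public void commit_to_sprint_cascades_the_committed_event()
    {
        var item = new BacklogItem();
        var sprint = new Sprint();
        var command = new CommitToSprint(item.Id, sprint.Id);

        // The returned array is exactly the set of messages that
        // Wolverine will publish as cascading messages
        var messages = CommitToSprintHandler.Handle(command, item, sprint);

        Assert.Single(messages.OfType<BacklogItemCommitted>());
    }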

    Embedding IEventPublisher into the Entities

    Lastly, let’s move to what I think is my least favorite approach, one that I will from this moment on be recommending against for JasperFx clients, but that is now completely supported by Wolverine 5.6+! Let’s use an IEventPublisher interface like this:

    // Just assume that this little abstraction
    // eventually relays the event messages to Wolverine
    // or whatever messaging tool you're using
    public interface IEventPublisher
    {
        void Publish<T>(T @event) where T : IDomainEvent;
    }
    
    // Using a Nullo just so you don't have potential
    // NullReferenceExceptions
    public class NulloEventPublisher : IEventPublisher
    {
        public void Publish<T>(T @event) where T : IDomainEvent
        {
            // Do nothing.
        }
    }
    
    public abstract class Entity
    {
        public IEventPublisher Publisher { get; set; } = new NulloEventPublisher();
    }
    
    public class BacklogItem : Entity
    {
        public Guid Id { get; private set; } = Guid.CreateVersion7();
    
        public string Description { get; private set; }
        
        // ZOMG, I forgot how annoying ORMs are. Use a document database
        // and stop worrying about making things virtual just for lazy loading
        public virtual Sprint Sprint { get; private set; }
    
        public void CommitTo(Sprint sprint)
        {
            Sprint = sprint;
            // Note that BacklogItemCommitted has to implement IDomainEvent
            // to satisfy the Publish<T>() constraint above
            Publisher.Publish(new BacklogItemCommitted(Id, sprint.Id));
        }
    }
    

    Now, on to a Wolverine implementation for this pattern. You’ll need to do just a couple of things. First, add this line of configuration to Wolverine, and note that there are no generic arguments here:

    // This will set you up to scrape out domain events in the
    // EF Core transactional middleware using a special service
    // I'm just about to explain
    opts.PublishDomainEventsFromEntityFrameworkCore();
    

    Now, build a real implementation of that IEventPublisher interface above:

    public class EventPublisher(OutgoingDomainEvents events) : IEventPublisher
    {
        public void Publish<T>(T @event) where T : IDomainEvent
        {
            events.Add(@event);
        }
    }
    

    OutgoingDomainEvents is a service from the WolverineFx.EntityFrameworkCore NuGet package that is registered as Scoped by the usage of the EF Core transactional middleware. Next, register your custom IEventPublisher with the Scoped lifecycle:

    opts.Services.AddScoped<IEventPublisher, EventPublisher>();
    

    How do you wire up IEventPublisher to the domain entities getting loaded out of your EF Core DbContext? Frankly, I don’t want to know. Maybe a repository abstraction around your DbContext types? Dunno. I hate that kind of thing in code, but I perfectly trust *you* to do that and to not make me see that code.
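
    If you absolutely must see a sketch of that wiring anyway, here’s one hedged possibility, where BacklogRepository and BacklogDbContext are purely hypothetical types that exist only for illustration:

    public class BacklogRepository
    {
        private readonly BacklogDbContext _context;
        private readonly IEventPublisher _publisher;

        public BacklogRepository(BacklogDbContext context, IEventPublisher publisher)
        {
            _context = context;
            _publisher = publisher;
        }

        public async Task<BacklogItem?> FindItemAsync(Guid id)
        {
            var item = await _context.BacklogItems.FindAsync(id);
            if (item != null)
            {
                // The one and only job of this wrapper: connect the
                // scoped publisher to the entity before handing it back
                item.Publisher = _publisher;
            }

            return item;
        }
    }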

    What’s important is that within a message handler or HTTP endpoint, if you resolve the IEventPublisher through DI and use the EF Core transactional middleware, the domain events published to that interface will be piped correctly into Wolverine’s active messaging context.

    Likewise, if you are using IDbContextOutbox<T>, the domain events published to IEventPublisher will be correctly piped to Wolverine if you:

    1. Pull both IEventPublisher and IDbContextOutbox<T> from the same scoped service provider (nested container in Lamar / StructureMap parlance)
    2. Call IDbContextOutbox<T>.SaveChangesAndFlushMessagesAsync()

    So yes, there’s some sleight of hand going on here, but it’s all in the name of keeping your domain entities completely synchronous.
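
    To make the IDbContextOutbox<T> usage concrete, here’s a hedged sketch where OrdersDbContext, Order, and PlaceOrder are hypothetical stand-ins for your own types:

    public class PlaceOrderService
    {
        private readonly IDbContextOutbox<OrdersDbContext> _outbox;
        private readonly IEventPublisher _publisher;

        // Both dependencies *must* be resolved from the same scoped
        // service provider for the event relay to work
        public PlaceOrderService(
            IDbContextOutbox<OrdersDbContext> outbox,
            IEventPublisher publisher)
        {
            _outbox = outbox;
            _publisher = publisher;
        }

        public async Task PlaceAsync(PlaceOrder command)
        {
            // Order is assumed to inherit from the Entity base class above,
            // so we can attach the scoped publisher before any behavior runs
            var order = new Order(command.OrderId) { Publisher = _publisher };
            _outbox.DbContext.Orders.Add(order);

            // Raises domain events through the publisher
            order.MarkPlaced();

            // Persists the entity *and* flushes the captured domain events
            // through Wolverine's outbox
            await _outbox.SaveChangesAndFlushMessagesAsync();
        }
    }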

    Last note: in unit testing you might use a stand-in “Spy” like this:

    public class RecordingEventPublisher : OutgoingMessages, IEventPublisher
    {
        public void Publish<T>(T @event) where T : IDomainEvent
        {
            Add(@event);
        }
    }
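
    For instance, a state-based unit test against the BacklogItem entity above could look like this sketch, again assuming that BacklogItemCommitted implements IDomainEvent and that Sprint can be constructed directly:

    public class when_committing_a_backlog_item
    {
        [Fact]
        public void publishes_the_committed_event()
        {
            var publisher = new RecordingEventPublisher();

            // Attach the spy in place of the default NulloEventPublisher
            var item = new BacklogItem { Publisher = publisher };
            var sprint = new Sprint();

            item.CommitTo(sprint);

            // RecordingEventPublisher is just a collection of the messages
            // it received, so the assertion is simple state-based testing
            var @event = publisher.OfType<BacklogItemCommitted>().Single();
            @event.SprintId.ShouldBe(sprint.Id);
        }
    }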
    

    Summary

    I have always hated this Domain Events pattern and much prefer the full “Critter Stack” approach with the Decider pattern and event sourcing. But, Wolverine is picking up a lot more users who combine it with EF Core (and JasperFx deeply appreciates these customers!), and I know damn well that there will be more and more demand for this pattern as people with more traditional DDD backgrounds, who are used to more DI-reliant tools, transition to Wolverine. Now was an awfully good time to plug this gap.

    If it were me, I would also prefer having an Entity just store published domain events on itself and depend on Wolverine “scraping” those events out of the DbContext change tracking so you don’t have to do any kind of gymnastics and extra layering to attach some kind of IEventPublisher to your Entity types.

    Lastly, if you’re comparing this straight up to the MediatR approach, just keep in mind that this is not an apples-to-apples comparison, because Wolverine also needs to correctly utilize its transactional outbox for resiliency, which is a feature that MediatR does not provide.

    The Critter Stack Gets Even Better at Testing

    My internal code name for one of the new features I’m describing is “multi-stage tracked sessions,” which somehow got me thinking of the ZZ Top song “Stages” and their Afterburner album, because that album became the soundtrack for getting this work done this week. Not ZZ Top’s best stuff, but there are still some bangers on it, or at least *I* loved how it sounded on my Dad’s old phonograph player when I was a kid. For what it’s worth, my favorite ZZ Top albums cover to cover are Degüello and their La Futura comeback album.

    I was heavily influenced by Extreme Programming in my early career and that’s made me have a very deep appreciation for the quality of “Testability” in the development tools I use and especially for the tools like Marten and Wolverine that I work on. I would say that one of the differentiators for Wolverine over other .NET messaging libraries and application frameworks is its heavy focus and support for automated testing of your application code.

    The Critter Stack community released Marten 8.14 and Wolverine 5.1 today with some significant improvements to our testing support. These new features mostly originated from my work with JasperFx Software clients that give me a first hand look into what kinds of challenges our users hit automating tests that involve multiple layers of asynchronous behavior.

    Stubbed Message Handlers in Wolverine

    The first improvement is Wolverine getting the ability to let you temporarily apply stubbed message handlers to a bootstrapped application in tests. The key driver for this feature is teams that take advantage of Wolverine’s request/reply capabilities through messaging.

    Jumping into an example, let’s say that your system interacts with another service that estimates delivery costs for ordered items. At some point the system might reach out through a request/reply call in Wolverine to estimate an item’s delivery before making a purchase, as in this code:

    // This query message is normally sent to an external system through Wolverine
    // messaging
    public record EstimateDelivery(int ItemId, DateOnly Date, string PostalCode);
    
    // This message type is a response from an external system
    public record DeliveryInformation(TimeOnly DeliveryTime, decimal Cost);
    
    public record MaybePurchaseItem(int ItemId, Guid LocationId, DateOnly Date, string PostalCode, decimal BudgetedCost);
    public record MakePurchase(int ItemId, Guid LocationId, DateOnly Date);
    public record PurchaseRejected(int ItemId, Guid LocationId, DateOnly Date);
    
    public static class MaybePurchaseHandler
    {
        public static Task<DeliveryInformation> LoadAsync(
            MaybePurchaseItem command, 
            IMessageBus bus, 
            CancellationToken cancellation)
        {
            var (itemId, _, date, postalCode, budget) = command;
            var estimateDelivery = new EstimateDelivery(itemId, date, postalCode);
            
            // Let's say this is doing a remote request and reply to another system
            // through Wolverine messaging
            return bus.InvokeAsync<DeliveryInformation>(estimateDelivery, cancellation);
        }
        
        public static object Handle(
            MaybePurchaseItem command, 
            DeliveryInformation estimate)
        {
    
            if (estimate.Cost <= command.BudgetedCost)
            {
                return new MakePurchase(command.ItemId, command.LocationId, command.Date);
            }
    
            return new PurchaseRejected(command.ItemId, command.LocationId, command.Date);
        }
    }
    

    And for a little more context, the EstimateDelivery message will always be sent to an external system in this configuration:

    var builder = Host.CreateApplicationBuilder();
    builder.UseWolverine(opts =>
    {
        opts
            .UseRabbitMq(builder.Configuration.GetConnectionString("rabbit"))
            .AutoProvision();
    
        // Just showing that EstimateDelivery is handled by
        // whatever system is on the other end of the "estimates" queue
        opts.PublishMessage<EstimateDelivery>()
            .ToRabbitQueue("estimates");
    });
    

    In testing scenarios, maybe the external system isn’t available at all, or it’s just much more challenging to run tests that also include the external system, or maybe you’d just like to write more isolated tests against your service’s behavior before even trying to integrate with the other system (my personal preference anyway). To that end we can now stub the remote handling like this:

    public static async Task try_application(IHost host)
    {
        host.StubWolverineMessageHandling<EstimateDelivery, DeliveryInformation>(
            query => new DeliveryInformation(new TimeOnly(17, 0), 1000));
    
        var locationId = Guid.NewGuid();
        var itemId = 111;
        var expectedDate = new DateOnly(2025, 12, 1);
        var postalCode = "78750";
    
        var maybePurchaseItem = new MaybePurchaseItem(itemId, locationId, expectedDate, postalCode,
            500);
        
        var tracked =
            await host.InvokeMessageAndWaitAsync(maybePurchaseItem);
        
        // The estimated cost from the stub was more than we budgeted
        // so this message should have been published
        
        // This line is an assertion too that there was a single message
        // of this type published as part of the message handling above
        var rejected = tracked.Sent.SingleMessage<PurchaseRejected>();
        rejected.ItemId.ShouldBe(itemId);
        rejected.LocationId.ShouldBe(locationId);
    }
    

    After making this call:

            host.StubWolverineMessageHandling<EstimateDelivery, DeliveryInformation>(
                query => new DeliveryInformation(new TimeOnly(17, 0), 1000));
    

    Calling this from our Wolverine application:

            // Let's say this is doing a remote request and reply to another system
            // through Wolverine messaging
            return bus.InvokeAsync<DeliveryInformation>(estimateDelivery, cancellation);
    

    will use the stubbed logic we registered rather than sending the message to the external system. This lets you substitute fake behavior for difficult-to-use external services in your tests.

    For the next test, we can completely remove the stub behavior and revert back to the original configuration like this:

    public static void revert_stub(IHost host)
    {
        // Selectively clear out the stub behavior for only one message
        // type
        host.WolverineStubs(stubs =>
        {
            stubs.Clear<EstimateDelivery>();
        });
        
        // Or just clear out all active Wolverine message handler
        // stubs
        host.ClearAllWolverineStubs();
    }
    

    There’s a bit more to the feature you can read about in our documentation, but hopefully you can see right away how this can be useful for effectively stubbing out the behavior of external systems through Wolverine in tests.

    And yes, some older .NET messaging frameworks already had *this* feature, and it’s been occasionally requested for Wolverine, so I’m happy to say we now have this important and useful capability.

    Forcing Marten’s Asynchronous Daemon to “Catch Up”

    Marten has had the IDocumentStore.WaitForNonStaleProjectionDataAsync(timeout) API (see the documentation for an example) for quite a while now. It lets you pause a test while any running asynchronous projections or subscriptions catch up to wherever the event store “high water mark” was when you originally called the method. Hopefully, this lets ongoing background work proceed until the point where it’s safe for you to move on to the “Assert” part of your automated tests. As a convenience, this API is also available through extension methods on both IHost and IServiceProvider.
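
    For reference, the basic usage is a one-liner. A sketch, where theStore below is your application’s IDocumentStore:

    // Pause the test until all asynchronous projections and subscriptions
    // have caught up to the event store's current high water mark, or fail
    // with a diagnostic exception if the timeout elapses first
    await theStore.WaitForNonStaleProjectionDataAsync(TimeSpan.FromSeconds(30));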

    We’ve recently invested time into this API to make it provide much more contextual information about what’s happening asynchronously if the “waiting” does not complete. Specifically, we’ve made the API throw an exception that embeds a table of where every asynchronous projection or subscription ended up compared to the event store’s “high water mark” (the highest sequential identifier assigned to a persisted event in the database). In this last release we made sure that the textual table also shows any projections or subscriptions that never recorded any progress with a sequence of “0,” so you can see what did or didn’t happen. We have also changed the API to record any exceptions thrown by the asynchronous daemon (serialization errors? application errors from *your* projection code? database errors?) and have those exceptions piped out in the failure messages when the “WaitFor” API does not successfully complete.

    Okay, with all of that out of the way, we also added a completely new alternative API for the asynchronous daemon that forces the daemon to quickly process all outstanding events through every asynchronous projection or subscription right this second and surface any exceptions that it encounters. We call this the “catch up” API:

            using var daemon = await theStore.BuildProjectionDaemonAsync();
            await daemon.CatchUpAsync(CancellationToken.None);
    

    This mode is faster and hopefully more reliable than the WaitForNonStaleProjectionDataAsync() approach because it happens inline and shortcuts a lot of the normal asynchronous polling and messaging within the daemon’s usual processing.

    There’s also an IHost.CatchUpAsync() or IServiceProvider.CatchUpAsync() convenience method for test usage as well.
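
    In test code where you already have the application’s IHost, that usage is just:

    // Force all asynchronous projections and subscriptions to process
    // every outstanding event right now, surfacing any exceptions
    await host.CatchUpAsync();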

    Multi-Stage Tracked Sessions

    I’m obviously biased, but I’d say that Wolverine’s tracked session capability is a killer feature that makes Wolverine stand apart from other messaging tools in the .NET ecosystem, and it goes a long way toward making integration testing through Wolverine’s asynchronous messaging productive and effective.

    But, what if you have a testing scenario where you:

    1. Carry out some kind of action (an HTTP request invoked through Alba? publishing a message internally within your application?) that leads to messages being published in Wolverine that might in turn lead to even more messages getting published within your Wolverine system or other tracked systems
    2. Along the way, handling one or more commands leads to events being appended to a Marten event store
    3. An asynchronously executing projection might append other events or publish messages through Marten’s RaiseSideEffects() capability, or an event subscription might in turn publish other Wolverine messages that start up an all new cycle of “when is the system really done with all the work it has started?”

    That might sound a little bit contrived, but it reflects real world scenarios I’ve discussed with multiple JasperFx clients in just the past couple weeks. With their help and some input from the community, we came up with this new extension to Wolverine’s “tracked sessions” to also track and wait for work spawned by Marten. Consider this bit of code from the tests for this feature:

    var tracked = await _host.TrackActivity()
        
        // This new helper just resets the main Marten store
        // Equivalent to calling IHost.ResetAllMartenDataAsync()
        .ResetAllMartenDataFirst()
        
        .PauseThenCatchUpOnMartenDaemonActivity(CatchUpMode.AndResumeNormally)
        .InvokeMessageAndWaitAsync(new AppendLetters(id, ["AAAACCCCBDEEE", "ABCDECCC", "BBBA", "DDDAE"]));
    

    To add some context, handling the AppendLetters command message appends events to a Marten stream and possibly cascades another Wolverine message that also appends events. At the same time, there are asynchronous projections and event subscriptions that will publish messages through Wolverine as they run. We can now make this kind of testing scenario much more feasible and hopefully more reliable (async-heavy tests are super prone to being “blinking,” flaky tests) through the usage of the PauseThenCatchUpOnMartenDaemonActivity() extension method from the Wolverine.Marten library.

    In the bit of test code above, that API is:

    1. Registering a “before” action to pause all async daemon activity before executing the “Act” part of the tracked session which in this case is calling IMessageBus.InvokeAsync() against an AppendLetters command
    2. Registering a 2nd stage of the tracked session

    When this tracked session is executed, the following sequence happens:

    1. The tracked session calls Marten’s ResetAllMartenDataAsync() in the main DocumentStore for the application to effectively rewind the database state down to your defined initial state
    2. IMessageBus.InvokeAsync(AppendLetters) is called as the actual “execution” of the tracked session
    3. The tracked session is watching everything going on with Wolverine messaging and waits until all “cascaded” messages are complete — and that is recursive. Basically, the tracked session waits until all subsequent messaging activity in the Wolverine application is complete
    4. The 2nd stage we registered to “CatchUp” means the tracked session calls Marten’s new “CatchUp” API to force all asynchronous projections and event subscriptions in the system to immediately process all persisted events. This also restarts the tracked session monitoring of any Wolverine messaging activity so that this stage will only complete when all detected Wolverine messaging activity is completed.
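
    Once the tracked session completes, the “Assert” portion of the test can safely interrogate the asynchronously built projection documents. Here’s a sketch, where LetterCounts is a hypothetical projection document for this sample:

    // Resolve the application's document store and load the projected
    // document that the asynchronous projection should have built by now
    var store = _host.Services.GetRequiredService<IDocumentStore>();
    await using var session = store.LightweightSession();

    var counts = await session.LoadAsync<LetterCounts>(id);
    counts.ShouldNotBeNull();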

    By using this new capability inside of the older tracked session feature, we’re able to test all the way from the original message input, through any subsequent messages triggered by that message, through the asynchronous Marten behavior those messages cause, and on through any further messages that behavior publishes back through Wolverine.

    Long story short, this gives us a reliable way to know when the “Act” part of a test is actually complete so we can proceed to the “Assert” portion. Moreover, this new feature also tries really hard to provide visibility into the asynchronous Marten behavior and the second stage messaging behavior in the case of test failures.

    Summary

    None of this is particularly easy conceptually, and it’s admittedly here because of relatively hard problems in test automation that you might eventually run into. Selfishly, I needed to get these new features into the hands of a client tomorrow and ran out of time to better document these new features, so you get this braindump blog post.

    If it helps, I’m going to talk through these new capabilities a bit more in our next Critter Stack live stream tomorrow (Nov. 6th).

    Wolverine Does More to Simplify Server Side Code

    Just to set myself up with some pressure to perform, let me hype up a live stream on Wolverine I’m doing later this week!

    I’m doing a live stream on Thursday afternoon (U.S. friendly this time) entitled Vertical Slices the Critter Stack Way based on a fun, meandering talk I did for Houston DNUG and an abbreviated version at Commit Your Code last month.

    So, yes, it’s technically about the “Vertical Slice Architecture” in general and specifically with Marten and Wolverine, but more importantly, it’s about the special sauce in Wolverine that does more — in my opinion of course — than any other server side .NET application framework to simplify your code and improve testability. In the live stream, I’m going to discuss:

    • A little bit about how I think modern layered architecture approaches and “Ports and Adapters” style approaches can sometimes lead to poor results over time
    • The qualities of a code base that I think are most important (the ability to reason about the behavior of the code, testability of all sorts, ease of iteration, and modularity)
    • How Wolverine’s low code ceremony improves outcomes and the qualities I listed above by reducing layering and shrinking your code into a much tighter vertical slice approach so you can actually see what your system does later on
    • Adopting Wolverine’s idiomatic “A-Frame Architecture” approach and “imperative shell, functional core” thinking to improve testability
    • A sampling of the ways that Wolverine can hugely simplify data access in simpler scenarios and how it can help you keep more complicated data access much closer to behavioral code so you can actually reason about the cause and effects between those two things. And all of that while happily letting you leverage every bit of power in whatever your database or data access tooling happens to be. Seriously, layering approaches and abstractions that obfuscate the database technologies and queries within your system are a very common source of poor system performance in Onion/Clean Architecture approaches.
    • Using Wolverine.HTTP as an alternative AspNetCore Endpoint model and why that’s simpler in the end than any kind of “Mediator” tooling inside of MVC Core or Minimal API
    • Wolverine’s adaptive approach to middleware
    • The full “Critter Stack” combination with Marten and how that leads to arguably the simplest and cleanest code for CQRS command handlers on the planet
    • Wolverine’s goodies for the majority of .NET devs using the venerable EF Core tooling as well

    If you’ve never heard of Wolverine or haven’t really paid much attention to it yet, I’m most certainly inviting you to the live stream to give it a chance. If you’ve blown Wolverine off in the past as “yet another messaging tool in .NET,” come find out why that is most certainly not the full story because Wolverine will do much more for you within your application code than other, mere messaging frameworks in .NET or even any of the numerous “Mediator” tools floating around.

    Wolverine 5 and Modular Monoliths

    In the announcement for the Wolverine 5.0 release last week, I left out a pretty big set of improvements for modular monolith support, specifically in how Wolverine can now work with multiple databases from one service process.

    Wolverine works closely with databases for:

    • Transactional inbox and outbox message storage
    • Scheduled message persistence
    • Saga storage
    • Dead letter message storage and management

    And all of those features are supported for Marten, EF Core with either PostgreSQL or SQL Server, and RavenDb.

    Back to the “modular monolith” approach: what I’m seeing folks do, or want to do, is some combination of:

    • Use multiple EF Core DbContext types that target the same database, but maybe with different schemas
    • Use Marten’s “ancillary or separated store” feature to divide the storage up for different modules against the same database

    Wolverine 3 and 4 supported the previous two bullet points, but Wolverine 5 can now support any combination of every possible option in the same process. That even includes the ability to:

    • Use multiple DbContext types that target completely different databases altogether
    • Mix and match with Marten ancillary stores that target completely different databases
    • Use RavenDb for some modules, even if others use PostgreSQL or SQL Server
    • Utilize either Marten’s built in multi-tenancy through a database per tenant or Wolverine’s managed EF Core multi-tenancy through a database per tenant

    And now you can do all of that in one process while still getting Wolverine’s transactional inbox, outbox, scheduled messages, and saga support for every single database that the application utilizes. And oh, yeah, from the perspective of the future CritterWatch, you’ll be able to use Wolverine’s dead letter management services against every possible database in the service.

    Okay, this is the point where I do have to admit that the RavenDb support for the dead letter administration is lagging a little bit, but we’ll get that hole filled in soon.

    Here’s an example from the tests:

            var builder = Host.CreateApplicationBuilder();
            var sqlserver1 = builder.Configuration.GetConnectionString("sqlserver1");
            var sqlserver2 = builder.Configuration.GetConnectionString("sqlserver2");
            var postgresql = builder.Configuration.GetConnectionString("postgresql");
    
            builder.UseWolverine(opts =>
            {
                // This helps Wolverine "know" how to share inbox/outbox
                // storage across logical module databases where they're
                // sharing the same physical database but with different schemas
                opts.Durability.MessageStorageSchemaName = "wolverine";
    
                // This will be the "main" store that Wolverine will use
                // for node storage
                opts.Services.AddMarten(m =>
                {
                    m.Connection(postgresql);
                }).IntegrateWithWolverine();
    
                // "An" EF Core module using Wolverine based inbox/outbox storage
                opts.UseEntityFrameworkCoreTransactions();
                opts.Services.AddDbContextWithWolverineIntegration<SampleDbContext>(x => x.UseSqlServer(sqlserver1));
                
                // This is helping Wolverine out by telling it what database to use for inbox/outbox integration
                // when using this DbContext type in handlers or HTTP endpoints
                opts.PersistMessagesWithSqlServer(sqlserver1, role:MessageStoreRole.Ancillary).Enroll<SampleDbContext>();
                
                // Another EF Core module
                opts.Services.AddDbContextWithWolverineIntegration<ItemsDbContext>(x => x.UseSqlServer(sqlserver2));
                opts.PersistMessagesWithSqlServer(sqlserver2, role:MessageStoreRole.Ancillary).Enroll<ItemsDbContext>();
    
                // Yet another Marten backed module
                opts.Services.AddMartenStore<IFirstStore>(m =>
                {
                    m.Connection(postgresql);
                    m.DatabaseSchemaName = "first";
                });
            });
    

    I’m certainly not saying that you *should* run out and build a system that has that many different persistence options in a single deployable service, but now you *can* with Wolverine. And folks have definitely wanted to build Wolverine systems that target multiple databases for different modules and still get every bit of Wolverine functionality for each database.

    Summary

    Part of the Wolverine 5.0 work was also Jeffry Gonzalez and I pushing on JasperFx’s forthcoming “CritterWatch” tool and looking for any kind of breaking changes in Wolverine’s public internals that might be necessary to support CritterWatch. The “let’s let you use all the database options at one time!” improvements I tried to show in this post were suggested by the work we are doing for dead letter message management in CritterWatch.

    I shudder to think how creative folks are going to be with this mix and match ability, but it’s cool to have some bragging rights over these capabilities because I don’t think that any other .NET tool can match this.

    Using SignalR with Wolverine 5.0

    The Wolverine 5.0 release earlier last week (finally) added a long requested SignalR transport.

    The SignalR library from Microsoft was already easy enough to use from Wolverine for simplistic WebSockets or Server-Sent Events usage, but what if you want a server side application to exchange any number of different messages between a browser (or another WebSocket client, because that’s actually possible) and your server side code in a systematic way? To that end, Wolverine now supports a first class messaging transport for SignalR. To get started, just add a NuGet reference to the WolverineFx.SignalR library:

    dotnet add package WolverineFx.SignalR
    

    There’s a very small sample application called WolverineChat in the Wolverine codebase that just adapts Microsoft’s own little sample application to show you how to use Wolverine.SignalR from end to end in a tiny ASP.NET Core + Razor + Wolverine application. The server side bootstrapping is, at a minimum, this section from the Wolverine configuration within your Program file:

    builder.UseWolverine(opts =>
    {
        // This is the only single line of code necessary
        // to wire SignalR services into Wolverine itself
        // This does also call IServiceCollection.AddSignalR()
        // to register DI services for SignalR as well
        opts.UseSignalR(o =>
        {
            // Optionally configure the SignalR HubOptions
            // for the WolverineHub
            o.ClientTimeoutInterval = 10.Seconds();
        });
        
        // Using explicit routing to send specific
        // messages to SignalR. This isn't required
        opts.Publish(x =>
        {
            // WolverineChatWebSocketMessage is a marker interface
            // for messages within this sample application that
            // is simply a convenience for message routing
            x.MessagesImplementing<WolverineChatWebSocketMessage>();
            x.ToSignalR();
        });
    });
    

    And a little bit further down, here’s where you configure your ASP.NET Core execution pipeline:

    // This line puts the SignalR hub for Wolverine at the 
    // designated route for your clients
    app.MapWolverineSignalRHub("/api/messages");
    

    On the client side, here’s a crude usage of the SignalR messaging support in raw JavaScript:

    // Receiving messages from the server. "connection" here is assumed to
    // be a HubConnection built with:
    //     new signalR.HubConnectionBuilder().withUrl("/api/messages").build();
    connection.on("ReceiveMessage", function (json) {
        // Note that you will need to deserialize the raw JSON
        // string
        const message = JSON.parse(json);
    
        // The client code will need to effectively do a logical
        // switch on the message.type. The "real" message is 
        // the data element
        if (message.type == 'ping'){
            console.log("Got ping " + message.data.number);
        }
        else{
            const li = document.createElement("li");
            document.getElementById("messagesList").appendChild(li);
            li.textContent = `${message.data.user} says ${message.data.text}`;
        }
    });
    

    and this code to send a message to the server:

    document.getElementById("sendButton").addEventListener("click", function (event) {
        const user = document.getElementById("userInput").value;
        const text = document.getElementById("messageInput").value;
    
        // Remember that we need to wrap the raw message in this slim
        // CloudEvents wrapper
        const message = {type: 'chat_message', data: {'text': text, 'user': user}};
    
        // The WolverineHub method to call is ReceiveMessage with a single argument
        // for the raw JSON
        connection.invoke("ReceiveMessage", JSON.stringify(message)).catch(function (err) {
            return console.error(err.toString());
        });
        event.preventDefault();
    });
    

    I should note here that we’re utilizing Wolverine’s new CloudEvents support for the SignalR messaging to Wolverine, but in this case the only elements that are required are data and type. So if you had a message like this:

    public record ChatMessage(string User, string Text) : WolverineChatWebSocketMessage;
    

    Your JSON envelope that is sent from the server to the client through the new SignalR transport would be like this:

    { "type": "chat_message", "data": { "user": "Hank", "text": "Hey" } }

    For web socket message types that are marked with the new WebSocketMessage interface, Wolverine uses kebab casing of the type name as Wolverine’s own message type name alias, on the theory that this naming style is more or less common in the JavaScript world.
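
    And for completeness, the server side of that exchange can be a perfectly ordinary Wolverine handler. This is only a sketch rather than the actual WolverineChat sample code, and ChatMessagePosted is a hypothetical outgoing message that also implements the WolverineChatWebSocketMessage marker:

    public static class ChatMessageHandler
    {
        // ChatMessage arrives from the browser through the WolverineHub.
        // Because ChatMessagePosted is assumed to implement the marker
        // interface routed to SignalR above, the cascading return value
        // goes straight back out to the connected clients
        public static ChatMessagePosted Handle(ChatMessage message)
            => new(message.User, message.Text);
    }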

    I should also say that a first class SignalR messaging transport for Wolverine has been frequently requested over the years, but I didn’t feel confident building anything until we had more concrete use cases with CritterWatch. Speaking of that…

    How we’re using this in CritterWatch

    The very first question we got about this feature was more or less “why would I care about this?” To answer that, let me talk just a little bit about the ongoing development with JasperFx Software’s forthcoming “CritterWatch” tool:

    CritterWatch is going to involve a lot of asynchronous messaging and processing between the web browser client, the CritterWatch web server application, and the Critter Stack (Wolverine and/or Marten in this case) systems that CritterWatch is monitoring and administering. The major point here is that we need to issue about three dozen different command messages from the browser to CritterWatch that kick off long running asynchronous processes, which trigger workflows in other Critter Stack systems, which eventually lead to CritterWatch sending messages all the way back to the web browser clients.

    The new SignalR transport also provides mechanisms to get the eventual responses back to the original Web Socket connection that triggered the workflow and several mechanisms for working with SignalR connection groups as well.

    Using web sockets gives us one single mechanism to issue commands from the client to the CritterWatch service, where the command messages are handled as you’d expect by Wolverine message handlers with all the prerequisite middleware, tracing, and error handling you normally get from Wolverine as well as quick access to any service in your server’s IoC container. Likewise, we can “just” publish from our server to the client through cascading messages or IMessageBus.PublishAsync() without any regard for whether or not that message is being routed through SignalR or any other message transport that Wolverine supports.
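
    To make that concrete, here’s a hedged sketch of what one of those handlers might look like, where PauseAgent and AgentPaused are hypothetical command and response messages, with AgentPaused assumed to implement a marker interface routed to SignalR like the examples in this post:

    public static class PauseAgentHandler
    {
        // PauseAgent arrives from the browser through the SignalR transport
        // and runs through Wolverine's normal middleware, tracing, and error
        // handling. The cascading AgentPaused return value is routed back out
        // to SignalR by the marker interface routing shown elsewhere
        public static AgentPaused Handle(PauseAgent command)
        {
            // ... do the actual work of pausing the agent here ...
            return new AgentPaused(command.AgentId);
        }
    }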

    Web Socket Publishing from Asynchronous Marten Projection Updates

    It’s been relatively common in the past year for me to talk through utilizing SignalR and WebSockets (or Server-Sent Events) to broadcast updates from asynchronously running Marten projections.

    Let’s say that you have an application using event sourcing with Marten and you use the Wolverine integration with Marten like this bit from the CritterWatch codebase:

        opts.Services.AddMarten(m =>
            {
                // Other stuff..
    
                m.Projections.Add<CritterServiceProjection>(ProjectionLifecycle.Async);
            })
            // This is the key part, just calling IntegrateWithWolverine() adds quite a few 
            // things to Marten including the ability to use Wolverine messaging from within
            // Marten RaiseSideEffects() methods
            .IntegrateWithWolverine(w =>
            {
                w.UseWolverineManagedEventSubscriptionDistribution = true;
            });
    

    We have this little message to communicate to the client when configuration changes are detected on the server side:

        // The marker interface is just a helper for message routing
        public record CritterServiceUpdated(CritterService Service) : ICritterStackWebSocketMessage;
    

    And this little bit of routing in Wolverine:

    opts.Publish(x =>
    {
        x.MessagesImplementing<ICritterStackWebSocketMessage>();
        x.ToSignalR();
    });

    And we have a single stream projection in CritterWatch like this:

    public class CritterServiceProjection 
        : SingleStreamProjection<CritterService, string>
    

    And finally, we can use the RaiseSideEffects() hook that exists in Marten’s SingleStreamProjection/MultiStreamProjection types to run some code every time an aggregated projection is updated:

        public override ValueTask RaiseSideEffects(IDocumentOperations operations, IEventSlice<CritterService> slice)
        {
            // This is the latest version of CritterService
            var latest = slice.Snapshot;
            
            // CritterServiceUpdated will be routed to SignalR,
            // so this is de facto updating all connected browser
            // clients at runtime
            slice.PublishMessage(new CritterServiceUpdated(latest!));
            
            return ValueTask.CompletedTask;
        }
    

    And after admittedly a little bit of wiring, we’re at a point where we can happily send messages from asynchronous Marten projections through to Wolverine and on to SignalR (or any other Wolverine messaging mechanism too of course) in a reliable way.

    Summary

    I don’t think that this new transport is necessary for simpler usages of SignalR, but it could be hugely advantageous for systems where there’s a multitude of logical messaging back and forth between the web browser clients and the backend.