Critter Stack Roadmap Update for February

The last time I wrote about the Critter Stack / JasperFx roadmap, I was admittedly feeling a little conservative about big new releases and really just focused on stabilization. In the past week though, the rest of the Critter Stack Core Team decided it was time to get going on the next round of releases for what will be Marten 8.0 and Wolverine 4.0, so let’s get into the details.

Definitely in Scope:

  • Upgrade Marten (and Weasel/Wolverine) to Npgsql 9.0
  • Drop .NET 6/7 support in Marten and .NET 7 support in Wolverine. Both will have targets for .NET 8/9
  • Consolidation of supporting libraries. Today’s JasperFx.Core, JasperFx.CodeGeneration, and Oakton are being combined into a new library called JasperFx. That’s partially to simplify setup by reducing the number of dotnet add ... calls you need to make, but also to potentially streamline configuration that’s today duplicated between Marten & Wolverine.
  • Drop the synchronous APIs that are already marked as [Obsolete] in Marten’s API surface
  • “Stream Compacting” in Marten/Wolverine/CritterWatch. This feature is being done in partnership with a JasperFx client
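If the library consolidation lands as described, project setup could shrink from several package installs to one. Here’s a hypothetical before and after — the standalone package names reflect today’s libraries and the combined name comes from the plan above, but treat the exact commands as an assumption:

```shell
# Today: several separate supporting packages
dotnet add package JasperFx.Core
dotnet add package JasperFx.CodeGeneration
dotnet add package Oakton

# After the consolidation: one combined package
dotnet add package JasperFx
```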

In addition to that work, JasperFx Software is working hard on the forthcoming “Critter Watch” tooling, which will be a management and monitoring console application for Wolverine and Marten, so there’s also a bit of work to support Critter Watch through improvements to instrumentation and additional APIs that will land in Wolverine or Marten proper.

I’ll write much more about Critter Watch soon. Right now the MVP looks to be:

  1. A dead letter message explorer and management tool for Wolverine
  2. A view of your Critter Watch application configuration, which will be able to span multiple applications to better understand how messages flow throughout your greater ecosystem of services
  3. Viewing and managing asynchronous projections in Marten, which should include performance information, a dashboard explaining what projections or subscriptions are running, and the ability to trigger projection rebuilds, rewind subscriptions, and to pause/restart projections at runtime
  4. Displaying performance metrics about your Wolverine / Marten application by integrating with your OpenTelemetry (OTel) tooling (we’re initially thinking about PromQL integration here).

Maybe in Scope???

It may be that we go for a quick and relatively low impact Marten 8 / Wolverine 4 release, but here are the things we are considering for this round of releases and would love any feedback or requests you might have:

  • Overhaul the Marten projection support, with a particular emphasis on simplifying multi-stream projections. The core team & I did quite a bit of work on that in the 4th quarter of last year in the first attempt at Marten 8, and that work might feed into this effort as well. Part of the goal is to make it as easy as possible to use purely explicit code for projections as a ready alternative to the conventional Apply/Create method conventions. There’s an existing conversation in this issue.
  • Multi-tenancy support for EF Core with Wolverine commensurate with the existing Marten + Wolverine + multi-tenancy support. I really want to be expanding the Wolverine user base this year, and better EF Core support feels like a way to help achieve that.
  • Revisit the async daemon and add support for dependencies between asynchronous projections and/or the ability to “lock” the execution of 2 or more projections together. That’s 100% about scalability and throughput for folks who have particularly nasty complicated multi-stream projections. This would also hopefully be in partnership with a JasperFx client.
  • Revisiting the event serialization in Marten and its ability to support “downcasters” or “upcasters” for event versioning. There is an opportunity to ratchet up performance by moving to higher performance serializers like MessagePack or MemoryPack for the event serialization. You’d have to make that an opt-in model, probably support side-by-side JSON and whatever other serialization, and make sure folks know that opting for the better performance means losing LINQ querying support for Marten events.
  • Potentially risky time sink: pull quite a bit of the event store support code in Marten today into a new shared library (like the IEvent model and maybe quite a bit of the projection subsystem) where that code could be shared between Marten and the long planned Sql Server-backed event store. And maybe even a CosmosDb integration.
  • Some improvements to Wolverine specifically for modular monolith usage discussed in more depth in the next section.

Wolverine 4 and Modular Monoliths

This is all related to this issue in the Wolverine backlog about mixing and matching databases in the same application. So, the modular monolith thing in Wolverine? It’s admittedly taken some serious work in the past 3-4 months to make Wolverine work the way the creative folks pushing the modular monolith concept have needed.

I think we’re in good shape with Wolverine message handler discovery and routing for modular monoliths, but there are some challenges around database integration, the transactional inbox/outbox support, and transactional middleware within a single application that’s potentially talking to multiple databases from a single process. Things get more complicated still when you throw in the possibility of using multi-tenancy through separate databases.

Wolverine already does fine with an architecture like the one below where you might have separate logical “modules” in your system that generally work against the same database, but using separate database schemas for the isolation:

Where Wolverine doesn’t yet go (and I’m also not aware of any other .NET tooling that actually solves this) is the case where separate modules may be talking to completely separate physical databases as shown below:

The work I’m doing right now with “Critter Watch” touches on Wolverine’s message storage, so it’s somewhat convenient to try to improve Wolverine’s ability to allow you to mix and match different databases and even different database engines from one Wolverine application as part of this release.

Wolverine for MediatR Users

I happened to see this post from Milan Jovanović today about a little backlash to the MediatR library. For my part, I think MediatR is just a victim of its own success, and any backlash is mostly due to folks misusing it in unnecessarily complicated ways (that’s been my experience). That aside, yes, I absolutely feel that Wolverine is a much stronger toolset that covers a much broader set of use cases while doing a lot more than MediatR to simplify your application code and promote testability, so here goes.

This is taken from the Wolverine for MediatR users guide in the Wolverine documentation.

MediatR is an extraordinarily successful OSS project in the .NET ecosystem, but it’s a very limited tool and the Wolverine team frequently fields questions from folks converting to Wolverine from MediatR. Offhand, the common reasons to do so are:

  1. Wolverine has built in support for the transactional outbox, even for its in memory, local queues
  2. Many people are using MediatR alongside a separate asynchronous messaging framework like MassTransit or NServiceBus, while Wolverine handles the same use cases as MediatR plus asynchronous messaging with one single set of rules for message handlers
  3. Wolverine’s programming model can easily result in significantly less application code than the same functionality would with MediatR
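The first point deserves a quick illustration. The essence of a transactional outbox is that the state change and the outgoing message are committed or rolled back together, and a separate relay later sends the stored messages. Below is a deliberately tiny in-memory sketch of just that atomicity guarantee; all names are invented for illustration, and this is nothing like Wolverine’s actual implementation:

```csharp
using System;
using System.Collections.Generic;

public class ToyOutbox
{
    public List<string> Orders { get; } = new();
    public List<string> OutgoingMessages { get; } = new();

    // Apply the state change and enqueue the message atomically:
    // if the rest of the "transaction" throws, neither write is kept
    public void PlaceOrder(string orderId, Action restOfTransaction)
    {
        var ordersBefore = Orders.Count;
        var messagesBefore = OutgoingMessages.Count;
        try
        {
            Orders.Add(orderId);
            OutgoingMessages.Add($"OrderPlaced:{orderId}");
            restOfTransaction();
        }
        catch
        {
            // roll back both writes together
            Orders.RemoveRange(ordersBefore, Orders.Count - ordersBefore);
            OutgoingMessages.RemoveRange(messagesBefore, OutgoingMessages.Count - messagesBefore);
            throw;
        }
    }
}
```

A real outbox does this with database transactions, of course; the point is simply that the message never escapes unless the business data commits with it.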

It’s important to note that Wolverine allows for a completely different coding model than MediatR or other “IHandler of T” application frameworks in .NET. While you can use Wolverine as a near exact drop-in replacement for MediatR, that’s not taking full advantage of Wolverine’s capabilities.

Handlers

MediatR is an example of what I call an “IHandler of T” framework, just meaning that the primary way to plug into the framework is by implementing an interface signature from the framework like this simple example in MediatR:

public class Ping : IRequest<Pong>
{
    public string Message { get; set; }
}

public class PingHandler : IRequestHandler<Ping, Pong> 
{
    private readonly TextWriter _writer;

    public PingHandler(TextWriter writer)
    {
        _writer = writer;
    }

    public async Task<Pong> Handle(Ping request, CancellationToken cancellationToken)
    {
        await _writer.WriteLineAsync($"--- Handled Ping: {request.Message}");
        return new Pong { Message = request.Message + " Pong" };
    }
}

Now, if you assume that TextWriter is a registered service in your application’s IoC container, Wolverine could easily run the exact class above as a Wolverine handler. While most Hollywood Principle application frameworks usually require you to implement some kind of adapter interface, Wolverine instead wraps around your code, with this being a perfectly acceptable handler implementation to Wolverine:

// No marker interface necessary, and records work well for this kind of little data structure
public record Ping(string Message);
public record Pong(string Message);

// It is legal to implement more than one message handler in the same class
public static class PingHandler
{
    public static Pong Handle(Ping command, TextWriter writer)
    {
        writer.WriteLine($"--- Handled Ping: {command.Message}");
        return new Pong(command.Message);
    }
}

So you might notice a couple of things that are different right away:

  • While Wolverine is perfectly capable of using constructor injection for your handlers and class instances, you can eschew all that ceremony and use static methods for just a wee bit fewer object allocations
  • Like MVC Core and Minimal API, Wolverine supports “method injection” such that you can pass in IoC registered services directly as arguments to the handler methods for a wee bit less ceremony
  • There are no required interfaces on either the message type or the handler type
  • Wolverine discovers message handlers through naming conventions (or you can also use marker interfaces or attributes if you have to)
  • You can use synchronous methods for your handlers when that’s valuable so you don’t have to scatter return Task.CompletedTask; all over your code
  • Moreover, Wolverine’s best practice as much as possible is to use pure functions for the message handlers for the absolute best testability
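To make the convention-based discovery idea concrete, here’s a minimal reflection sketch of the general mechanism: find a static `Handle` method on a type whose name ends in `Handler` and whose parameter type matches the message. This is purely illustrative and is not Wolverine’s implementation — Wolverine generates and compiles code up front rather than reflecting per message:

```csharp
using System;
using System.Linq;
using System.Reflection;

public record Ping(string Message);
public record Pong(string Message);

// No framework interfaces and no base class -- just a naming convention
public static class PingHandler
{
    public static Pong Handle(Ping command) => new Pong(command.Message + " Pong");
}

public static class ConventionBus
{
    // Locate a static Handle(T) method on a *Handler type whose single
    // parameter matches the runtime type of the message, then invoke it
    public static object? Invoke(object message)
    {
        var method = typeof(ConventionBus).Assembly.GetTypes()
            .Where(t => t.Name.EndsWith("Handler"))
            .SelectMany(t => t.GetMethods(BindingFlags.Public | BindingFlags.Static))
            .FirstOrDefault(m => m.Name == "Handle"
                                 && m.GetParameters().Length == 1
                                 && m.GetParameters()[0].ParameterType == message.GetType());
        return method?.Invoke(null, new object[] { message });
    }
}
```

The takeaway is that the convention lives entirely in the framework, leaving your handler as a plain static method with no framework coupling at all.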

There are more differences though. At a minimum, you probably want to look at Wolverine’s compound handler capability as a way to build more complex handlers.

Wolverine was built with the express goal of allowing you to write very low ceremony code. To that end we try to minimize the usage of adapter interfaces, mandatory base classes, or attributes in your code.

Built in Error Handling

Wolverine’s IMessageBus.InvokeAsync() is the direct equivalent to MediatR’s IMediator.Send(), but the Wolverine usage also builds in support for some of Wolverine’s error handling policies so that you can build in selective retries.
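As a toy illustration of what “selective retries” means (this is emphatically not Wolverine’s error handling API, just the underlying idea): retry only when the thrown exception type has a declared retry budget, and rethrow once that budget is exhausted.

```csharp
using System;
using System.Collections.Generic;

public static class SelectiveRetry
{
    // Retry the action only for exception types with a declared budget;
    // any other exception type escapes immediately
    public static T Invoke<T>(Func<T> action, Dictionary<Type, int> retryBudget)
    {
        var attempts = new Dictionary<Type, int>();
        while (true)
        {
            try
            {
                return action();
            }
            catch (Exception ex) when (retryBudget.ContainsKey(ex.GetType()))
            {
                var type = ex.GetType();
                attempts[type] = attempts.TryGetValue(type, out var n) ? n + 1 : 1;
                if (attempts[type] > retryBudget[type]) throw;
            }
        }
    }
}
```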

MediatR’s INotificationHandler

Point blank, you should not be using MediatR’s INotificationHandler for any kind of background work that needs a true delivery guarantee (i.e., the notification will get processed even if the process fails unexpectedly). This has consistently been one of the very first things I tell JasperFx customers when I start working with any codebase that uses MediatR.

MediatR’s INotificationHandler concept is strictly fire and forget, which is just not suitable if you need delivery guarantees for that work. Wolverine, on the other hand, supports both a “fire and forget” mode (Buffered in Wolverine parlance) and a durable, transactional inbox/outbox approach with its in memory, local queues such that work will not be lost in the case of errors. Moreover, using the Wolverine local queues allows you to take advantage of Wolverine’s error handling capabilities for a much more resilient system than you’ll achieve with MediatR.

The equivalent of INotificationHandler in Wolverine is just a message handler. You can publish messages at any time through the IMessageBus.PublishAsync() API, but if you just need to publish additional messages (commands or events; to Wolverine it’s all just a message), you can utilize Wolverine’s cascading message support as a way of building more testable handler methods.
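To make the cascading message idea concrete, here’s a hedged sketch of the concept (the types and names are invented for illustration, and Wolverine’s real cascading message support is considerably richer): the handler simply returns the follow-on message instead of calling a bus, so a unit test is just a function call and an assertion.

```csharp
using System;

public record OrderPlaced(string OrderId);
public record SendConfirmationEmail(string OrderId);

public static class OrderPlacedHandler
{
    // A pure function: the "cascading message" is just the return value,
    // so unit tests need no mocks and no bus at all. The framework, not
    // your code, is responsible for actually publishing the return value.
    public static SendConfirmationEmail Handle(OrderPlaced @event)
        => new SendConfirmationEmail(@event.OrderId);
}
```

In a test you assert directly on the returned message, which is exactly why cascading messages promote testability.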

MediatR IPipelineBehavior to Wolverine Middleware

MediatR uses its IPipelineBehavior model as a “Russian Doll” model for handling cross cutting concerns across handlers. Wolverine has its own mechanism for cross cutting concerns with its middleware capabilities, which are far more capable and potentially much more efficient at runtime than the nested doll approach that MediatR (and MassTransit, for that matter) takes in its pipeline behavior model.
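For readers who haven’t seen the pattern spelled out, the “Russian Doll” model is just nested delegates: every registered behavior wraps the next one, so each behavior runs on every request and adds frames to the call stack. Here’s a self-contained toy version of that composition (not MediatR’s actual code):

```csharp
using System;
using System.Collections.Generic;

public delegate string Next();

public static class RussianDoll
{
    // Compose behaviors inside-out so the first registered behavior
    // becomes the outermost wrapper around the real handler
    public static string Execute(List<Func<Next, string>> behaviors, Func<string> handler)
    {
        Next pipeline = () => handler();
        for (var i = behaviors.Count - 1; i >= 0; i--)
        {
            var behavior = behaviors[i];
            var next = pipeline; // capture the current tail of the chain
            pipeline = () => behavior(next);
        }
        return pipeline();
    }
}
```

Every call walks the whole chain of delegates whether or not a given behavior has anything useful to do for that message, which is exactly the runtime cost (and stack trace noise) that code generation can avoid.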

The Fluent Validation example is just about the most complicated middleware solution in Wolverine, but you can expect that most custom middleware that you’d write in your own application would be much simpler.

Let’s just jump into an example. With MediatR, you might try to use a pipeline behavior to apply Fluent Validation to any handlers where there are Fluent Validation validators for the message type like this sample:

    public class ValidationBehaviour<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse> where TRequest : IRequest<TResponse>
    {
        private readonly IEnumerable<IValidator<TRequest>> _validators;
        public ValidationBehaviour(IEnumerable<IValidator<TRequest>> validators)
        {
            _validators = validators;
        }
        public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next)
        {
            if (_validators.Any())
            {
                var context = new ValidationContext<TRequest>(request);
                var validationResults = await Task.WhenAll(_validators.Select(v => v.ValidateAsync(context, cancellationToken)));
                var failures = validationResults.SelectMany(r => r.Errors).Where(f => f != null).ToList();
                if (failures.Count != 0)
                    throw new ValidationException(failures);
            }
            return await next();
        }
    }

It’s cheating a little bit, because Wolverine has both an add on for incorporating Fluent Validation middleware for message handlers and a separate one for HTTP usage that relies on the ProblemDetails specification for relaying validation errors. Let’s still dive into how that works, just to see how Wolverine really differs, and why we think those differences matter both for performance and for keeping exception stack traces cleaner (don’t laugh, we really did design Wolverine quite purposely to avoid the really nasty kind of Exception stack traces you get from many other middleware or “behavior” using frameworks).

Let’s say that you have a Wolverine.HTTP endpoint like so:

public record CreateCustomer
(
    string FirstName,
    string LastName,
    string PostalCode
)
{
    public class CreateCustomerValidator : AbstractValidator<CreateCustomer>
    {
        public CreateCustomerValidator()
        {
            RuleFor(x => x.FirstName).NotNull();
            RuleFor(x => x.LastName).NotNull();
            RuleFor(x => x.PostalCode).NotNull();
        }
    }
}

public static class CreateCustomerEndpoint
{
    [WolverinePost("/validate/customer")]
    public static string Post(CreateCustomer customer)
    {
        return "Got a new customer";
    }
}

In the application bootstrapping, I’ve added this option:

app.MapWolverineEndpoints(opts =>
{
    // more configuration for HTTP...

    // Opting into the Fluent Validation middleware from
    // Wolverine.Http.FluentValidation
    opts.UseFluentValidationProblemDetailMiddleware();
});

Just like with MediatR, you would need to register the Fluent Validation validator types in your IoC container as part of application bootstrapping. Now, here’s how Wolverine’s model is very different from MediatR’s pipeline behaviors. While MediatR is applying that ValidationBehaviour to each and every message handler in your application whether or not that message type actually has any registered validators, Wolverine is able to peek into the IoC configuration and “know” whether there are registered validators for any given message type. If there are any registered validators, Wolverine will utilize them in the code it generates to execute the HTTP endpoint method shown above for creating a customer. If there is only one validator, and that validator is registered as a Singleton scope in the IoC container, Wolverine generates this code:

    public class POST_validate_customer : Wolverine.Http.HttpHandler
    {
        private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
        private readonly Wolverine.Http.FluentValidation.IProblemDetailSource<WolverineWebApi.Validation.CreateCustomer> _problemDetailSource;
        private readonly FluentValidation.IValidator<WolverineWebApi.Validation.CreateCustomer> _validator;

        public POST_validate_customer(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Http.FluentValidation.IProblemDetailSource<WolverineWebApi.Validation.CreateCustomer> problemDetailSource, FluentValidation.IValidator<WolverineWebApi.Validation.CreateCustomer> validator) : base(wolverineHttpOptions)
        {
            _wolverineHttpOptions = wolverineHttpOptions;
            _problemDetailSource = problemDetailSource;
            _validator = validator;
        }



        public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
        {
            // Reading the request body via JSON deserialization
            var (customer, jsonContinue) = await ReadJsonAsync<WolverineWebApi.Validation.CreateCustomer>(httpContext);
            if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;
            
            // Execute FluentValidation validators
            var result1 = await Wolverine.Http.FluentValidation.Internals.FluentValidationHttpExecutor.ExecuteOne<WolverineWebApi.Validation.CreateCustomer>(_validator, _problemDetailSource, customer).ConfigureAwait(false);

            // Evaluate whether or not the execution should be stopped based on the IResult value
            if (result1 != null && !(result1 is Wolverine.Http.WolverineContinue))
            {
                await result1.ExecuteAsync(httpContext).ConfigureAwait(false);
                return;
            }


            
            // The actual HTTP request handler execution
            var result_of_Post = WolverineWebApi.Validation.ValidatedEndpoint.Post(customer);

            await WriteString(httpContext, result_of_Post);
        }

    }

The point here is that Wolverine is trying to generate the most efficient code possible based on what it can glean from the IoC container registrations and the signatures of the HTTP endpoint or message handler methods. The MediatR model effectively has to use wrappers and conditional logic at runtime.

Do note that Wolverine has built in middleware for logging, validation, and transactions out of the box. Most of the custom middleware that folks are building for Wolverine is much simpler than the validation middleware I talked about in this guide.

Vertical Slice Architecture

MediatR is almost synonymous with the “Vertical Slice Architecture” (VSA) approach in .NET circles, but Wolverine arguably enables a much lower ceremony version of VSA. The typical approach you’ll see is folks delegating to MediatR commands or queries from an MVC Core Controller like this (stolen from this blog post):

public class AddToCartRequest : IRequest<Result>
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

public class AddToCartHandler : IRequestHandler<AddToCartRequest, Result>
{
    private readonly ICartService _cartService;

    public AddToCartHandler(ICartService cartService)
    {
        _cartService = cartService;
    }

    public async Task<Result> Handle(AddToCartRequest request, CancellationToken cancellationToken)
    {
        // Logic to add the product to the cart using the cart service
        bool addToCartResult = await _cartService.AddToCart(request.ProductId, request.Quantity);

        bool isAddToCartSuccessful = addToCartResult; // Check if adding the product to the cart was successful.
        return Result.SuccessIf(isAddToCartSuccessful, "Failed to add the product to the cart."); // Return failure if adding to cart fails.
    }
}

public class CartController : ControllerBase
{
    private readonly IMediator _mediator;

    public CartController(IMediator mediator)
    {
        _mediator = mediator;
    }

    [HttpPost]
    public async Task<IActionResult> AddToCart([FromBody] AddToCartRequest request)
    {
        var result = await _mediator.Send(request);

        if (result.IsSuccess)
        {
            return Ok("Product added to the cart successfully.");
        }
        else
        {
            return BadRequest(result.ErrorMessage);
        }
    }
}

While the introduction of MediatR probably is a valid way to sidestep the common code bloat from MVC Core Controllers, with Wolverine we’d recommend just using the Wolverine.HTTP mechanism for writing HTTP endpoints in a much lower ceremony way and ditching the “mediator” step altogether. Moreover, we’d even go so far as to drop repository and domain service layers and just put the functionality right into an HTTP endpoint method if that code isn’t going to be reused anywhere else in your application.

See Automatically Loading Entities to Method Parameters for some context around that [Entity] attribute usage.

So something like this:

public static class AddToCartRequestEndpoint
{
    // Remember, we can do validation in middleware, or
    // even do a custom Validate() : ProblemDetails method
    // to act as a filter so the main method is the happy path
    
    [WolverinePost("/api/cart/add")]
    public static Update<Cart> Post(
        AddToCartRequest request, 
        
        [Entity] Cart cart)
    {
        return cart.TryAddRequest(request) ? Storage.Update(cart) : Storage.Nothing(cart);
    }
}

We of course believe that Wolverine is more optimized for Vertical Slice Architecture than MediatR or any other “mediator” tool by how Wolverine can reduce the number of moving parts, layers, and code ceremony.

IoC Usage

Just know that Wolverine has a very different relationship with your application’s IoC container than MediatR. Wolverine’s philosophy all along has been to keep the usage of IoC service location at runtime to a bare minimum. Instead, Wolverine wants to mostly use the IoC tool as a service registration model at bootstrapping time.
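To illustrate that distinction with a toy sketch (invented names, not Wolverine’s internals): runtime service location pulls dependencies from the container on every message, while registration-time planning resolves what it can once at bootstrap and bakes the result into the execution plan.

```csharp
using System;
using System.Collections.Generic;

public interface IGreeter { string Greet(string name); }
public class Greeter : IGreeter { public string Greet(string name) => $"Hello, {name}"; }

public static class ContainerStyles
{
    // Runtime service location: hit the "container" on every message
    public static string HandleWithServiceLocation(
        Dictionary<Type, Func<object>> container, string name)
    {
        var greeter = (IGreeter)container[typeof(IGreeter)]();
        return greeter.Greet(name);
    }

    // Registration-time planning: resolve once at bootstrap, then the
    // returned "handler plan" calls the captured dependency directly
    public static Func<string, string> BuildHandlerPlan(
        Dictionary<Type, Func<object>> container)
    {
        var greeter = (IGreeter)container[typeof(IGreeter)]();
        return name => greeter.Greet(name);
    }
}
```

Both produce the same result; the difference is that the second style pays the container lookup cost once rather than per message, which is the philosophy described above.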

Summary

Wolverine has some overlap with MediatR, but it’s quite a different animal altogether, with a very different approach and far more functionality, like the integrated transactional inbox/outbox support that’s important for building resilient server side systems. The Wolverine.HTTP mechanism cuts down the number of code artifacts compared to MediatR + MVC Core or Minimal API. Moreover, the way that you write Wolverine handlers, its integration with persistence tooling, and its middleware strategies can do much more to simplify your application code compared to just about anything else in the .NET ecosystem.

And lastly, let me just admit that I would be thrilled beyond belief if Wolverine had 1/100 the usage that MediatR already has by the end of this year. When you see a lot of posts about “why X is better than Y!” (why Golang is better than JavaScript!) it’s a clear sign that the “Y” in question is already a hugely successful project and the “X” isn’t there yet.

Why the Critter Stack is Good

JasperFx Software already has a strong track record in our short life of helping our customers be more successful using Event Sourcing, Event Driven Architecture, and Test Automation. Much of the content from these new guides came directly out of our client work. We’re certainly ready to partner with your shop as well!

I’ve had a chance the past two weeks to really buckle down and write more tutorials and guides for Wolverine by itself and the full “Critter Stack” combination with Marten. I’ll admit to being a little disappointed by the download numbers on Wolverine right now, but all that really means is that there’s a lot of untapped potential for growth!

If you do any work on the server side with .NET, or are looking for a technical platform to use for event sourcing, event driven architecture, web services, or asynchronous messaging, Wolverine is going to help you build systems that are resilient, easy to change, and highly testable without having to incur the code complexity common to Clean/Onion/Hexagonal Architecture approaches.

Please don’t make a direct comparison of Wolverine to MediatR as a straightforward “Mediator” tool, or to MassTransit or NServiceBus as an Asynchronous Messaging framework, or to MVC Core as a straight up HTTP service framework. Wolverine does far more than any of those other tools to help you write your actual application code.

On to the new guides for Wolverine:

  • Converting from MediatR – We’re getting more and more questions from users who are coming from MediatR to Wolverine to take advantage of Wolverine capabilities like a transactional outbox that MediatR lacks. Going much further though, this guide tries to explain how to first shift to Wolverine, some important features that Wolverine provides that MediatR does not, and how to lean into Wolverine to make your code a lot simpler and easier to test.
  • Vertical Slice Architecture – Wolverine has quite a bit of “special sauce” that makes it a unique fit for “Vertical Slice Architecture” (VSA). We believe that Wolverine does more to make a VSA coding style effective than any other server side tooling in the .NET ecosystem. If you haven’t looked at Wolverine recently, you’ll want to check this out because Wolverine just got even more ways to simplify code and improve testability in vertical slices without having to resort to the kind of artifact bloat that’s nearly inevitable with prescriptive Clean/Onion Architecture approaches.
  • Modular Monolith Architecture – I’ll freely admit that Wolverine was originally optimized for micro-services, and we’ve had to scramble a bit in the recent 3.6.0 release and today’s 3.7.0 release to improve Wolverine’s support for how folks are wanting to do asynchronous workflows between modules in a modular monolith approach. In this guide we’ll talk about how best to use Wolverine for modular monolith architectures, dealing with eventual consistency, database tooling usage, and test automation.
  • CQRS and Event Sourcing with Marten – Marten is already the most robust and most commonly used toolset for Event Sourcing in the .NET ecosystem. Combined with Wolverine to form the full “Critter Stack,” we think it is one of the most productive toolsets for building resilient and scalable systems using CQRS with Event Sourcing, and this guide will show you how the Critter Stack gets that done. There’s also a big section on building integration test harnesses for the Critter Stack with some of its test support. There are some YouTube videos coming soon that cover this same ground using some of the same samples.
  • Railway Programming – Wolverine has some lightweight facilities for “Railway Programming” inside of message handlers or HTTP endpoints that can help code complex workflows with simpler individual steps — and do that without incurring loads of generics and custom “result” types. And for a bonus, this guide even shows you how Wolverine’s Railway Programming usage helps you generate OpenAPI metadata from type signatures without having to clutter up your code with noisy attributes to keep the ReST police off your back.

I personally need a break from writing documentation, but we’ll pop up soon with additional guides for:

  • Moving from NServiceBus or MassTransit to Wolverine
  • Interoperability with Wolverine

And on strictly the Marten side of things:

  • Complex workflows with Event Sourcing
  • Multi-Stream Projections

Critter Stack Roadmap for 2025

A belated Happy New Year’s to everybody!

The “Critter Stack” had a huge 2024, and I listed off some of the highlights of the improvements we made in Critter Stack Year in Review for 2024. For 2025, we’ve reordered our priorities from what I was writing last summer. I think we might genuinely focus more on sample applications, tutorials, and videos early this year than on coding new features.

There’s also a separate post on JasperFx Software in 2025. Please do remember that JasperFx Software is available for either ongoing support contracts for Marten and/or Wolverine and consulting engagements to help you wring the most possible value out of the tools — or to just help you with any old server side .NET architecture you have.

Marten

At this point, I believe that Marten is far and away the most robust and most productive tooling for Event Sourcing in the .NET ecosystem. Moreover, if you believe NuGet download numbers, it’s also the most heavily used Event Sourcing tooling in .NET. I think most of the potential growth for Marten this year will simply be a result of developers hopefully being more open to using Event Sourcing as that technique becomes better known. I don’t have hard numbers to back this up, but my feeling is that Marten’s main competitor is shops choosing to roll their own Event Sourcing frameworks in house rather than any other specific tool.

  • I think we’re putting off the planned Marten 8.0 release for now. Instead, we’ll mostly be focused on dealing with whatever issues come up from our users and JasperFx clients with Marten 7 for the time being.
  • Babu is working on adding a formal “Crypto Shredding” feature to Marten 7
  • More sample applications and matching tutorials for Marten
  • Possibly adding a “Marten Events to EF Core” projection model?
  • Formal support for PostgreSQL PostGIS spatial data? I don’t know what that means yet though
  • When we’re able to reconsider Marten 8 this year, that will include:
    • A reorganization of the JasperFx building blocks to remove duplication between Marten, Wolverine, and other tools
    • Streamlining the projection API
    • Yet more scalability and performance improvements to the async daemon. There’s some potential features that we’re discussing with JasperFx clients that might drive this work

After the insane pace of Marten changes we made last year, I see Marten development and the torrid pace of releases (hopefully) slowing quite a bit in 2025.

Wolverine

Wolverine doesn’t yet have anywhere near the usage of Marten and exists in a much more crowded tooling space to boot. I’m hopeful that we can greatly increase Wolverine usage in 2025 by further differentiating it from its competitor tools by focusing on how Wolverine allows teams to write backend systems with much lower ceremony code without sacrificing testability, robustness, or maintainability.

We’re shelving any thoughts about a Wolverine 4.0 release early this year, but that’s opened the floodgates for planned enhancements to Wolverine 3.*:

  • Wolverine 3.6 is heavily in flight for release this month, and will be a pretty large release bringing some needed improvements for Wolverine within “Modular Monolith” usage, yet more special sauce for lower ceremony “Vertical Slice Architecture” usage, enhancements to the “aggregate handler workflow” integration with Marten, and improved EF Core integration
  • Multi-Tenancy support for EF Core in line with what Wolverine can already do with its Marten integration
  • CosmosDb integration for Transactional Inbox/Outbox support, saga storage, transactional middleware
  • More options for runtime message routing
  • Authoring more sample applications to show off how Wolverine allows for a different coding model than other messaging or mediator or HTTP endpoint tools

I think there’s a lot of untapped potential for Wolverine, and I’ll personally be focused on growing its usage in the community this year. I’m hoping the better EF Core integration, having more database options, and maybe even yet more messaging options can help us grow.

I honestly don’t know what is going to happen with Wolverine & Aspire. Aspire doesn’t really play nicely with frameworks like Wolverine right now, and I think it would take custom Wolverine/Aspire adapter libraries to get a truly good experience. My strong preference right now is to just use Docker Compose for local development, but it’s Microsoft’s world and folks like me building OSS tools just have to live in it.

Ermine & Other New Critters

Sigh, “Ermine” is the code name for a long planned port of Marten’s event sourcing functionality to Sql Server. I would still love to see this happen in 2025, but it’s going to be pushed off for a little bit. With plenty of input from other Marten contributors, I’ve done some preliminary work to centralize much of Marten’s event sourcing internals into a potentially shared assembly.

We’ve also at least considered extending Marten’s style of event sourcing to other databases, with CosmosDb, RavenDb, DynamoDb, SQLite, and Oracle (people still use it apparently) being kicked around as options.

“Critter Watch”

This is really a JasperFx Software initiative to create a commercial tool that will be a dedicated management portal and performance monitoring tool (meant to be used in conjunction with Grafana/Prometheus/et al) for the “Critter Stack”. I’ll share concrete details of this when there are some, but Babu & I plan to be working in earnest on “Critter Watch” in the 1st quarter.

Note about Blogging

I’m planning to blog much less in the coming year and to focus instead on writing more robust tutorials and samples within the technical documentation sites, and on finally joining the modern world by moving to YouTube or Twitch video content creation.

Marten V7.35 Drops for a Little Post Christmas Cheer

And of course, JasperFx Software is available for any kind of consulting engagement around the Critter Stack tools, event sourcing, event driven architecture, test automation, or just any kind of server side .NET architecture.

Absurdly enough, the Marten community made one major release (7.0 was a big change) and 35 different releases of new functionality this year. Some were significant, some just included a new tactical convenience method or two. Marten ends the 2024 calendar year with the 7.35.0 release today.

The big highlight is some work for a JasperFx Software client who needs to run some multi-stream projections asynchronously (as one probably should), but needs their user interface, in some scenarios, to show the very latest information. That’s now possible with the QueryForNonStaleData<T>() API shown below:

var builder = Host.CreateApplicationBuilder();
builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));
    opts.Projections.Add<TripProjection>(ProjectionLifecycle.Async);
}).AddAsyncDaemon(DaemonMode.HotCold);

using var host = builder.Build();
await host.StartAsync();

// DocumentStore() is an extension method in Marten just
// as a convenience method for test automation
await using var session = host.DocumentStore().LightweightSession();

// This query operation will first "wait" for the asynchronous projection building the
// Trip aggregate document to catch up to at least the highest event sequence number assigned
// at the time this method is called
var latest = await session.QueryForNonStaleData<Trip>(5.Seconds())
    .OrderByDescending(x => x.Started)
    .Take(10)
    .ToListAsync();

Of course, there is a non-zero risk of that operation timing out, so it’s not a silver bullet and you’ll need to account for that in your usage. But hey, it’s a way around needing to adopt eventual consistency while still providing a good user experience in your client by not appearing to have lost data.

See the documentation on this feature for more information.

The highlight for me personally is that as of this second, the open issue count for Marten on GitHub is sitting at 37 (bugs, enhancement requests, 8.0 planning, documentation TODOs), which is the lowest that number has been in 7 or 8 years. Feels good.

Critter Stack Year in Review for 2024

Just for fun, here’s what I wrote in My Technical Plans and Aspirations for 2024, detailing what I had hoped to accomplish this year.

While there’s still just a handful of technical deliverables I’m trying to get out in this calendar year, I’m admittedly running on mental fumes rolling into the holiday season. Thinking back about how much was delivered for the “Critter Stack” (Marten, Weasel, and Wolverine) this year is making me feel a lot better about giving myself some mental recharge time during the holidays. Happily for me, most of the advances in the Critter Stack this year were either from the community (i.e., not me) or done in collaboration and with the sponsorship of JasperFx Software customers for their systems.

The biggest highlights and major releases were Marten 7.0 and Wolverine 3.0.


Performance and Scalability

  • Marten 7.0 brought a new “partial update” model based on native PostgreSQL functions that no longer requires the PLv8 add-on. Hat tip to Babu Annamalai for that feature!
  • The basic database execution pipeline underneath Marten was largely rewritten to be far more parsimonious with how it uses database connections and to take advantage of more efficient Npgsql usage, including the very latest Npgsql improvements for batching queries and a move to positional parameters instead of named parameters. Small-ball optimizations for sure, but being stingier with connections has been advantageous
  • Marten’s “quick append” model sacrifices a little bit of metadata tracking for a whole lot of throughput improvement (we’ve measured a 50% gain) when appending events. This mode will be the default in Marten 8. It also helps stabilize “event skipping” in the async daemon under heavy loads. I think this was a big win that we need to broadcast more
  • Random optimizations in the “inline projection” model in Marten to reduce database round trips
  • Using PostgreSQL Read Replicas in Marten. Hat tip to JT.
  • First class support for PostgreSQL table partitioning in Marten. Long planned and requested, finally got here. Still admittedly shaking out some database migration issues with this though.
  • Performance optimizations for CQRS command handlers where you want to fetch the final state of a projected aggregate that has been “advanced” as part of the command handler. Mostly in Marten, but there’s a helper in Wolverine too.
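As a small configuration sketch of the “quick append” opt-in mentioned above. The exact option name below (EventAppendMode.Quick) is my reading of the current Marten API; treat it as an assumption and check the Marten documentation for your version:

```csharp
using Marten;
using Marten.Events;

builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));

    // Opt into the higher-throughput "quick append" mode, which trades
    // a little bit of event metadata tracking for much faster appends
    opts.Events.AppendMode = EventAppendMode.Quick;
});
```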

Resiliency

Multi Tenancy

Multi-tenancy has been maybe the biggest single source of client requests for JasperFx Software this year. You can hear about some of that in a recent video conversation I got to do with Derek Comartin.

Complex Workflows

I’m probably way too sloppy or at least not being precise about the differences between stateful sagas and process managers and tend to call any stateful, long lived workflow a “saga”. I’m not losing any sleep over that.

“Day 2” Improvements

By “Day 2” I just mean features for production support like instrumentation, database migrations, or event versioning.

Options for Querying

  • Marten 7.0 brought a near rewrite of Marten’s LINQ subsystem that closed a lot of long-standing gaps in functionality. It also spawned plenty of regression bugs that we’ve had to address in the meantime, but the frequency of LINQ-related issues has dramatically fallen
  • Marten got another, more flexible option for the specification pattern. I.e., we don’t need no stinkin’ repositories here!
  • There were quite a few improvements to Marten’s ability to allow you to use explicit SQL as a replacement or supplement to LINQ from the community

Messaging Improvements

This is mostly Wolverine related.

  • A new PostgreSQL backed messaging transport
  • Strictly ordered queuing options in Wolverine
  • “Sticky” message listeners, so that only one node in a cluster listens to a certain messaging endpoint. This is super helpful for stateful processes, and it also helps with multi-tenancy.
  • Wolverine got a GCP Pubsub transport
  • And we finally released the Pulsar transport
  • Way more options for Rabbit MQ conventional message routing
  • Rabbit MQ header exchange support
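As a rough sketch of how the strictly ordered, “sticky” listener option looks in configuration (the method names here reflect my reading of the Wolverine API and should be treated as assumptions to verify against the current docs):

```csharp
using Wolverine;
using Wolverine.RabbitMQ;

builder.Host.UseWolverine(opts =>
{
    // Assumes a Rabbit MQ broker running locally with default settings
    opts.UseRabbitMq().AutoProvision();

    // ListenWithStrictOrdering() processes messages from this queue
    // one at a time, on only a single node in the cluster, which also
    // gives you the "sticky", single-listener behavior for stateful work
    opts.ListenToRabbitQueue("stateful-process")
        .ListenWithStrictOrdering();
});
```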

Test Automation Support

Hey, the “Critter Stack” community takes testability, test automation, and TDD very seriously. To that end, we’ve invested a lot into test automation helpers this year.

Strong Typed Identifiers

Despite all my griping along the way and frankly threatening bodily harm to the authors of some of the most popular libraries for strong typed identifiers, Marten has gotten a lot of first class support for strong typed identifiers in both the document database and event store features. There will surely be more to come because it’s a permutation hell problem where people stumble into yet more scenarios with these damn things.

But whatever, we finally have it. And quite a bit of the most time consuming parts of that work has been de facto paid for by JasperFx clients, which takes a lot of the salt out of the wound for me!

Modular Monolith Usage

This is going to be a major area of improvement for Wolverine here at the tail end of the year because suddenly everybody and their little brother wants to use this architectural pattern in ways that aren’t yet great with Wolverine.

Other Cool New Features

There were actually quite a few more refinements made to both tools, but I’ve exhausted the time I allotted myself to write this, so let’s wrap up.

Summary

Last January I wrote that an aspiration for 2024 was to:

Continue to push Marten & Wolverine to be the best possible technical platform for building event driven architectures

At this point I believe that the “Critter Stack” is already the best set of technical tooling in the .NET ecosystem for building a system using an Event Driven Architecture, especially if Event Sourcing is a significant part of your persistence strategy. There are other messaging frameworks that have more messaging options, but Wolverine already does vastly more to help you productively write code that’s testable, resilient, easier to reason about, and well instrumented than older messaging tools in the .NET space. Likewise, Wolverine.HTTP is the lowest ceremony coding model for ASP.Net Core web service development, and the only one that has a first class transactional outbox integration. In terms of just Event Sourcing, I do not believe that Marten has any technical peer in the .NET ecosystem.

But of course there are plenty of things we can do better, and we’re not standing still in 2025 by any means. After some rest, I’ll pop back in January with some aspirations and theoretical roadmap for the “Critter Stack” in 2025. Details then, but expect that to include more database options and yes, long simmering plans for commercialization. And the overarching technical goal in 2025 for the “Critter Stack” is to be the best technical platform on the planet for Event Driven Architecture development.

Combo HTTP Endpoint and Message Handler with Wolverine 3.0

With the release of Wolverine 3.0 last week, we snuck in a small feature at the last minute that was a request from a JasperFx Software customer. Specifically, they had a couple of instances of a logical message type that needed to be handled both from Wolverine’s Rabbit MQ message transport and from the request body of an HTTP endpoint inside their BFF application.

You can certainly attack this problem a couple of different ways:

  1. Use the Wolverine message handler as a mediator from within an HTTP endpoint. I’m not a fan of this approach because of the complexity, but it’s very common in the .NET world of course.
  2. Just delegate from an HTTP endpoint in Wolverine directly to the (in this case) static method message handler. Simpler mechanically, and we’ve done that a few times, but there’s a wrinkle coming of course.

One of the things Wolverine’s HTTP endpoint model lets you do is quickly write little one-off validation rules using the ProblemDetails specification, which is great for validations that don’t fit cleanly into Fluent Validation usage (also supported by Wolverine for both message handlers and HTTP endpoints). Our client was using that pattern on HTTP endpoints, but wanted to expose the same logic as a message handler while still retaining the validation rules and ProblemDetails response for HTTP.

As of the Wolverine 3.0 release last week, you can now use the ProblemDetails logic with message handlers as a one off validation step if you are using Wolverine.Http alongside Wolverine core. Let’s jump right to an example of a class that handles a message as a Wolverine message handler and also exposes the same message body as an HTTP endpoint, with a custom validation rule using ProblemDetails for the results:

public record NumberMessage(int Number);

public static class NumberMessageHandler
{
    // More likely, these one off validation rules do some kind of database
    // lookup or use other services, otherwise you'd just use Fluent Validation
    public static ProblemDetails Validate(NumberMessage message)
    {
        // Hey, this is contrived, but this is directly from
        // Wolverine.Http test suite code:)
        if (message.Number > 5)
        {
            return new ProblemDetails
            {
                Detail = "Number is bigger than 5",
                Status = 400
            };
        }
        
        // All good, keep on going!
        return WolverineContinue.NoProblems;
    }
    
    // Look at this! You can use this as an HTTP endpoint too!
    [WolverinePost("/problems2")]
    public static void Handle(NumberMessage message)
    {
        Debug.WriteLine("Handled " + message);
        Handled = true;
    }

    public static bool Handled { get; set; }
}

What’s significant about this class is that it’s a perfectly valid message handler that will be discovered by Wolverine as a message handler. Because of the presence of the [WolverinePost] attribute, Wolverine.HTTP will discover this as well and independently create an ASP.Net Core Endpoint route for this method.

If the Validate method returns a non-“No problems” response:

  • As a message handler, Wolverine will log a JSON serialized value of the ProblemDetails and stop all further processing
  • As an HTTP endpoint, Wolverine.HTTP will write the ProblemDetails out to the HTTP response, set the status code and content-type headers appropriately, and stop all further processing

Arguably, Wolverine’s entire schtick and raison d’être is to provide a much lower code ceremony development experience than other .NET server side development tools, and I think the code above is a great example of that. Know too that Wolverine.HTTP is able to glean and enhance the OpenAPI metadata created for the endpoint above to reflect the possible status code 400 and application/problem+json content type response. With that in mind, compare the Wolverine approach above to a more typical .NET “vertical slice architecture” approach that is probably using MVC Core controllers or Minimal API registrations with plenty of OpenAPI-related code noise to delegate to MediatR message handlers with all of their attendant code ceremony.

Besides code ceremony, I’d also point out that the functions you write for Wolverine up above are much more likely to be pure functions and/or synchronous, which makes for much easier unit testing than with other tools. Lastly, and I’ll try to show this in a follow up blog post about Wolverine’s middleware strategy, Wolverine’s execution pipeline results in fewer object allocations at runtime than IoC-centric tools like MediatR, MassTransit, or MVC Core / Minimal APIs.
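For instance, because the Validate() method shown earlier is a pure, synchronous function, you can unit test it directly with no framework bootstrapping at all. A minimal xUnit-style sketch against the NumberMessageHandler from above:

```csharp
using Wolverine.Http;
using Xunit;

public class NumberMessageHandlerTests
{
    [Fact]
    public void rejects_numbers_bigger_than_five()
    {
        // Call the handler's validation rule directly as a pure function
        var details = NumberMessageHandler.Validate(new NumberMessage(6));

        Assert.Equal(400, details.Status);
        Assert.Equal("Number is bigger than 5", details.Detail);
    }

    [Fact]
    public void continues_for_small_numbers()
    {
        // WolverineContinue.NoProblems is the "all clear" sentinel value
        var details = NumberMessageHandler.Validate(new NumberMessage(3));

        Assert.Same(WolverineContinue.NoProblems, details);
    }
}
```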

Critter Stack 2025

I realize the title sounds a little too similar to somebody else’s 2025 platform proposals, but let’s please just overlook that.

This is a “vision board” document I wrote up and shared with our core team (Anne, JT, Babu, and Jeffry) as well as some friendly users and JasperFx Software customers. I dearly want to step foot into January 2025 with the “Critter Stack” as a very compelling choice for any shop about to embark on any kind of Event Driven Architecture — especially with the usage of Event Sourcing as part of a system’s persistence strategy. Moreover, I want to arrive at a point where the “Critter Stack” actually convinces organizations to choose .NET just to take advantage of our tooling. I’d be grateful for any feedback.

As of now, the forthcoming Wolverine 3.0 release is almost to the finish line, Marten 7 is probably just about done growing, and work on “Critter Watch” (JasperFx Software’s envisioned management console tooling for the “Critter Stack”) is ramping up. Now is a good time to detail a technical vision for the “Critter Stack” moving into 2025. 

The big goals are:

  1. Simplify the “getting started” story for using the “Critter Stack”. Not just in getting a new codebase up, but going all the way to how a Critter Stack app could be deployed and opting into all the best practices. My concern is that there are getting to be way too many knobs and switches scattered around that have to be addressed to really make performance and deployment robust. 
  2. Deliver a usable “Critter Watch” MVP
  3. Expand the “Critter Stack” to more database options, with Sql Server and maybe CosmosDb being the leading contenders and DynamoDb or CockroachDb being later possibilities
  4. Streamline the dependency tree. Find a way to reduce the number of GitHub repositories and Nugets if possible. Both for our maintenance overhead and also to try to simplify user setup

The major initiatives are:

  1. Marten 8.0
  2. Wolverine 4.0
  3. “Critter Watch” and CritterStackPro.Projections (actually scratch the second part, that’s going to roll into the Wolverine OSS core, coming soon)
  4. Ermine 1.0 – the Sql Server port of the Marten event store functionality
  5. Out of the box project templates for Wolverine/Marten/Ermine usages – following the work done already by Jeffry Gonzalez
  6. Future CosmosDb backed event store and Wolverine integration — but I’m getting a lot of mixed feedback about whether Sql Server or CosmosDb should be a higher priority

Opportunities to grow the Critter Stack user base:

  • Folks who are concerned about DevOps issues. “Critter Watch” and maybe more templates that show how to apply monitoring, deployment steps, and Open Telemetry to existing Critter Stack systems. The key point here is a whole lot of focus on maintainability and sustainability of the event sourcing and messaging infrastructure
  • Get more interest from mainstream .NET developers. Improve the integration of Wolverine and maybe Marten/Ermine as well with EF Core. This could include reaching parity with Marten for middleware support, side effects, and multi-tenancy models using EF Core. Also, maybe, hear me out, take a heavy drink, there could be an official Marten/Ermine projection integration to write projection data to EF Core? I know of at least one Critter Stack user who would use that. At this point, I’m leaning heavily toward getting Wolverine 3.0 out and mostly tackle this in the Wolverine 4.0 timeframe this fall
  • Expand to Sql Server for more “pure” Microsoft shops. Adding databases to the general Wolverine / Event Sourcing support (the assumption here is that the document database support in Marten would be too much work to move)
  • Introduce Marten and Wolverine to more people, period. Moar “DevRel” type activity! More learning videos. I’ll keep trying to do more conferences and podcasts. More sample applications. Some ideas for new samples might be a sample application with variations using each transport, using Wolverine inside of a modular monolith with multiple Marten stores and/or EF DbContexts, HTTP services, background processing. Maybe actually invest in some SEO for the websites.

Ecosystem Realignment

With major releases coming up with both Marten 8.0 and Wolverine 4.0 and the forthcoming Ermine, there’s an “opportunity” to change the organization of the code to streamline the number of GitHub repositories and Nugets floating around while also centralizing more code. There’s also an opportunity to centralize a lot of infrastructure code that could help the Ermine effort go much faster. Lastly, there are some options like code generation settings and application assembly determination that are today independently configured for Marten and Wolverine which repeatedly trips up our users (and flat out annoys me when I build sample apps).

We’re actively working to streamline the configuration code, but in the meantime, the current thinking about some of this is in the GitHub issue for JasperFx Ecosystem Dependency Reorganization. The other half of that is the content in the next section.

Projection Model Reboot

This refers to the “Reboot Projection Model API” in the Marten GitHub issue list. The short tag line is to move toward enabling easier usage of folks just writing explicit code. I also want us to tackle the absurdly confusing API for “multi-stream projections” as well. This projection model will be shared across Marten, Ermine (Sql Server-backed event store), and any future CosmosDb/DynamoDb/CockroachDb event stores.

Wrapping up Marten 7.0

Marten 7 introduced a crazy amount of new functionality on top of the LINQ rewrite, the connection management rewrite, and introduction of Polly into the core. Besides some (important) ongoing work for JasperFx clients, the remainder of Marten 7 is hopefully just:

  • Mark all synchronous APIs that invoke database access as [Obsolete]
  • Make a pass over the projection model and see how close to the projection reboot we can get. Make anything that doesn’t conform to the new ideal [Obsolete] with nudges
  • Introduce the new standard code generation / application assembly configuration in JasperFx.CodeGeneration today. Mark Marten’s version of that as [Obsolete] with a pointer to using the new standard – which is hopefully very close minus namespaces to where it will be in the end

Wrapping up Wolverine 3.0

  • Introduce the new standard code generation / application assembly configuration in JasperFx.CodeGeneration today. Mark Wolverine’s version of that as [Obsolete] with a pointer to using the new standard – which is hopefully very close, minus namespaces, to where it will be in the end
  • Put a little more error handling in for code generation problems just to make it easier to fix issues later
  • Maybe, reexamine what work could be done to make modular monoliths easier with Wolverine and/or Marten
  • Maybe, consider adding back into scope improvements for EF Core with Wolverine – but I’m personally tempted to let that slide to the Wolverine 4 work

Summary

The Critter Stack core team and I, plus the JasperFx Software folks, have a pretty audaciously ambitious plan for next year. I’m excited about it, and I’ll be talking about it in public as much as y’all will let me get away with it!

Multi-Tenancy in Wolverine Messaging

Building and maintaining a large, hosted system that requires multi-tenancy comes with a fair number of technical challenges. JasperFx Software has helped several of our clients achieve better results with their particular multi-tenancy challenges with Marten and Wolverine, and we’re available to do the same for your shop! Drop us a message on our Discord server or email us at sales@jasperfx.net to start a conversation.

This is continuing a series about multi-tenancy with Marten, Wolverine, and ASP.Net Core:

  1. What is it and why do you care?
  2. Marten’s “Conjoined” Model
  3. Database per Tenant with Marten
  4. Multi-Tenancy in Wolverine Messaging (this post)
  5. Multi-Tenancy in Wolverine Web Services (future)
  6. Using Partitioning for Better Performance with Multi-Tenancy and Marten (future)
  7. Multi-Tenancy in Wolverine with EF Core & Sql Server (future, and honestly, future functionality as part of Wolverine 4.0)
  8. Dynamic Tenant Creation and Retirement in Marten and Wolverine (definitely in the future)

Let’s say that you’re using the Marten + PostgreSQL combination for your system’s persistence needs in a web service application. Let’s also say that you want to keep the customer data within your system in completely different databases per customer company (or whatever makes sense in your system). Lastly, let’s say that you’re using Wolverine for asynchronous messaging and as a local “mediator” tool. Fortunately, Wolverine by itself has some important built in support for multi-tenancy with Marten that’s going to make your system a lot easier to build.

Let’s get started by just showing a way to opt into multi-tenancy with separate databases using Marten and its integration with Wolverine for middleware, saga support, and the all important transactional outbox support:

// Adding Marten for persistence
builder.Services.AddMarten(m =>
    {
        // With multi-tenancy through a database per tenant
        m.MultiTenantedDatabases(tenancy =>
        {
            // You would probably be pulling the connection strings out of configuration,
            // but it's late in the afternoon and I'm being lazy building out this sample!
            tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant1;Username=postgres;password=postgres", "tenant1");
            tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant2;Username=postgres;password=postgres", "tenant2");
            tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant3;Username=postgres;password=postgres", "tenant3");
        });

        m.DatabaseSchemaName = "mttodo";
    })
    .IntegrateWithWolverine(masterDatabaseConnectionString:connectionString);

Just for the sake of completion, here’s some sample Wolverine configuration that pairs up with the above:

// Wolverine usage is required for WolverineFx.Http
builder.Host.UseWolverine(opts =>
{
    // This middleware will apply to the HTTP
    // endpoints as well
    opts.Policies.AutoApplyTransactions();

    // Setting up the outbox on all locally handled
    // background tasks
    opts.Policies.UseDurableLocalQueues();
});

Now that we’ve got that basic setup for Marten and Wolverine, let’s move on to the first issue: how the heck does Wolverine “know” which tenant should be used? In a later post I’ll show how Wolverine.HTTP has built in tenant id detection, but for now, let’s pretend that you’re already taking care of tenant id detection from incoming HTTP requests somehow within your ASP.Net Core pipeline and you just need to pass that into a Wolverine message handler that is being executed from within an MVC Core controller (“Wolverine as Mediator”):

[HttpDelete("/todoitems/{tenant}/longhand")]
public async Task Delete(
    string tenant,
    DeleteTodo command,
    IMessageBus bus)
{
    // Invoke inline for the specified tenant
    await bus.InvokeForTenantAsync(tenant, command);
}

By using the IMessageBus.InvokeForTenantAsync() method, we’re invoking a command inline, but telling Wolverine what the tenant id is. The command handler might look something like this:

// Keep in mind that we set up the automatic
// transactional middleware usage with Marten & Wolverine
// up above, so there's just not much to do here
public static class DeleteTodoHandler
{
    public static void Handle(DeleteTodo command, IDocumentSession session)
    {
        session.Delete<Todo>(command.Id);
    }
}

Not much going on there in our code, but Wolverine is helping us out here by:

  1. Seeing the tenant id value that we passed in, which Wolverine tracks in its own Envelope structure (Wolverine’s version of the Envelope Wrapper from the venerable EIP book)
  2. Creating the Marten IDocumentSession for that tenant id value, which will be reading and writing to the correct tenant database underneath Marten

Now, let’s make this a little more complex by also publishing an event message in that message handler for the DeleteTodo message:

public static class DeleteTodoHandler
{
    public static TodoDeleted Handle(DeleteTodo command, IDocumentSession session)
    {
        session.Delete<Todo>(command.Id);
        
        // This "cascading" message will be published by Wolverine
        // after the transaction succeeds
        return new TodoDeleted(command.Id);
    }
}

public record TodoDeleted(int TodoId);

Assuming that the TodoDeleted message is being published to a “durable” endpoint, Wolverine is using its transactional outbox integration with Marten to persist the outgoing message in the same tenant database and same transaction as the deletion we’re doing in that command handler. In other words, Wolverine is able to use the tenant databases for its outbox support with no other configuration necessary than what we did up above in the calls to AddMarten() and UseWolverine().
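For completeness, here's a sketch of what routing TodoDeleted to a durable endpoint might look like. The Rabbit MQ routing and the UseDurableOutbox() call below are my assumptions about the Wolverine API, so verify the exact method names against the transport documentation:

```csharp
using Wolverine;
using Wolverine.RabbitMQ;

builder.Host.UseWolverine(opts =>
{
    opts.Policies.AutoApplyTransactions();
    opts.Policies.UseDurableLocalQueues();

    // Assumes a local Rabbit MQ broker with default settings
    opts.UseRabbitMq().AutoProvision();

    // Marking the subscription as durable opts TodoDeleted messages
    // into the transactional outbox, tenant database and all
    opts.PublishMessage<TodoDeleted>()
        .ToRabbitQueue("todo-deleted")
        .UseDurableOutbox();
});
```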

Moreover, Wolverine is even able to run its “durability agent” against all the tenant databases to ensure that any work that is somehow stranded by crashed processes is eventually recovered and completed.

Lastly, the TodoDeleted event message cascaded above from our message handler is tracked throughout Wolverine with the tenant id of the original DeleteTodo command message, so you can build multi-part workflows through Wolverine while it tracks the tenant id and uses the correct tenant database through Marten all along the way.

Summary

Building solutions with multi-tenancy can be complicated, but the Wolverine + Marten combination can make it a lot easier.

Low Ceremony Sagas with Wolverine

Wolverine puts a very high emphasis on reducing code ceremony and tries really hard to keep itself out of your application code. Wolverine is also built with testability in mind. If you’d be interested in learning more about how Wolverine could simplify your existing application code or set you up with a solid foundation for sustainable productive development for new systems, JasperFx Software is happy to work with you!

Before I get into the nuts and bolts of Wolverine sagas, let me come right out and say that, compared to other .NET frameworks, I think the Wolverine implementation of sagas requires much less code ceremony and therefore produces code that is easier to reason about. Wolverine also requires less configuration and explicit code to integrate your custom saga with Wolverine’s saga persistence. Lastly, Wolverine makes the development experience better by building in so much support for automatically configuring development environment resources like database schema objects or message broker objects. I do not believe that any other .NET tooling comes close to the developer experience that Wolverine and its “Critter Stack” buddy Marten can provide.

Let’s say that you have some kind of multi-step process in your application that might have some mix of:

  • Callouts to 3rd party services
  • Some logical steps that can be parallelized
  • Possibly some conditional workflow based on the results of some of the steps
  • A need to enforce “timeout” conditions if the workflow is taking too long — think maybe of some kind of service level agreement for your workflow

This kind of workflow might be a great opportunity to use Wolverine’s version of Sagas. Conceptually speaking, a “saga” in Wolverine is just a special message handler that needs to inherit from Wolverine’s Saga class and modify itself to track state between messages that impact the saga.

Below is a simple version from the documentation called Order:

public record StartOrder(string OrderId);

public record CompleteOrder(string Id);

public class Order : Saga
{
    // You do need this for the identity
    public string? Id { get; set; }

    // This method would be called when a StartOrder message arrives
    // to start a new Order
    public static (Order, OrderTimeout) Start(StartOrder order, ILogger<Order> logger)
    {
        logger.LogInformation("Got a new order with id {Id}", order.OrderId);

        // creating a timeout message for the saga
        return (new Order{Id = order.OrderId}, new OrderTimeout(order.OrderId));
    }

    // Apply the CompleteOrder to the saga
    public void Handle(CompleteOrder complete, ILogger logger)
    {
        logger.LogInformation("Completing order {Id}", complete.Id);

        // That's it, we're done. Delete the saga state after the message is done.
        MarkCompleted();
    }

    // Delete this order if it has not already been deleted to enforce a "timeout"
    // condition
    public void Handle(OrderTimeout timeout, ILogger<Order> logger)
    {
        logger.LogInformation("Applying timeout to order {Id}", timeout.Id);

        // That's it, we're done. Delete the saga state after the message is done.
        MarkCompleted();
    }

    public static void NotFound(CompleteOrder complete, ILogger logger)
    {
        logger.LogInformation("Tried to complete order {Id}, but it cannot be found", complete.Id);
    }
}

Order is really just meant to be a state machine: it modifies its own state in response to incoming messages and returns cascading messages that tell Wolverine what to do next in the multi-step process. (You could also take IMessageBus directly as a method argument if you prefer, but my advice is to stick with simple pure functions.)

A new Order saga can be created by any message handler simply by returning a type that inherits from Wolverine's Saga type. Wolverine automatically discovers any public types inheriting from Saga and uses any public instance methods following certain naming conventions (or static Start() methods, as shown above) as message handlers that are assumed to modify the state of the saga object. Wolverine itself handles everything to do with loading and persisting the Order saga object between commands, around the calls to the message handler methods on the saga type.

Notice in the Handle(CompleteOrder) method above that the Order calls MarkCompleted() on itself. That tells Wolverine that the saga is now complete, and directs Wolverine to delete the current Order saga from the underlying persistence.

As for tracking the saga id between message calls, Wolverine has naming conventions for plucking the saga identity out of incoming messages, but if you're strictly exchanging messages between a Wolverine saga and other Wolverine message handlers, Wolverine will automatically track metadata about the active saga back and forth.
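As a hedged sketch of those conventions: Wolverine can typically find the saga identity on a message property named after the saga type plus "Id" (like OrderId on StartOrder above), or on a property explicitly decorated with Wolverine's [SagaIdentity] attribute. The message types below are hypothetical, and the exact namespace for the attribute is an assumption on my part:

```csharp
using Wolverine.Persistence.Sagas; // assumed home of [SagaIdentity]

// Found by convention: "OrderId" matches the Order saga type name + "Id"
public record ShipOrder(string OrderId);

// Or mark the identity explicitly when the property name doesn't
// follow the conventions. CancelOrder is a hypothetical message type.
public record CancelOrder([property: SagaIdentity] string Number);
```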

I’d also ask you to notice the OrderTimeout message that the Order saga returns as it starts. That message type is shown below:

// This message will always be scheduled to be delivered after
// a one minute delay because I guess we want our customers to be
// rushed? Goofy example code:)
public record OrderTimeout(string Id) : TimeoutMessage(1.Minutes());

Wolverine's cascading message support allows you to return an outgoing message with a time delay, a particular scheduled time, or any number of other options, just by returning a message object. Admittedly this ties you a little more into Wolverine, but the key takeaway I want you to notice here is that every handler method is a "pure function" with no service dependencies. Every bit of the state change and workflow logic can be tested with simple unit tests that merely work on the before and after state of the Order objects as well as the cascaded messages returned by the message handler functions. No mock objects, no fakes, no custom test harnesses, just simple unit tests. No other saga implementation in the .NET ecosystem can do that for you anywhere near as cleanly.
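To make that concrete, here's a minimal sketch of what those unit tests can look like against the Order saga above. I'm assuming xUnit and the NullLogger from Microsoft.Extensions.Logging.Abstractions here, and I'm using IsCompleted() as the Saga base class's check that reflects the MarkCompleted() call:

```csharp
using Microsoft.Extensions.Logging.Abstractions;
using Xunit;

public class OrderSagaTests
{
    [Fact]
    public void starting_an_order_returns_the_new_saga_and_a_timeout()
    {
        // Pure function: pass in the incoming message, assert on the outputs
        var (order, timeout) = Order.Start(
            new StartOrder("111"), NullLogger<Order>.Instance);

        Assert.Equal("111", order.Id);
        Assert.Equal("111", timeout.Id);
    }

    [Fact]
    public void completing_an_order_marks_the_saga_as_completed()
    {
        // Arrange the "before" state by hand, no harness required
        var order = new Order { Id = "111" };

        order.Handle(new CompleteOrder("111"), NullLogger<Order>.Instance);

        // MarkCompleted() was called, so Wolverine would delete the saga
        Assert.True(order.IsCompleted());
    }
}
```

Notice there's no Wolverine runtime, no database, and no mocks anywhere in these tests; they exercise exactly the same methods Wolverine calls at runtime.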

So far I've only focused on the logical state machine part of sagas, so let's jump to persistence. Wolverine has long had a simple saga storage mechanism through its integration with Marten, and that's still one of the easiest and most powerful options. You can also use EF Core for saga persistence, but ick, that means having to use EF Core.
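For reference, the Marten-backed saga persistence needs little more than the standard Critter Stack wiring. This is a minimal sketch (the connection string is a placeholder); with this in place, saga types are persisted as plain Marten documents with no extra mapping:

```csharp
using Marten;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Wolverine;
using Wolverine.Marten;

var builder = Host.CreateApplicationBuilder();

builder.Services.AddMarten(opts =>
{
    // Placeholder connection string for local development
    opts.Connection("Host=localhost;Database=orders;Username=postgres");
})
// Adds Wolverine's transactional inbox/outbox and
// saga storage through Marten
.IntegrateWithWolverine();

builder.UseWolverine();

using var host = builder.Build();
await host.StartAsync();
```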

Wolverine 3.0 added a new lightweight saga persistence option for either Sql Server or PostgreSQL (without Marten or EF Core) that just stands up a small table per saga type and uses JSON serialization to persist the saga. Here's an example:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // This isn't actually mandatory, but you'll need it to make
        // Wolverine set up the table storage as part of the resource
        // setup. Otherwise, Wolverine is quite capable of standing up
        // the tables as necessary at runtime if they are missing in
        // its default configuration
        opts.AddSagaType<RedSaga>("red");
        opts.AddSagaType(typeof(BlueSaga), "blue");

        // This part is absolutely necessary just to have the
        // normal transactional inbox/outbox support and the new
        // default, lightweight saga persistence
        opts.PersistMessagesWithSqlServer(Servers.SqlServerConnectionString, "color_sagas");
        opts.Services.AddResourceSetupOnStartup();
    }).StartAsync();

Just as with the Marten integration, Wolverine's lightweight saga implementation is able to build the necessary database table storage on the fly at runtime if it's missing. The "critter stack" philosophy is to optimize the all-important "time to first pull request" metric, meaning that you can get a Wolverine application up fast on your local development box because it takes care of quite a bit of the environment setup for you.
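If you'd rather build out those resources explicitly, say in CI or a container startup script, the Oakton command line integration gives you the resource commands shown below. This assumes your Program.cs delegates to Oakton's command execution, as is typical in Wolverine applications:

```shell
# Build out (or verify) all known environment resources up front,
# e.g. the saga tables and the message store schema
dotnet run -- resources setup

# Or just list the resources Wolverine knows about
dotnet run -- resources list
```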

Lastly, Wolverine 3.0 added optimistic concurrency checks for both the Marten saga storage and the new lightweight saga persistence. That had been an important missing piece of the Wolverine saga story.

Just for some comparison, check out some other saga implementations in .NET: