Message Concurrency, Parallelism, and Ordering with Wolverine

As I wrote last week, message or request concurrency is probably the single most common source of client questions in JasperFx Software consulting and support work around the Critter Stack. Wolverine is a powerful tool for command and event message processing, and it comes with a lot of built-in options for a wide range of usage scenarios that provide the answers to a lot of the questions we routinely field from clients and other users. More specifically, Wolverine provides a lot of adjustable knobs to control message ordering, parallelism, and delivery guarantees.

For better or worse, Wolverine has built up quite a few options over the years, and that can admittedly be confusing. Also, there are real performance and correctness tradeoffs with the choices you make around message ordering and processing parallelism. To that end, let's go through a little whirlwind tour of Wolverine's options for concurrency, parallelism, and delivery guarantees.

Listener Endpoints

Note that Wolverine's fluent interface options for endpoint type, message ordering, and parallel execution are consistent across all of its messaging transport types (Rabbit MQ, Azure Service Bus, Kafka, Pulsar, etc.), though not every option is available for every transport.

All messages handled in a Wolverine application come from a constantly running listener “Endpoint” that then delegates the incoming messages to the right message handler. A Wolverine “Endpoint” could be a local, in process queue, a Rabbit MQ queue, a Kafka topic, or an Azure Service Bus subscription (see Wolverine’s documentation on asynchronous messaging for the entire list of messaging options).

This does vary a bit by messaging broker or transport, but there are three modes for Wolverine endpoints, starting with Inline endpoints:

// Configuring a Wolverine application to listen to
// an Azure Service Bus queue with the "Inline" mode
opts.ListenToAzureServiceBusQueue(queueName, q => q.Options.AutoDeleteOnIdle = 5.Minutes()).ProcessInline();

With an Inline endpoint, messages are pulled off the receiving queue or topic one message at a time, and “ack-ed” back to the original queue or topic only on the successful completion of the message handler. This mode completely eschews any kind of durable, transactional inbox, but does still give you an at-least-once delivery guarantee as it’s possible that the “ack” process could fail after the message is successfully handled, potentially resulting in the message being resent from the external messaging broker. Know though that this is rare, and Wolverine puts some error retries around the “ack-ing” process.

As you would assume, using the Inline mode gives you sequential processing of messages within a single node, but limits parallel handling. You can opt into running parallel listeners for any given listening endpoint:

opts.ListenToRabbitQueue("inline")
    // Process inline, default is with one listener
    .ProcessInline()

    // But, you can use multiple, parallel listeners
    .ListenerCount(5);

The second endpoint mode is Buffered, where messages are pulled off the external messaging queue or topic as quickly as they can be, immediately put into an in-memory queue, and "ack-ed" back to the external broker.

// I overrode the buffering limits just to show
// that they exist for "back pressure"
opts.ListenToAzureServiceBusQueue("incoming")
    .BufferedInMemory(new BufferingLimits(1000, 200));

In the sample above, I’m showing how you can override the defaults for how many messages can be buffered in memory for this listening endpoint before the endpoint is paused. Wolverine has some support for back pressure within its Buffered or Durable endpoints to prevent memory from being overrun.

With Buffered endpoints, or the Durable endpoints I'll describe next, you can specify the maximum number of messages that can be processed in parallel at one time within a listener endpoint on a single node, like this:

opts.LocalQueueFor<Message1>()
    .MaximumParallelMessages(6, ProcessingOrder.UnOrdered);

Or you can choose to run messages in a strict sequential order, one at a time like this:

// Make any kind of Wolverine configuration
options
    .PublishMessage<Module1Message>()
    .ToLocalQueue("module1-high-priority")
    .Sequential();

The last endpoint type is Durable, which behaves identically to the Buffered approach except that messages received from external message brokers are first persisted to a backing database before processing, then deleted when the messages are either successfully processed, or discarded or moved to dead letter queues by error handling policies:

opts.ListenToAzureServiceBusQueue("incoming")
    .UseDurableInbox(new BufferingLimits(1000, 200));

Using the Durable mode enrolls the listening endpoint into Wolverine's transactional inbox. This is the single most robust option for delivery guarantees with Wolverine, and it even adds some protection for idempotent receipt of messages such that Wolverine will quietly reject the same message if it is received multiple times. Durable endpoints are more robust in terms of delivery guarantees and more resilient in the face of system hiccups than the Buffered mode, but they do incur a little bit of extra overhead from making calls to a database. I should mention, though, that Wolverine tries really hard to batch up its database calls whenever it can for better runtime efficiency, and there are retry loops in all the internals for resiliency as well.
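As a related convenience, you don't have to configure durability endpoint by endpoint. Here's a minimal sketch using Wolverine's blanket policy helpers; these policy names are from Wolverine's durability documentation as I remember them, so double check against your version:

// Opt whole categories of endpoints into the durable inbox/outbox
// with policies instead of configuring each endpoint individually
opts.Policies.UseDurableInboxOnAllListeners();
opts.Policies.UseDurableOutboxOnAllSendingEndpoints();
opts.Policies.UseDurableLocalQueues();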

If you really read this post, you should hopefully be disabused of the flippant advice floating around .NET circles right now, after the MassTransit commercialization announcement, that you can "just" write your own abstractions over messaging brokers instead of using a robust, off-the-shelf toolset that has far more engineering for resiliency and observability than most folks realize.

Scenarios

Alright, let's talk about some common messaging scenarios and look at possible Wolverine options. It's important to note that there is some real tension between throughput (how many messages you can process over time), message ordering requirements, and delivery guarantees, and I'll try to call out those compromises as we go.

You have a constant flood of small messages coming in that are relatively cheap to process…

In this case I would choose a Buffered endpoint and allow it to run messages in parallel:

opts.LocalQueueFor<Message1>()
    .BufferedInMemory()
    .MaximumParallelMessages(6, ProcessingOrder.UnOrdered);

Letting messages run without any strict ordering allows the endpoint to process messages faster. The Buffered approach lets the endpoint utilize any message batching that the external broker might support, which does a lot to remove the messaging broker as a bottleneck for message processing. The Buffered approach isn't durable of course, but if you care about throughput more than delivery guarantees or message ordering, it's the best option.

Note that any Buffered or Durable endpoint automatically allows for parallel message processing capped by the number of processor cores for the host process.

A message is expensive to process…

If you have a message type that turns out to require a lot of resources to process, you probably want to limit the parallelization to restrict how many resources the system uses for this message type. I would say to either use an Inline endpoint:

opts.ListenToRabbitQueue("expensive")
    // Process inline, default is with one listener
    .ProcessInline()

    // Cap it to no more than two messages in parallel at any
    // one time
    .ListenerCount(2);

or a Buffered or Durable endpoint, but cap the parallelization.
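For completeness, here's a minimal sketch of that second option, reusing the hypothetical "expensive" queue from above with a Buffered endpoint and a capped degree of parallelism:

opts.ListenToRabbitQueue("expensive")
    .BufferedInMemory()

    // Cap the endpoint at no more than two messages being
    // processed at any one time on this node
    .MaximumParallelMessages(2, ProcessingOrder.UnOrdered);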

Messages should be processed in order, at least on each node…

Use either a ProcessInline endpoint, or use the Sequential() option on any other kind of endpoint to limit the local processing to single file:

opts.ListenToAzureServiceBusQueue("incoming")
    .Sequential();

A certain type of message should be processed in order across the entire application…

Sometimes there’s a need to say that a certain set of messages within your system need to be handled in strict order across the entire application. While some specific messaging brokers have some specific functionality for this scenario, Wolverine has this option to ensure that a listening endpoint for a certain location only runs on a single node within the application at any one time, and always processes in strict sequential order:

var host = await Host.CreateDefaultBuilder().UseWolverine(opts =>
{
    opts.UseRabbitMq().EnableWolverineControlQueues();
    opts.PersistMessagesWithPostgresql(Servers.PostgresConnectionString, "listeners");

    opts.ListenToRabbitQueue("ordered")

        // This option is available on all types of Wolverine
        // endpoints that can be configured to be a listener
        .ListenWithStrictOrdering();
}).StartAsync();

Watch out of course, because this throttles the processing of messages to single file on exactly one node. That’s perfect for cases where you’re not too concerned about throughput, but sequencing is very important. A JasperFx Software client is using this for messages to a stateful Saga that coordinates work across their application.

Do note that Wolverine will both ensure a listener with this option is running on only one node, and will spread any strictly ordered listeners around the cluster to better distribute work. Wolverine is also able to detect when it needs to switch the listening over to a different node if a node is taken down.

Messages should be processed in order within a logical group, but we need better throughput otherwise…

Let’s say that you have a case where you know the system would work much more efficiently if Wolverine could process messages related to a single business entity of some sort (an Invoice? a Purchase Order? an Incident?) in strict order. You still need more throughput than you can achieve through a strictly ordered listener that only runs on one node, but you do need the messages to be handled in order or maybe just one at a time for a single business entity to arrive at consistent state or to prevent errors due to concurrent access.

If you happened to be using Azure Service Bus as your messaging transport, you can utilize Session Identifiers and FIFO Queues with Wolverine to do exactly this:

_host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseAzureServiceBusTesting()
            .AutoProvision().AutoPurgeOnStartup();

        opts.ListenToAzureServiceBusQueue("send_and_receive");
        opts.PublishMessage<AsbMessage1>().ToAzureServiceBusQueue("send_and_receive");

        opts.ListenToAzureServiceBusQueue("fifo1")

            // Require session identifiers with this queue
            .RequireSessions()

            // This controls the Wolverine handling to force it to process
            // messages sequentially
            .Sequential();

        opts.PublishMessage<AsbMessage2>()
            .ToAzureServiceBusQueue("fifo1");

        opts.PublishMessage<AsbMessage3>().ToAzureServiceBusTopic("asb3");
        opts.ListenToAzureServiceBusSubscription("asb3")
            .FromTopic("asb3")

            // Require sessions on this subscription
            .RequireSessions(1)

            .ProcessInline();
    }).StartAsync();

But, there's a little bit more to publishing, because you'll need to tell Wolverine what the GroupId value is for your message:

// bus is an IMessageBus
await bus.SendAsync(new AsbMessage3("Red"), new DeliveryOptions { GroupId = "2" });
await bus.SendAsync(new AsbMessage3("Green"), new DeliveryOptions { GroupId = "2" });
await bus.SendAsync(new AsbMessage3("Refactor"), new DeliveryOptions { GroupId = "2" });

I think we'll try to make this a little more automatic in Wolverine in the near future.

Of course, if you don’t have Azure Service Bus, you still have some other options. I think I’m going to save this for a later post, hopefully after building out some formal support for this, but another option is to:

  1. Plan on having several different listeners for subsets of messages that all have the strictly ordered semantics shown in the previous section, so that each listener can at least process its messages independently of the others
  2. Use a deterministic rule that examines each message being published by Wolverine and assigns it to one of those strictly ordered messaging destinations (see the sketch after this list)
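Just to make the idea concrete, here's a rough sketch of step 2 under some assumptions: the queue names "ordered-0" through "ordered-3" are hypothetical and each would be configured with ListenWithStrictOrdering(), and I'm assuming Wolverine's EndpointFor() addressing by endpoint name. ApproveInvoice and invoiceId are likewise made up for illustration:

// A deterministic rule that always maps the same entity id
// to the same strictly ordered destination
public static class OrderedRouting
{
    private const int ListenerCount = 4;

    public static string DestinationFor(Guid entityId)
    {
        // Guid hash codes are stable for a given value, so the same
        // entity always routes to the same queue
        var slot = Math.Abs(entityId.GetHashCode()) % ListenerCount;
        return $"ordered-{slot}";
    }
}

// Usage: every message for the same invoice lands on the same queue,
// so those messages are processed in order relative to each other
await bus.EndpointFor(OrderedRouting.DestinationFor(invoiceId))
    .SendAsync(new ApproveInvoice(invoiceId));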

Like I said, more to come on this in the hopefully near future, and this might be part of a JasperFx Software engagement soon.

What about handling events in Wolverine that are captured to Marten (or future Critter Event Stores)?

I'm Gen X, so the idea of Marten & Wolverine assembling to create the ultimate Event Driven Architecture stack makes me think of Transformers cartoons. :)

It’s been a few years, but what is now Wolverine was originally called “Jasper” and was admittedly a failed project until we decided to reorient it to being a complement to Event Sourcing with Marten and renamed it “Wolverine” to continue the “Critter Stack” theme. A huge part of that strategy was having first class mechanisms to either publish or handle events captured by Marten’s Event Sourcing through Wolverine’s robust message execution and message publishing capabilities.

You have two basic mechanisms for this. The first, and original, option is "Event Forwarding", where events captured by Marten are published to Wolverine upon the successful completion of the Marten transaction:

builder.Services.AddMarten(opts =>
    {
        var connString = builder
            .Configuration
            .GetConnectionString("marten");

        opts.Connection(connString);

        // There will be more here later...

        opts.Projections
            .Add<AppointmentDurationProjection>(ProjectionLifecycle.Async);

        // OR ???

        // opts.Projections
        //     .Add<AppointmentDurationProjection>(ProjectionLifecycle.Inline);

        opts.Projections.Add<AppointmentProjection>(ProjectionLifecycle.Inline);
        opts.Projections
            .Snapshot<ProviderShift>(SnapshotLifecycle.Async);
    })

    // This adds a hosted service to run
    // asynchronous projections in a background process
    .AddAsyncDaemon(DaemonMode.HotCold)

    // I added this to enroll Marten in the Wolverine outbox
    .IntegrateWithWolverine()

    // I also added this to opt into events being forwarded to
    // the Wolverine outbox during SaveChangesAsync()
    .EventForwardingToWolverine();

Event forwarding gives you no ordering guarantees of any kind, but it will push events as messages to Wolverine immediately. Event forwarding may give you significantly better throughput than the subscription model we'll look at next, because there's less latency between persisting the event to Marten and the event being published to Wolverine. Moreover, with "Event Forwarding" the event publishing can happen on any node throughout the application cluster.

However, if you need strictly ordered handling of the events being persisted to Marten, you instead need to use the Event Subscriptions model, where Wolverine handles or relays Marten events as messages in the strict order in which they were appended to Marten, and on a single running node. This is analogous to the strictly ordered listener option explained above.
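Here's a minimal sketch of registering such a subscription, assuming the ProcessEventsWithWolverineHandlersInStrictOrder() helper from the Wolverine.Marten integration; check the event subscriptions documentation for the exact signature, and note that the subscription name and event type here are hypothetical:

builder.Services.AddMarten(opts =>
    {
        // Marten configuration as before...
    })
    .IntegrateWithWolverine()

    // Relay events to Wolverine handlers in the strict order they
    // were appended, running on a single node at a time
    .ProcessEventsWithWolverineHandlersInStrictOrder("Orders", o =>
    {
        // Filter the subscription down to an allow list of event types
        o.IncludeType<OrderCreated>();
    });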

What about my scenario you didn't discuss here?

See the Wolverine documentation, or come ask us on Discord.

Summary

There's a real tradeoff between message ordering, processing throughput, and message delivery guarantees. Fortunately, Wolverine gives you plenty of options to meet a variety of different project requirements.

And one last time: you're just not going to be able to match the level of robust options and infrastructure that's under the covers of a tool like Wolverine if you "just roll your own messaging abstractions" because you're angry and think that community OSS tools can't be trusted. Wolverine is also a moving target that constantly improves based on the problems, needs, suggestions, and code contributions from our core team, community, and JasperFx Software customers. Your homegrown tooling will never receive that level of feedback, and probably won't ever match Wolverine's quality of documentation either.

Wolverine 4 is Bringing Multi-Tenancy to EF Core

I think that even in a crowded field of existing "mediator" tools, ASP.Net Core endpoint frameworks, and asynchronous messaging frameworks in .NET, Wolverine has a great opportunity to grow its user base this year. There's a lot of value that Wolverine brings to the table that I don't believe other tools do, but it's been very focused on improving development in conjunction with Marten. With Wolverine 4.0, I'm hoping that much deeper support for EF Core will help Wolverine's community grow by attracting more developers who aren't necessarily using Marten (yet).

The "Critter Stack" (Marten and Wolverine) already has very strong support for multi-tenancy, as you can see in some previous posts of mine:

And I think you get the point. Marten and Wolverine have a much more comprehensive feature set for multi-tenancy than any other persistence or messaging tool in the .NET ecosystem, in no small part because we seem to be the only community that cares about this somehow?

For Wolverine 4.0 (expected by June 1st) and in conjunction with a JasperFx Software customer, we're adding first class multi-tenancy support for EF Core usage. Specifically, we're aiming to allow users to use every bit of Wolverine's existing multi-tenancy integration, transactional inbox/outbox support, and transactional middleware with EF Core while targeting a separate database for each tenant.

This work is in flight, but here's a preview of the (working, thank you) syntax. In bootstrapping, we need to tell Wolverine about both a main database for Wolverine storage and any tenant databases, as in this sample from a web application:

builder.Host.UseWolverine(opts =>
{
    var mainConnectionString = builder.Configuration.GetConnectionString("main");

    opts.PersistMessagesWithSqlServer(mainConnectionString)

        // If you have a fixed number of tenant databases and it won't
        // change w/o downtime -- but don't worry, there are other options coming...
        .RegisterStaticTenants(x =>
        {
            x.Register("tenant1", builder.Configuration.GetConnectionString("tenant1"));
            x.Register("tenant2", builder.Configuration.GetConnectionString("tenant2"));
            x.Register("tenant3", builder.Configuration.GetConnectionString("tenant3"));
        });

    opts.Policies.AutoApplyTransactions();
    opts.Policies.UseDurableLocalQueues();

    TestingOverrides.Extension?.Configure(opts);
});

Next, we need to tell Wolverine to make one or more of our EF Core DbContext types multi-tenanted using the databases we configured above, like this:

// Little sleight of hand, we're registering the DbContext with Wolverine,
// but letting Wolverine deal with the connection string at runtime
builder
    .Services
    .AddDbContextWithWolverineManagedMultiTenancy<ItemsDbContext>((b, connectionString) =>

        // Notice the specification of AutoCreate here, more in a second...
        b.UseSqlServer(connectionString), AutoCreate.CreateOrUpdate);

While Wolverine does support multi-tenancy tracking through both local message publishing and asynchronous messaging, let's pretend our workflows start from an HTTP endpoint, so I'm going to add some basic tenant id detection with Wolverine.HTTP in our Program file, like so:

var app = builder.Build();
app.MapWolverineEndpoints(opts =>
{
    // Set up tenant detection
    opts.TenantId.IsQueryStringValue("tenant");
    opts.TenantId.DefaultIs(StorageConstants.DefaultTenantId);
});

Alrighty then, here's that simplistic little DbContext I'm using for testing right now:

public class ItemsDbContext : DbContext
{
    public ItemsDbContext(DbContextOptions<ItemsDbContext> options) : base(options)
    {
    }

    public DbSet<Item> Items { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Your normal EF Core mapping
        modelBuilder.Entity<Item>(map =>
        {
            map.ToTable("items", "mt_items");
            map.HasKey(x => x.Id);
            map.Property(x => x.Id).HasColumnName("id");
            map.Property(x => x.Name).HasColumnName("name");
            map.Property(x => x.Approved).HasColumnName("approved");
        });
    }
}

And finally, here's a message handler that's also doubling as a simplistic HTTP endpoint to create a new Item in our system by accepting a command:

public record StartNewItem(Guid Id, string Name);

public static class StartNewItemHandler
{
    [WolverinePost("/item")]
    public static IStorageAction<Item> Handle(StartNewItem command)
    {
        return new Insert<Item>(new Item
        {
            Id = command.Id,
            Name = command.Name
        });
    }
}

For a little context, you might want to check out Wolverine's storage side effects. Or with less "magic", we could use this completely equivalent alternative:

public static class StartNewItemHandler
{
    [WolverinePost("/item")]
    public static void Handle(StartNewItem command, ItemsDbContext dbContext)
    {
        var item = new Item
        {
            Id = command.Id,
            Name = command.Name
        };

        dbContext.Items.Add(item);
    }
}

Either way, Wolverine's transactional middleware actually calls ItemsDbContext.SaveChangesAsync() for us and also deals with any necessary transactional outbox mechanics to "flush" outgoing messages after the transaction has succeeded.

Alright, let's go to runtime, where Wolverine will happily handle a POST to /item by finding the tenant id in a query string value named "tenant", then building an ItemsDbContext instance for us that's pointed at the proper SQL Server database for that tenant id. Our code above doesn't have to know anything about that process; we just write code that carries out the business requirements.
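For instance, a hypothetical client-side call might look like this (the host, port, and payload values are all made up for illustration):

// Requires System.Net.Http.Json for PostAsJsonAsync()
using var client = new HttpClient { BaseAddress = new Uri("https://localhost:5001") };

// The "tenant" query string value is what the tenant detection
// configured above will pick up
var response = await client.PostAsJsonAsync(
    "/item?tenant=tenant1",
    new StartNewItem(Guid.NewGuid(), "First item"));

response.EnsureSuccessStatusCode();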

And that's that! Of course there are plenty of other things to worry about and questions you might have, so let me try to anticipate those:

• What about dynamically adding tenants without any downtime? Like Marten's "Master Table Tenancy" model where you can add new tenants without any system downtime and it "just works", including Wolverine being able to spin up the necessary background agents for scheduled messages and whatnot? That's definitely planned, and it will end up sharing quite a bit of code with the existing Marten support and building on existing Wolverine capabilities.
• Will this work with Wolverine's existing EF Core Saga support? Yes, or at least that's planned, and I'll test that tomorrow.
• Does Wolverine's transactional inbox/outbox support and scheduled messages extend to this new multi-tenancy model? Yes, that's already working.
• Can I use this with lightweight sagas? Yep, that too. Not working yet, but that will be in place by the end of tomorrow.
• I'm spoiled by Marten and Wolverine's ability to set up development databases on the fly, is there any chance of something like that for this EF Core integration? If you follow me on BlueSky (sorry), you might have seen me gripe about EF Core migrations as compared to (in my honest opinion) Marten's much easier model for development time. After some griping and plenty of investigation, we will at least have a model where you can opt into having Wolverine use EF Core migrations to create any missing databases and apply any missing migrations to all the tenant databases at application startup. Having that capability is helping speed up the development of all of this.
• Will the tenant database discovery be pluggable so we can use whatever existing mechanism we already have? Yes.
• Can I use the Wolverine managed EF Core multi-tenancy outside of Wolverine message handlers or HTTP endpoints? That work hasn't started yet, but it's a particular need for JasperFx Software client work.
• If I'm using PostgreSQL with both Marten and EF Core, can we use both tools in the same application with multi-tenancy, hopefully with just one set of configuration? Absolutely, yes.
• Will there be a Wolverine powered equivalent to Marten's "conjoined tenancy" model for multi-tenancy through one database? Um, not sure, probably not at first. But I'd be happy to talk to anyone who wants to volunteer for that pull request or wants to engage JasperFx to build it out!
• When again? By June 1st.

Managing Auto Creation of Database or Message Broker Resources in the Critter Stack vNext

If you'd prefer to start with more context, skip ahead to the section named "Why is this important?".

To set up the problem I'm hoping to address in this post: there are several settings across both Marten and Wolverine that need to be configured for the most optimal possible functioning across development, testing, and deployment time, and yet some of these settings are done in different ways today or have to be configured independently for both Marten and Wolverine.

Below is a proposed configuration approach for Marten, Wolverine, and future "Critter" tools with the Marten 8 / Wolverine 4 "Critter Stack 2025" wave of releases:

var builder = Host.CreateApplicationBuilder();

// This would apply to both Marten, Wolverine, and future critters....
builder.Services.AddJasperFx(x =>
{
    // This expands in importance to be the master "AutoCreate"
    // over every resource at runtime and not just databases
    // So this would maybe take the place of AutoProvision() in Wolverine world too
    x.Production.AutoCreate = AutoCreate.None;
    x.Production.GeneratedCodeMode = TypeLoadMode.Static;
    x.Production.AssertAllPreGeneratedTypesExist = true;

    // Just for completeness sake, but these are the defaults
    x.Development.AutoCreate = AutoCreate.CreateOrUpdate;
    x.Development.GeneratedCodeMode = TypeLoadMode.Dynamic;

    // Unify the Marten/Wolverine/future critter application assembly
    // Default will always be the entry assembly
    x.ApplicationAssembly = typeof(Message1).Assembly;
});

// keep bootstrapping...

If you've used either Marten or Wolverine for production usage, you know that you probably want to turn off the dynamic code generation at production time, and you might choose to also turn off the automatic database migrations for both Marten and Wolverine in production (or not, I've been surprised how many folks are happy to just let the tools manage database schemas).

The killer problem for us today is that the settings above have to be configured independently for both Marten and Wolverine, and as a bad coincidence, I just chatted with someone on Discord who got burned by this as I was starting this post. Grr.

Even worse, the syntactical option for disabling automatic database management of Wolverine's envelope storage tables is a little different altogether. And then just to make things more fun (please cut the Critter Stack community and me some slack, because all of this evolved over years), the "auto create / migrate / evolve" functionality for things like Rabbit MQ queues/exchanges/bindings or Kafka topics is "opt in" instead of "opt out" like the automatic database migrations, with a completely different syntax and naming than either the Marten or Wolverine tables, as shown with the AutoProvision() option below:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseRabbitMq(rabbit => { rabbit.HostName = "localhost"; })
            // I'm declaring an exchange, a queue, and the binding
            // key that we're referencing below.
            // This is NOT MANDATORY, but rather just allows Wolverine to
            // control the Rabbit MQ object lifecycle
            .DeclareExchange("exchange1", ex => { ex.BindQueue("queue1", "key1"); })

            // This will direct Wolverine to create any missing Rabbit MQ exchanges,
            // queues, or binding keys declared in the application at application
            // start up time
            .AutoProvision();

        opts.PublishAllMessages().ToRabbitExchange("exchange1");
    }).StartAsync();

I'm not married to the syntax per se, but my proposal is that:

• Every possible type of "stateful resource" (database configurations or message brokers or whatever we might introduce in the future) by default follows the AutoCreate settings in one place, which for right now is the AddJasperFx() method (should this be named something else? ConfigureJasperFx()? ConfigureCritterStack()?)
• You can override this at either the Marten or Wolverine level, or within Wolverine; maybe you use the default behavior for the application for all database management, but turn down Azure Service Bus to AutoCreate.None
• We'll use the AutoCreate enumeration that originated in Marten, but it will now move down to a lower level shared library to define the level for each resource
• All resource types will have a default setting of AutoCreate.CreateOrUpdate, even message brokers. This is to move the tools into more of an "it just works" out of the box developer experience. This will make the usage of AutoProvision() in Wolverine unnecessary unless you want to override the AutoCreate settings
• We deprecate the OptimizeArtifactWorkflow() mechanisms that never really caught on, and instead let folks set potentially different settings for "Development" vs "Production" time, letting the tools apply the right settings based on the IHostEnvironment.Environment name so you don't have to clutter up your code with too many ugly if (builder.Environment.IsDevelopment()) ... calls

Just for some context, the AutoCreate values are below:

public enum AutoCreate
{
    /// <summary>
    ///     Will drop and recreate tables that do not match the Marten configuration or create new ones
    /// </summary>
    All,

    /// <summary>
    ///     Will never destroy existing tables. Attempts to add missing columns or missing tables
    /// </summary>
    CreateOrUpdate,

    /// <summary>
    ///     Will create missing schema objects at runtime, but will not update or remove existing schema objects
    /// </summary>
    CreateOnly,

    /// <summary>
    ///     Do not recreate, destroy, or update schema objects at runtime. Will throw exceptions if
    ///     the schema does not match the Marten configuration
    /// </summary>
    None
}

For longstanding Critter Stack users, we'll absolutely keep:

• The existing "stateful resource" model, including the resources command line helper for setting up or tearing down resource dependencies
• The existing db-* command line tooling
• The IServiceCollection.AddResourceSetupOnStartup() method for forcing all resources (databases and broker objects) to be correctly built out on application startup
• The existing Marten and Wolverine settings for configuring the AutoCreate levels, but these will be marked as [Obsolete]
• The existing Marten and Wolverine settings for configuring the code generation TypeLoadMode, but the default values will come from the AddJasperFx() options and the Marten or Wolverine options will be marked as [Obsolete]

Why is this important?

An important part of building, deploying, and maintaining an enterprise system with server side tooling like the "Critter Stack" (Marten, Wolverine, and their smaller sibling Weasel, which factors quite a bit into this blog post) is dealing with creating or migrating database schema objects or message broker resources so that your application can function as expected against its infrastructure dependencies.

As any of you know who have ever walked into the development of an existing enterprise system, it's often challenging to get your local development environment configured for that system, and that can frequently cost you days, or even weeks, of delay. What if instead you could simply start fresh with a clean clone of the code repository and be up and running very quickly?

If you pick up Marten for the first time today, spin up a brand new PostgreSQL database where you have full admin rights, and write this code, it would happily work without you doing any explicit work to migrate the new PostgreSQL database:

public class Customer
{
    public Guid Id { get; set; }

    // We'll use this later for some "logic" about how incidents
    // can be automatically prioritized
    public Dictionary<IncidentCategory, IncidentPriority> Priorities { get; set; }
        = new();

    public string? Region { get; set; }

    public ContractDuration Duration { get; set; }
}

public record ContractDuration(DateOnly Start, DateOnly End);

public enum IncidentCategory
{
    Software,
    Hardware,
    Network,
    Database
}

public enum IncidentPriority
{
    Critical,
    High,
    Medium,
    Low
}

await using var store = DocumentStore
    .For("Host=localhost;Port=5432;Database=marten_testing;Username=postgres;password=postgres");

var customer = new Customer
{
    Duration = new ContractDuration(new DateOnly(2023, 12, 1), new DateOnly(2024, 12, 1)),
    Region = "West Coast",
    Priorities = new Dictionary<IncidentCategory, IncidentPriority>
    {
        { IncidentCategory.Database, IncidentPriority.High }
    }
};

// IDocumentSession is Marten's unit of work
await using var session = store.LightweightSession();
session.Store(customer);
await session.SaveChangesAsync();

// Marten assigned an identity for us on Store(), so
// we'll use that to load another copy of what was
// just saved
var customer2 = await session.LoadAsync<Customer>(customer.Id);

// Just making a pretty JSON printout
Console.WriteLine(JsonConvert.SerializeObject(customer2, Formatting.Indented));

Instead, with its default settings, Marten is able to quietly check whether its underlying database has all the necessary tables, functions, sequences, and schemas, roughly when it needs them for the first time. The whole point of this functionality is to ensure that a new developer coming into your project for the very first time can quickly clone your repository and be up and running, either with the whole system or even just the integration tests that hit the database, because Marten is able to "auto-migrate" database changes for you so you can just focus on getting work done.

Great, right? Except that sometimes you certainly wouldn't want this "auto-migration" business going on. Maybe the system doesn't have permissions, or maybe you just want the system to spin up faster without the overhead of calculating the necessity of a migration step (it's not cheap, especially for something like a Serverless usage where you depend on fast cold starts). Either way, you'd like to be able to turn that off at production time with the assumption that you're applying database changes beforehand (which the Critter Stack has worlds of tools to help with as well), so you'd turn off the default behavior with something like the following in Marten 7 and before:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
    {
        // Other configuration...

        // In production, let's turn off all the automatic database
        // migration stuff
        if (builder.Environment.IsProduction())
        {
            opts.AutoCreateSchemaObjects = AutoCreate.None;
        }
    })
    // Add background projection processing
    .AddAsyncDaemon(DaemonMode.HotCold)
    // This is a mild optimization
    .UseLightweightSessions();

Wolverine uses the same underlying Weasel helper library for automatic database migrations that Marten does and works similarly, but disabling the automatic database setup uses different syntax for reasons I don't remember:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Disable automatic database migrations for message
        // storage
        opts.AutoBuildMessageStorageOnStartup = false;
    }).StartAsync();

Wolverine can do similar automatic management of Rabbit MQ, Azure Service Bus, AWS SQS, Kafka, Pulsar, or Google Pubsub objects at runtime, but in this case you have to explicitly "opt in" to that automatic management through the fluent interface registration of a message broker, like this sample using Google Pubsub:

var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UsePubsub("your-project-id")

            // Let Wolverine create missing topics and subscriptions as necessary
            .AutoProvision()

            // Optionally purge all subscriptions on application startup.
            // Warning though, this is potentially slow
            .AutoPurgeOnStartup();
    }).StartAsync();

Wolverine Meets AWS SNS

Wolverine 3.13 added a new message transport for Amazon SNS with a big pull request from Luis Villalaz.

To get started, add this Nuget to your system:

dotnet add package WolverineFx.AmazonSns

Assuming that you have set up a shared AWS configuration or credential files, you can connect to SNS from Wolverine with this:

var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // This does depend on the server having an AWS credentials file
        // See https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html for more information
        opts.UseAmazonSnsTransport()

            // Let Wolverine create missing topics and subscriptions as necessary
            .AutoProvision();
    }).StartAsync();

To set up message publishing with Wolverine, you can use all the standard Wolverine message routing configuration, but use the new ToSnsTopic() extension method like so:

var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseAmazonSnsTransport();

        opts.PublishMessage<Message1>()
            .ToSnsTopic("outbound1")

            // Increase the outgoing message throughput, but at the cost
            // of strict ordering
            .MessageBatchMaxDegreeOfParallelism(Environment.ProcessorCount)
            .ConfigureTopicCreation(conf =>
            {
                // Configure topic creation request...
            });
    }).StartAsync();

You can also configure subscriptions from SNS topics to Amazon SQS queues like this:

var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseAmazonSnsTransport()
            // Without this, the SubscribeSqsQueue() call does nothing
            .AutoProvision();

        opts.PublishMessage<Message1>()
            .ToSnsTopic("outbound1")
            // Sets up a subscription from the SNS topic to this SQS queue
            .SubscribeSqsQueue("queueName",
                config =>
                {
                    // Configure subscription attributes
                    config.RawMessageDelivery = true;
                });
    }).StartAsync();

Summary

Right now Wolverine has support for publishing through Amazon SNS to Amazon SQS queues. I of course expect additional use cases and further interoperability stories for this transport, but honestly, I just want to react to what folks have a demonstrated need for rather than building anything early. If you have a use case for Amazon SNS with Wolverine that isn't already covered, please just engage with our community either on Wolverine's GitHub repository or in our Discord room.

With Wolverine 3.13, our list of supported message brokers has grown to:

• Rabbit MQ — admittedly our most feature rich and mature transport simply due to how often it's used
• Azure Service Bus
• Amazon SQS
• Google PubSub — also done through community pull requests
• Sql Server or PostgreSQL if you just don't have a lot of messages, but don't want to introduce new messaging infrastructure. You can also import messages from external database tables as a way to do durable messaging from legacy systems to Wolverine.
• MQTT for IoT usage
• Kafka — which has also been improved through recent community feedback and pull requests
• Apache Pulsar — same as Kafka, there's suddenly been more community usage and contribution for this transport

I don't know when or if there will be additional transports, but there are occasional requests for NATS.io or Redis. I think there's some possibility of a SignalR transport coming out of JasperFx's internal work on our forthcoming "CritterWatch" tool.

Huge Wolverine 3.13 Release

Wolverine is part of the larger "Critter Stack" suite that provides a robust and productive approach to Event Driven Architecture in the .NET ecosystem. Through its various elements it provides an asynchronous messaging framework, an alternative HTTP endpoint framework, and yes, it can be used as just a "mediator" tool (though I'd recommend using Wolverine's HTTP support directly instead of "Wolverine as MediatR"). What's special about Wolverine is how much more it does to reduce project boilerplate, code ceremony, and the complexity of application code compared to other .NET messaging or "mediator" tools. We the Wolverine team and community would ask that you keep this in mind instead of strictly comparing Wolverine as an apples to apples analogue to other .NET frameworks.

The Wolverine community has been busy, and I was just able to publish a very large Wolverine 3.13 release this evening. I'm happily going to use this release as a demonstration of the health of Wolverine as an ongoing OSS project, because it has:

• Big new features from other core team members, like Jakob Tikjøb Andersen's work with HTTP form posts and [AsParameters] support
• A significant improvement in the documentation structure from core team member JT
• Huge new features from the community, like Luis Villalaz's addition of an AWS SNS transport for Wolverine
• An F# usability improvement from the Critter Stack's de facto F# support owner nkosi23
• New feature work sponsored by a JasperFx Software client for some specific needs, which is important for the health of Wolverine because JasperFx support and consulting clients are directly responsible for making Wolverine and the rest of the Critter Stack viable as a longer term technical choice
• Quite a few improvements to the Kafka transport that were suggestions from newer community members who came to Wolverine in the aftermath of other tools' commercialization plans
• Pull requests that made improvements or fixed problems in the documentation website; those kinds of little pull requests do make a difference and are definitely appreciated by myself and the other team members
• New contributors, including Bjørn Madsen's improvements to the Pulsar support

Anyway, I'll be blogging about some of the highlights of this new release starting tomorrow with our new HTTP endpoint capabilities that add some frequently requested features, but I wanted to get the announcement and some thanks out to the community first. And of course, if there are any issues with the new release or the old bits (and there will be), just ask away in the Critter Stack Discord server.

Wrapping Up

Large OSS project releases can sometimes become their own gravity source that sucks in more and more work when a project owner starts getting enamored with doing a big, flashy release. I'd strongly prefer to be a little more steady with weekly or bi-weekly releases instead of ever doing a big release like this, but a lot of things just happened to come in all at once here.

JasperFx Software has some contractual obligations to deliver Wolverine 4.0 soon, so this might be the last big release of new features in the 3.* line.

Preview of (Hopefully) Improved Projections in Marten 8

Work is continuing on the "Critter Stack 2025" round of releases, but we finally have an alpha release of Marten 8 (8.0.0-alpha-5) that's good enough for friendly users and core team members to try out for feedback. 8.0 won't be a huge release, but we're making some substantial changes to the projections subsystem, and that is where I'd personally love any and all feedback about the changes I'm going to preview in this post.

First, here are the goals of the projection changes for Marten 8.0:

1. Eliminate the code generation for projections altogether and instead use dynamic Lambda compilation with FastExpressionCompiler for the remaining convention-based projection approaches. That's complete in this alpha release.
2. Expand the support for strong typed identifiers (Vogen or StronglyTypedId or otherwise) across the public API of Marten. I'm personally sick to death of this issue and don't particularly believe in the value of these infernal things, but the user community has spoken loudly. Some of the breaking API changes in this post were caused by expanding the strong typed identifier support.
3. Better support explicit code options for all projection categories (single stream projections, multi-stream projections, flat table projections, or event projections).
4. Extract the basic event sourcing types, abstractions, and most of the projection and event subscription support to a new shared JasperFx.Events library that is planned to be reusable between Marten and future "Critter" tools targeting Sql Server first, then maybe CosmosDb or DynamoDb. We'll write a better migration guide later, but expect some types you may be using today to have moved namespaces. I was concerned before starting this work for the 2nd time that it would be a time consuming boondoggle that might not be worth the effort. After having largely completed this planned work, I am still concerned that this was a time consuming boondoggle and opportunity cost. Alas.
5. Some significant performance and scalability improvements for asynchronous projections and projection rebuilds that are still a work in progress.

Alright, on to the changes.

Single Stream Projection

Probably the most common projection type aggregates a single event stream into a view of that stream, as either a "write model" to support decision making in commands or a "read model" to support queries or user interfaces. In Marten 8 you will still use the SingleStreamProjection base class (CustomProjection is marked as obsolete in V8), but there's one significant change: you now have to supply a second generic type argument for the identity type of the projected document (blame the proliferation of strong typed identifiers for this), with this as an example:

// This example is using the old Apply/Create/ShouldDelete conventions
public class ItemProjection: SingleStreamProjection<Item, Guid>
{
    public void Apply(Item item, ItemStarted started)
    {
        item.Started = true;
        item.Description = started.Description;
    }

    public void Apply(Item item, IEvent<ItemWorked> worked)
    {
        // Nothing, I know, this is weird
    }

    public void Apply(Item item, ItemFinished finished)
    {
        item.Completed = true;
    }

    public override Item ApplyMetadata(Item aggregate, IEvent lastEvent)
    {
        // Apply the last timestamp
        aggregate.LastModified = lastEvent.Timestamp;

        var person = lastEvent.GetHeader("last-modified-by");

        aggregate.LastModifiedBy = person?.ToString() ?? "System";

        return aggregate;
    }
}

The same Apply, Create, and ShouldDelete conventions from Marten 4-7 are still supported. You can also still just put those conventional methods directly on the aggregate type, just like you could in Marten 4-7.
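As a quick illustration of that second style, here's a minimal sketch with the conventional methods living directly on the aggregate; the SelfAggregatingItem type is hypothetical, reusing the ItemStarted and ItemFinished events from the projection above:

public class SelfAggregatingItem
{
    public Guid Id { get; set; }
    public string Description { get; set; }
    public bool Started { get; set; }
    public bool Completed { get; set; }

    // Marten discovers these conventional Apply methods
    // directly on the aggregate type
    public void Apply(ItemStarted started)
    {
        Started = true;
        Description = started.Description;
    }

    public void Apply(ItemFinished finished)
    {
        Completed = true;
    }
}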

The inline lambda options are also still supported with the same method signatures:

public class TripProjection: SingleStreamProjection<Trip, Guid>
{
    public TripProjection()
    {
        ProjectEvent<Arrival>((trip, e) => trip.State = e.State);
        ProjectEvent<Travel>((trip, e) => trip.Traveled += e.TotalDistance());
        ProjectEvent<TripEnded>((trip, e) =>
        {
            trip.Active = false;
            trip.EndedOn = e.Day;
        });

        ProjectEventAsync<Breakdown>(async (session, trip, e) =>
        {
            var repairShop = await session.Query<RepairShop>()
                .Where(x => x.State == trip.State)
                .FirstOrDefaultAsync();

            trip.RepairShopId = repairShop?.Id;
        });
    }
}

So far, the only difference from Marten 4-7 is the additional type argument for the identity. Now let's get into the new options for explicit code, for when you just prefer that style or your logic is too complex for the limited conventional approach.

First, let's say that you want to use explicit code to "evolve" the state of an aggregated projection, but you won't need any additional data lookups beyond the event data. In this case, you can override the Evolve method as shown below:

public class WeirdCustomAggregation: SingleStreamProjection<MyAggregate, Guid>
{
    public WeirdCustomAggregation()
    {
        ProjectionName = "Weird";
    }

    public override MyAggregate Evolve(MyAggregate snapshot, Guid id, IEvent e)
    {
        // Given the current snapshot and an event, "evolve" the aggregate
        // to the next version.

        // And snapshot can be null, just meaning it hasn't been
        // started yet, so start it here
        snapshot ??= new MyAggregate{ Id = id };
        switch (e.Data)
        {
            case AEvent:
                snapshot.ACount++;
                break;
            case BEvent:
                snapshot.BCount++;
                break;
            case CEvent:
                snapshot.CCount++;
                break;
            case DEvent:
                snapshot.DCount++;
                break;
        }

        return snapshot;
    }
}

I should note that you may want to explicitly configure which event types the projection is interested in as a way to optimize the projection when running in the async daemon.

Now, if you want to "evolve" a snapshot with explicit code but you might need to query some reference data as you do so, you can instead override the asynchronous EvolveAsync method with this signature:

public virtual ValueTask<TDoc?> EvolveAsync(TDoc? snapshot, TId id, TQuerySession session, IEvent e,
    CancellationToken cancellation)

But wait, there's (unfortunately) more! In the recipes above, you're assuming that the single stream projection has a simplistic lifecycle of being created, updated one or more times, then maybe being deleted and/or archived. But what if you have some kind of complex workflow where the projected document for a single event stream might be repeatedly created, deleted, then restarted? We originally had to introduce the CustomProjection mechanism in Marten 6/7 as a way of accommodating complex workflows, especially when they involved soft deletes of the projected documents. In Marten 8, we're (for now) proposing to handle reentrant workflows by overriding the DetermineAction() method like so:

public class StartAndStopProjection: SingleStreamProjection<StartAndStopAggregate, Guid>
{
    public StartAndStopProjection()
    {
        // This is an optional, but potentially important optimization
        // for the async daemon so that it sets up an allow list
        // of the event types that will be run through this projection
        IncludeType<Start>();
        IncludeType<End>();
        IncludeType<Restart>();
        IncludeType<Increment>();
    }

    public override (StartAndStopAggregate?, ActionType) DetermineAction(StartAndStopAggregate? snapshot, Guid identity,
        IReadOnlyList<IEvent> events)
    {
        var actionType = ActionType.Store;

        if (snapshot == null && events.HasNoEventsOfType<Start>())
        {
            return (snapshot, ActionType.Nothing);
        }

        var eventData = events.ToQueueOfEventData();
        while (eventData.Any())
        {
            var data = eventData.Dequeue();
            switch (data)
            {
                case Start:
                    snapshot = new StartAndStopAggregate
                    {
                        // Have to assign the identity ourselves
                        Id = identity
                    };
                    break;

                case Increment when snapshot is { Deleted: false }:

                    if (actionType == ActionType.StoreThenSoftDelete) continue;

                    // Use explicit code to only apply this event
                    // if the snapshot already exists
                    snapshot.Increment();
                    break;

                case End when snapshot is { Deleted: false }:
                    // This will be a "soft delete" because the snapshot type
                    // implements the IDeleted interface
                    snapshot.Deleted = true;
                    actionType = ActionType.StoreThenSoftDelete;
                    break;

                case Restart when snapshot == null || snapshot.Deleted:
                    // Got to "undo" the soft delete status, guarding against
                    // a Restart arriving before any Start event
                    snapshot ??= new StartAndStopAggregate { Id = identity };
                    actionType = ActionType.UnDeleteAndStore;
                    snapshot.Deleted = false;
                    break;
            }
        }

        return (snapshot, actionType);
    }
}

And of course, since *some* of you will do even more complex things that will require making database calls through Marten or maybe even calling into external web services, there's an asynchronous alternative as well with this signature:

public virtual ValueTask<(TDoc?, ActionType)> DetermineActionAsync(TQuerySession session,
    TDoc? snapshot,
    TId identity,
    IIdentitySetter<TDoc, TId> identitySetter,
    IReadOnlyList<IEvent> events,
    CancellationToken cancellation)

Multi-Stream Projections

Multi-stream projections are similar in mechanism to single stream projections, but there's an extra step of "slicing" or grouping events across event streams into related aggregate documents. Experienced Marten users will be aware that the "slicing" API in Marten has not been the most usable API in the world. Even though it didn't change *that* much in Marten 8, I think the "slicing" will still be easier to use.

First, here's a sample multi-stream projection that didn't change at all from Marten 7:

    public class DayProjection: MultiStreamProjection<Day, int>
    {
        public DayProjection()
        {
            // Tell the projection how to group the events
            // by Day document
            Identity<IDayEvent>(x => x.Day);
    
            // This just lets the projection work independently
            // on each Movement child of the Travel event
            // as if it were its own event
            FanOut<Travel, Movement>(x => x.Movements);
    
            // You can also access Event data
            FanOut<Travel, Stop>(x => x.Data.Stops);
    
            ProjectionName = "Day";
    
        // Opt into 2nd level caching of up to 1,000
        // most recently encountered aggregates as a
        // performance optimization
        Options.CacheLimitPerTenant = 1000;
    
            // With large event stores of relatively small
            // event objects, moving this number up from the
            // default can greatly improve throughput and especially
            // improve projection rebuild times
            Options.BatchSize = 5000;
        }
    
        public void Apply(Day day, TripStarted e)
        {
            day.Started++;
        }
    
        public void Apply(Day day, TripEnded e)
        {
            day.Ended++;
        }
    
        public void Apply(Day day, Movement e)
        {
            switch (e.Direction)
            {
                case Direction.East:
                    day.East += e.Distance;
                    break;
                case Direction.North:
                    day.North += e.Distance;
                    break;
                case Direction.South:
                    day.South += e.Distance;
                    break;
                case Direction.West:
                    day.West += e.Distance;
                    break;
    
                default:
                    throw new ArgumentOutOfRangeException();
            }
        }
    
        public void Apply(Day day, Stop e)
        {
            day.Stops++;
        }
    }
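
    As a quick aside on usage, here’s a hedged sketch of registering a projection like this with the async daemon, using the Marten 7-era registration API (some of these names may shift in Marten 8):

    builder.Services.AddMarten(opts =>
    {
        opts.Connection("your postgres connection string");

        // Multi-stream projections should almost always run asynchronously
        // through the async daemon rather than Inline
        opts.Projections.Add(new DayProjection(), ProjectionLifecycle.Async);
    })
    // Run the async daemon as a hosted service in this process
    .AddAsyncDaemon(DaemonMode.HotCold);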
    

    The options to use conventional Apply/Create methods or to override Evolve, EvolveAsync, DetermineAction, or DetermineActionAsync are identical to those of SingleStreamProjection.

    Now, on to a more complicated “slicing” sample with custom code:

    public class UserGroupsAssignmentProjection: MultiStreamProjection<UserGroupsAssignment, Guid>
    {
        public UserGroupsAssignmentProjection()
        {
            CustomGrouping((_, events, group) =>
            {
                group.AddEvents<UserRegistered>(@event => @event.UserId, events);
                group.AddEvents<MultipleUsersAssignedToGroup>(@event => @event.UserIds, events);

                return Task.CompletedTask;
            });
        }
    }

    I know it’s not that much simpler than the Marten 7 version, but one thing Marten 8 does is handle tenancy grouping behind the scenes for you so that you can just focus on defining how events apply to different groupings. The sample above shaves 3-4 lines of code and a level or two of nesting from the Marten 7 equivalent.
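
    From there, the grouped events are applied through the same conventional methods as any other projection. Here’s a quick sketch of what might accompany the grouping above; the property names on the events and the UserGroupsAssignment document are my assumptions:

    // These methods would live on UserGroupsAssignmentProjection
    public void Apply(UserGroupsAssignment view, UserRegistered @event)
    {
        // Assumes UserRegistered carries the user's id
        view.Id = @event.UserId;
    }

    public void Apply(UserGroupsAssignment view, MultipleUsersAssignedToGroup @event)
    {
        // Assumes the event carries the id of the group being assigned
        view.Groups.Add(@event.GroupId);
    }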

    EventProjection and FlatTableProjection

    The existing EventProjection and FlatTableProjection models are supported in their entirety, but we will have a new explicit code option with this signature:

    public virtual ValueTask ApplyAsync(TOperations operations, IEvent e, CancellationToken cancellation)
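
    To make that concrete, here’s a minimal sketch of the explicit-code option on an EventProjection, assuming the TOperations parameter binds to Marten’s IDocumentOperations and using hypothetical OrderShipped and ShipmentAudit types:

    public class ShipmentAuditProjection: EventProjection
    {
        public override ValueTask ApplyAsync(IDocumentOperations operations, IEvent e, CancellationToken cancellation)
        {
            // Hypothetical: write an audit document for every shipment event
            if (e.Data is OrderShipped shipped)
            {
                operations.Store(new ShipmentAudit
                {
                    Id = e.Id,
                    OrderId = shipped.OrderId,
                    Shipped = e.Timestamp
                });
            }

            return ValueTask.CompletedTask;
        }
    }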
    

    And of course, you can still just write a custom IProjection class to go straight down to the metal with all your own code, but that’s been simplified a little bit from Marten 7 such that you don’t have to care about whether it’s running with an Inline or Async lifecycle:

        public class QuestPatchTestProjection: IProjection
        {
            // This projection type doubles as the projected document
            public Guid Id { get; set; }

            public string Name { get; set; }

            // The same ApplyAsync code runs regardless of whether the
            // projection is registered with an Inline or Async lifecycle
            public Task ApplyAsync(IDocumentOperations operations, IReadOnlyList<IEvent> events, CancellationToken cancellation)
            {
                var questEvents = events.Select(e => e.Data);

                foreach (var @event in questEvents)
                {
                    if (@event is Quest quest)
                    {
                        operations.Store(new QuestPatchTestProjection { Id = quest.Id });
                    }
                    else if (@event is QuestStarted started)
                    {
                        // Use Marten's patching API to update the document
                        // without having to load it into memory first
                        operations.Patch<QuestPatchTestProjection>(started.Id).Set(x => x.Name, "New Name");
                    }
                }
                return Task.CompletedTask;
            }
        }
    

    What’s Still to Come?

    I’m admittedly cutting this post short just because I’m a good (okay, not horrible) Dad and it’s bedtime in a minute. Beyond just responding to whatever feedback comes in, there are more test cases to write for the explicit coding options, more samples to write for documentation, and a seemingly endless array of use cases for strongly typed identifiers.

    Beyond that, there’s still a significant effort to come in Marten 8 to try some performance and scalability optimizations for asynchronous projections, but I’ll warn you all that anything too complex is likely to land in our theoretical paid add-on model.

    A Quick Note About JasperFx’s Plans for Marten & Wolverine

    It’s kind of a big day in .NET OSS news, with both MediatR and MassTransit announcing moves to commercial licensing models. I’d like to start by wishing the best of luck to my friends Jimmy Bogard and Chris Patterson with their new ventures.

    So, yes, Wolverine overlaps quite a bit with both MediatR and MassTransit. If you’re a MediatR user, Wolverine just does a helluva lot more, and we have an existing guide for converting from MediatR to Wolverine. For MassTransit (or NServiceBus) users, Wolverine covers a lot of the same asynchronous messaging use cases, but it does much, much more to simplify your application code than any other .NET messaging framework, so an apples to apples messaging feature comparison sells it short. And no other tool in the entire .NET ecosystem comes even remotely close to the Critter Stack’s support for Event Sourcing from soup to nuts.

    As any long term participant in or observer of the .NET ecosystem knows, there’s about to be a flood of negativity from various people in our community about these moves. There will also be an outcry from a sizable cohort in the .NET community who seem to believe that all development tools should be provided by Microsoft and that only Microsoft can ever be a reliable supplier of these types of tools while somehow suffering from amnesia about how Microsoft has frequently abandoned high profile tools like Silverlight or WCF.

    As for Marten, Wolverine, and other future Critter Stack tools, the current JasperFx Software strategy remains the “open core” model: the existing capabilities in the MIT-licensed tools (note below) stay under an OSS license, while JasperFx Software focuses on services, support plans, and the forthcoming commercial CritterWatch tool for monitoring, management, and some advanced features for data privacy, multi-tenancy, and extreme scalability. While we certainly respect MassTransit’s decision, we’re going to try a different path and stay on the “open core” model, and Marten 8 / Wolverine 4 will be released under the MIT OSS license. I will admit that you may see some increasing reluctance to provide as much free support through Discord as we have in the past, though.

    To be technical, there is one existing feature in Marten 7.* for optimized projection rebuilds that I think we’ll redesign and move to the commercial add on tooling in the Marten 8 timeframe, but in this case the existing feature is barely usable anyway so ¯\_(ツ)_/¯

    Critter Stack Work in Progress

    It’s time for an update to my last post, the Critter Stack Roadmap Update for February, as the work has progressed in the past few weeks and we have more clarity on what’s going to change.

    Work is heavily underway right now for a round of related releases across the Critter Stack (Marten, Wolverine, and other tools) that I was originally calling “Critter Stack 2025”. The releases involve these tools:

    Ermine for Event Sourcing with SQL Server

    “Ermine” is our next full-fledged “Critter”, a long-planned port of a significant subset of Marten’s functionality to target SQL Server. At this point, the general thinking is:

    • Focus on porting the Event Sourcing functionality from Marten
    • Quite possibly build around the JSON field support in EF Core and utilize EF Core under the covers. Maybe.
    • Use a new common JasperFx.Events library that will contain the key abstractions, metadata tracking, and even projection support. This new library will be shared between Marten, Ermine, and theoretical later “critters” targeting CosmosDb or DynamoDb down the line
    • Maybe try to lift out more common database handling code from Marten, but man, there are more differences between PostgreSQL and SQL Server than I think people understand, and that might turn into a time sink
    • Support the same kind of “aggregate handler workflow” integration with Wolverine as we have with Marten today, and probably try to do this with shared code, but that’s just a detail

    Is this a good idea to do at all? We’ll see. The work to generalize the Marten projection support has been a time sink so far. I’ve been told by folks for a decade that Marten should have targeted SQL Server, and that supporting SQL Server would open up a lot more users. I think this is a bit of a gamble, but I’m hopeful.

    JasperFx Dependency Consolidation

    Most of the little, shared foundational elements of Marten, Wolverine, and soon to be Ermine have been consolidated into a single JasperFx library. That now includes what was:

    1. JasperFx.Core (which was renamed from “Baseline” after someone else squatted on that name, and which, for long-term followers of mine, was originally imported from the ancient FubuCore)
    2. JasperFx.CodeGeneration
    3. The command line discovery, parsing, and execution model that is in Oakton today. That might be a touch annoying for the initial conversion, but in the longer term it has allowed us to combine several NuGet packages and simplify the project structure overall (see the sketch below). TL;DR: fewer NuGets to install going forward.
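
    For most applications, the conversion is probably just a package swap and one renamed bootstrapping call. Here’s a hedged sketch; RunJasperFxCommands is my assumption about the renamed Oakton entry point:

    // Program.cs after the consolidation -- assuming the Oakton entry point
    // is renamed to RunJasperFxCommands in the unified JasperFx package
    using JasperFx;

    var builder = WebApplication.CreateBuilder(args);

    // ... AddMarten() / AddWolverine() registrations as before ...

    var app = builder.Build();

    // Was previously Oakton's app.RunOaktonCommands(args)
    return await app.RunJasperFxCommands(args);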

    Marten 8.0

    I hope that Marten 8.0 is a much smaller release than Marten 7.0 was last year, but the projection model changes are turning out to be substantial. So far, this work has been done:

    • .NET 6/7 support has been dropped and the dependency tree simplified after that
    • Synchronous database access APIs have been eliminated
    • All other API signatures that were marked as [Obsolete] in the latest versions of Marten 7.* were removed
    • Marten.CommandLine was removed altogether, but the “db-*” commands are available as part of Marten’s dependency tree with no difference in functionality from the “marten-*” commands
    • Upgraded to the latest Npgsql 9

    The projection subsystem overhaul is ongoing and substantial, and frankly I’m kind of expecting Vizzini to show up in my home office and laugh at me for starting a land war in Southeast Asia. For right now, I’ll just say that the key goals are:

    • The aforementioned reuse with Ermine and potential other Event Store implementations later
    • Making it as easy as possible to use explicit code for projections when desired, in addition to the existing conventional Apply / Create methods
    • Eliminating code generation for the projections specifically
    • Simplifying the usage of “event slicing” for grouping events in multi-stream projections. I’m happy with how this is shaping up so far, and I think it will end up being a positive after the initial conversion
    • Improving the throughput of the async daemon

    There’s also a planned “stream compacting” feature, but it’s too early to say much about that. Depending on how the projection work goes, there may be other performance-related work as well.

    Wolverine 4.0

    Wolverine 4.0 is mostly about accommodating the work in other products, but there are some changes. Here’s what’s already been done:

    • Dropped .NET 7 support
    • Significant work to let a single application use multiple databases, for folks getting clever with modular monoliths. In Wolverine 4.*, you’ll be able to mix and match any number of data stores with the corresponding transactional inbox/outbox support, much better than Wolverine 3.* can do. This is mostly about modular monoliths, but it also fits into the CritterWatch work
    • Work to provide information to CritterWatch

    There are some other important features that might be part of Wolverine 4.0 depending on some ongoing negotiations with a potential JasperFx customer.

    CritterWatch Minimum Viable Product Direction

    “CritterWatch” is a long-planned commercial add-on product for Wolverine, Marten, and any future “critter” Event Store tools. The goal is to create both a management and monitoring dashboard for Wolverine messaging and the Event Sourcing processes in those systems.


    At least for the moment, the goal of the CritterWatch MVP is to deliver a standalone system that can be deployed either in the cloud or on a client’s premises. The MVP functionality set will:

    • Explain the configuration and capabilities of all your Critter Stack systems, including some visualization of how messages flow between your systems and the state of any event projections or subscriptions
    • Work with your OpenTelemetry tracing to correlate ongoing performance information to the artifacts in your system
    • Visualize any ongoing event projections or subscriptions by telling you where each is running and how healthy they are — as well as give you the ability to pause, restart, rebuild, or rewind them as needed
    • Manage the dead letter queue (DLQ) messages of your system, with the ability to query the messages and selectively replay or discard them

    We have a world of other plans for CritterWatch, but the feature set above represents the most requested features from the companies that are most interested in this tool.

    Pretty Substantial Wolverine 3.11 Release

    The Critter Stack community just made a pretty big Wolverine 3.11 release earlier today with 5 brand new contributors making their first pull requests! The highlights are:

    • Efficiency and throughput improvements for publishing messages through the Kafka transport
    • Hopefully more resiliency in the Kafka transport
    • A fix for object disposal mechanics that probably got messed up in the 3.0 release (oops on my part)
    • Improvements for the Azure Service Bus transport’s ability to handle larger message batches
    • New options for the Pulsar transport
    • Expanded ability for interop with non-Wolverine services with the Google Pubsub transport
    • Some fixes for Wolverine.HTTP

    Wolverine 4.0 is also underway, but there will be at least some Wolverine.HTTP improvements in the 3.* branch before we get to 4.0.

    Big thanks to the whole Critter Stack community for continuing to support Wolverine, including the folks who took the time to create actionable bug reports that led to several of the fixes and the folks who made fixes to the documentation website as well!

    Nobody Codes a Bad System On Purpose

    I have been writing up a little one-pager for a JasperFx Software client’s new CTO on why and how their flagship system could use some technical transformation and modernization. I ran my write-up past one of their senior developers that I’ve been collaborating with on tactical performance improvements, and he more or less agreed with everything, but felt bad that I was maybe throwing the original development team (all since departed for other opportunities) under the bus a bit — my words, not his.

    My response was that the original approach might have worked just fine upfront when the system was simpler, and that the original team might well have happily and competently adapted over time as the system outgrew the original patterns and reference architecture; they just weren’t around to get that feedback.

    And let’s be honest, I know I’ve created some clever architectures that got dropped on unsuspecting other people in my day too, including the (actually kind of successful) workflow system I did in Classic ASP + Oracle with ~70 metadata tables, and the system that was written in 6 different programming languages.

    That finally brings me to my main point: even though I see plenty of systems where the codebase is very challenging to work with and puts the system at risk, I don’t think that any of those teams were necessarily incompetent, indifferent to doing good work, or lacking an organized theory about how the code should be structured or even what the architecture should be. Moreover, I can’t say that I’ve seen a true, classic ball of mud in a couple of decades.

    Instead, I would say that the systems I’ve seen in the past decade that were widely known for hard-to-work-with code and poor performance all had a pretty cohesive coding approach and architecture. The real problem was that at some point the system or the database had grown enough to expose the flaws in that approach, or had simply grown too complex to be confined within the system’s prescriptive patterns, and the teams who owned those systems did not, or were not able to, adapt over time.

    To keep this post from rambling on too long, here are a couple of follow-up points:

    • I think that if you have technical ownership over any kind of large system, or are tasked with creating what’s likely going to grow to become a large system, you should adopt an attitude of constantly challenging the basic approach and at a minimum, being aware of when intended changes to the system are difficult because of the current architectural approach
    • Be moderate about insisting on consistency throughout your codebase, or at least between features. On my recent appearance on DotNetRocks, I veered into a sports metaphor about “raising the floor” vs “raising the ceiling” of the technical quality of a codebase. Technical leads who worry about consistency and prescriptive project templates are trying to “raise the floor” on code quality — and that works to a point. On the other hand, if you empower a development team to adapt or change their technical approach over time, or even just for new subsystems, and the team has the skillset to do so, you can “raise the ceiling” on technical quality. I have found that one of the main contributors to bad system code is rigid adherence to some kind of prescriptive approach that just doesn’t scale up to the more complicated use cases in a big system.
    • If you follow me or have ever stumbled into any of the many discussions about the Critter Stack, you’ll know that I very strongly believe in reducing code ceremony. For me this means forsaking too many abstractions over persistence, reducing layering, favoring a vertical slice architecture, and honestly, letting in some “magic” through conventional approaches (a debate all by itself, of course). I think there’s a huge advantage in being able to easily reason about a codebase throughout a use case, from system inputs all the way down to the database. On the other side of that, complex layering strategies often add so many layers of code that teams cannot easily understand the cause and effect between system inputs and the actual outcomes. I think the number one cause of poor system performance is teams not being able to easily see how chatty a system becomes between its front end, server layer, and database. As an aside, I’ve seen OpenTelemetry tracing be a godsend for identifying performance bottlenecks in unnecessarily complicated code by showing you exactly how many queries a single web request is really making.
    • Just to hammer on the code ceremony angle yet again, I think the only truly reliable way to arrive at a good system that meets your company’s needs over time and is easy to change is iteration and adaptation. High ceremony coding approaches retard your ability to quickly iterate and adapt, and put more of an onus on teams to get things right upfront — which just isn’t consistently possible no matter how hard you try.

    Summary

    Anyway, to close out, I think that the vast majority of us really do care about doing a good job in our software development work, but we’re all quite capable of having ideas about how a system should be coded, structured, and architected that simply will not work out over time. The only real solution is empowered teams that constantly adapt as necessary, instead of letting a codebase get out of control in the first place.

    Wait, what’s that you ask? How do you work with your product owners to give you the space to do that? And that’s my cue to start my week long vacation!

    Good luck folks, and try to go a little easier on the “previous folks”. And that goes double for me.

    And look, I got through this whole post without ranting about how prescriptive Onion/Clean/Hexagonal/Ports and Adapters/iDesign approaches and all the cruft that the DDD community dares each other to build into systems are the root of all coding evil! Oops, never mind.