I normally write this out in January, but I’m feeling like now is a good time to get this out as some of it is in flight. So with plenty of feedback from the other Critter Stack Core team members and a lot of experience seeing where JasperFx Software clients have hit friction in the past couple years, here’s my current thinking about where the Critter Stack development goes for 2026.
As I’m sure you can guess, every time I’ve written this yearly post, it’s been absurdly off the mark of what actually gets done through the year.
Critter Watch
For the love of all that’s good in this world, JasperFx Software needs to get an MVP out the door that’s usable for early adopters who are already clamoring for it. The “Critter Watch” tool, in a nutshell, should be able to tell you everything you need to know about how or why a Critter Stack application is unhealthy and then also give you the tools you need to heal your systems when anything does go wrong.
The MVP is still shaping up as:
A visualization and explanation of the configuration of your Critter Stack application
Performance metrics integration from both Marten and Wolverine
Event Store monitoring and management of projections and subscriptions
Wolverine node visualization and monitoring
Dead Letter Queue querying and management
Alerting – but I don’t have a huge amount of detail yet. I’m paying close attention to the issues JasperFx clients see in production applications though, and using that to inform what information Critter Watch will surface through its user interface and push notifications
This work is heavily in flight, and will hopefully accelerate over the holidays and January as JasperFx Software clients tend to be much quieter. I will be publishing a separate vision document soon for users to review.
The Entire “Critter Stack”
We’re standing up the new docs.jasperfx.net (Babu is already working on this) to hold documentation on supporting libraries and more tutorials and sample projects that cross Marten & Wolverine. This will finally add some documentation for Weasel (database utilities and migration support), our command line support, the stateful resource model, the code generation model, and everything to do with DevOps recipes.
Play the “Cold Start Optimization” epic across both Marten and Wolverine (and possibly Lamar). I don’t think that true AOT support is feasible, but maybe we can get a lot closer. Have an optimized start mode of some sort that eliminates all or at least most of:
Reflection usage in bootstrapping
Reflection usage at runtime, which today is really just occasional calls to object.GetType()
Assembly scanning of any kind, which we know can be very expensive for some systems with very large dependency trees (see the sketch after this list).
Increased and improved integration with EF Core across the stack
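As a side note on that last list, you can already blunt some of the assembly scanning cost today by opting out of Wolverine's conventional handler discovery and registering handler types explicitly. Here's a minimal sketch with a hypothetical MyMessageHandler type; check the Wolverine handler discovery documentation for the exact API:
builder.Host.UseWolverine(opts =>
{
    // Opt out of conventional assembly scanning for message handlers...
    opts.Discovery.DisableConventionalDiscovery()
        // ...and explicitly register handler types instead
        .IncludeType<MyMessageHandler>();
});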
Marten
The biggest set of complaints I’m hearing lately is all around views that span multiple entity types, or projections involving multiple stream types or multiple entity types. I also got some feedback from multiple past clients about the limitations of Marten as a data source underneath UI grids, which isn’t particularly new feedback. In general, there also appears to be a massive opportunity to improve Marten’s usability for many users by having more robust support in the box for projecting event data to flat, denormalized tables.
I think I’d like to prioritize a series of work in 2026 to alleviate the complicated view problem:
The “Composite Projections” Epic, where you might use the build products of upstream projections to create multi-stream projection views. I’ve gotten positive feedback from a couple of JasperFx clients about this, and it’s also a big opportunity to ratchet up the throughput and scalability of the Async Daemon by making fewer database requests
Revisit GroupJoin in the LINQ support, even though that’s going to be absolutely miserable to build. GroupJoin() might end up being a much easier usage than all our Include() functionality (see the sketch after this list).
A first class model to project Marten event data with EF Core. In this proposed model, you’d use an EF Core DbContext to do all the actual writes to a database.
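To make the GroupJoin() idea above concrete, here's a minimal LINQ-to-objects sketch of the query shape in question, using hypothetical Item and Comment types. The hope would be for Marten's LINQ provider to translate something like this into a single database query:
public record Item(Guid Id, string Name);
public record Comment(Guid Id, Guid ItemId, string Text);
public record ItemView(Item Item, List<Comment> Comments);

public static class ItemViewSample
{
    // "Each item joined to all of its comments," expressed with
    // GroupJoin() instead of Marten's Include() mechanics
    public static List<ItemView> BuildViews(List<Item> items, List<Comment> comments)
    {
        return items.GroupJoin(
                comments,
                item => item.Id,
                comment => comment.ItemId,
                (item, matches) => new ItemView(item, matches.ToList()))
            .ToList();
    }
}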
Other than that, some other ideas that have kicked around for a while are:
Improve the documentation and sample projects, especially around the usage of projections
Take a better look at the full text search features in Marten
Finally support the PostGIS extension in Marten. I think that could be something flashy and quick to build, but I’d strongly prefer to do this in the context of an actual client use case.
Continue to improve our story around multi-stream operations. I’m not enthusiastic about “Dynamic Consistency Boundary” (DCB) in regards to Marten though, so I’m not sure what this actually means yet. This might end up centering much more on the integration with Wolverine’s “aggregate handler workflow,” which is already perfectly happy to support strong consistency models even with operations that touch more than one event stream.
Wolverine
Wolverine is by far and away the busiest part of the Critter Stack in terms of active development right now, but I think that slows down soon. To be honest, most work at this point is us reacting tactically to JasperFx client or user needs. In terms of general, strategic themes, I think that 2026 will involve:
In conjunction with “CritterWatch”, improving Wolverine’s management story around dead letter queueing
I would love to expand Wolverine’s database support beyond “just” SQL Server and PostgreSQL
Improving the Kafka integration. That’s not our most widely used messaging broker, but that seems to be the leading source of enhancement requests right now
New Critters?
We’ve done a lot of preliminary work to potentially build new Critter Stack event store alternatives based on different database engines. I’ve always believed that SQL Server would be the logical next database engine, but we’ve gotten fewer and fewer requests for this as PostgreSQL has become a much more popular database choice in the .NET ecosystem.
I’m not sure this will be a high priority in 2026, but you never know…
I was helping a new JasperFx Software client this week to best integrate a Domain Events strategy into their new Wolverine codebase. This client wanted to use the common model of an EF Core DbContext harvesting domain events raised by different entities and relaying those to Wolverine messaging with proper Wolverine transactional outbox support for system durability. As part of that assistance — and also to have some content for other Wolverine users trying the same thing later — I promised to write a blog post showing how I’d do this kind of integration myself with Wolverine and EF Core, or at least consider a few options. To head off this usage problem for other users more permanently, I went into mad scientist mode this evening and rolled out a new Wolverine 5.6 with some important improvements that make this Domain Events pattern much easier to use in combination with EF Core.
Let’s start with some context about the general kind of approach I’m referring to with…
// Base class that establishes the pattern for publishing
// domain events within an entity
public abstract class Entity : IEntity
{
[NotMapped]
private readonly ConcurrentQueue<IDomainEvent> _domainEvents = new ConcurrentQueue<IDomainEvent>();
[NotMapped]
public IProducerConsumerCollection<IDomainEvent> DomainEvents => _domainEvents;
protected void PublishEvent(IDomainEvent @event)
{
_domainEvents.Enqueue(@event);
}
protected Guid NewIdGuid()
{
return MassTransit.NewId.NextGuid();
}
}
public class BacklogItem : Entity
{
public Guid Id { get; private set; }
[MaxLength(255)]
public string Description { get; private set; }
public virtual Sprint Sprint { get; private set; }
public DateTime CreatedAtUtc { get; private set; } = DateTime.UtcNow;
private BacklogItem() { }
public BacklogItem(string desc)
{
this.Id = NewIdGuid();
this.Description = desc;
}
public void CommitTo(Sprint s)
{
this.Sprint = s;
this.PublishEvent(new BacklogItemCommitted(this, s));
}
}
Note the CommitTo() method that publishes a BacklogItemCommitted event. In the referenced post, that event is published via MediatR through a customization of an EF Core DbContext like this (with some comments that I added):
public override async Task<int> SaveChangesAsync(CancellationToken cancellationToken = default(CancellationToken))
{
await _preSaveChanges();
var res = await base.SaveChangesAsync(cancellationToken);
return res;
}
private async Task _preSaveChanges()
{
await _dispatchDomainEvents();
}
private async Task _dispatchDomainEvents()
{
// Find any entity objects that were changed in any way
// by the current DbContext, and relay them to MediatR
var domainEventEntities = ChangeTracker.Entries<IEntity>()
.Select(po => po.Entity)
.Where(po => po.DomainEvents.Any())
.ToArray();
foreach (var entity in domainEventEntities)
{
// _dispatcher was an abstraction in his post
// that was a light wrapper around MediatR
IDomainEvent dev;
while (entity.DomainEvents.TryTake(out dev))
await _dispatcher.Dispatch(dev);
}
}
The goal of this approach is to make DDD style entity types the entry point and governing “decider” of all business behavior and workflow, and to give these domain model types a way to publish event messages to the rest of the system for side effects outside of the entity’s own state. For example, maybe the backlog system has to publish a message to a Slack room about the backlog item being added to the sprint. You sure as hell don’t want your domain entity to have to know about the infrastructure you use to talk to Slack or web services or whatever.
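To ground that point, the infrastructure-facing side effect would live in its own message handler, well away from the entity. A quick sketch with a hypothetical ISlackClient abstraction, assuming the event message carries the ids it needs:
public static class BacklogItemCommittedHandler
{
    // The entity only raised the event; this handler owns the
    // Slack infrastructure concern
    public static Task Handle(BacklogItemCommitted @event, ISlackClient slack)
    {
        return slack.PostMessageAsync(
            "#backlog",
            $"Backlog item {@event.BacklogItemId} was committed to sprint {@event.SprintId}");
    }
}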
Mechanically, I’ve seen this typically done with some kind of Entity base class that either exposes a collection of published domain events like the sample above, or puts some kind of interface like this directly into the Entity objects:
// Just assume that this little abstraction
// eventually relays the event messages to Wolverine
// or whatever messaging tool you're using
public interface IEventPublisher
{
void Publish<T>(T @event);
}
// Using a Nullo just so you don't have potential
// NullReferenceExceptions
public class NulloEventPublisher : IEventPublisher
{
public void Publish<T>(T @event)
{
// Do nothing.
}
}
public abstract class Entity
{
public IEventPublisher Publisher { get; set; } = new NulloEventPublisher();
}
public class BacklogItem : Entity
{
public Guid Id { get; private set; } = Guid.CreateVersion7();
public string Description { get; private set; }
// ZOMG, I forgot how annoying ORMs are. Use a document database
// and stop worrying about making things virtual just for lazy loading
public virtual Sprint Sprint { get; private set; }
public void CommitTo(Sprint sprint)
{
Sprint = sprint;
Publisher.Publish(new BacklogItemCommitted(Id, sprint.Id));
}
}
In the approach of using the abstraction directly inside of your entity classes, you incur the extra overhead of connecting the Entity objects loaded out of EF Core with the implementation of your IEventPublisher interface at runtime. I’ll do a few thought experiments later in this post and try out a couple different alternatives.
Before going back to EF Core integration ideas, let me deviate into…
Idiomatic Critter Stack Usage
Forget EF Core for a second; let’s examine a possible usage with the full “Critter Stack” and use Marten for Event Sourcing instead. In this case, a command handler to add a backlog item to a sprint could look something like this (folks, I didn’t spend much time thinking about how a backlog system would really be built here):
public record BacklogItemCommitted(Guid SprintId);
public record CommitToSprint(Guid BacklogItemId, Guid SprintId);
// This is utilizing Wolverine's "Aggregate Handler Workflow"
// which is the Critter Stack's flavor of the "Decider" pattern
public static class CommitToSprintHandler
{
public static Events Handle(
// The actual command
CommitToSprint command,
// Current state of the back log item,
// and we may decide to make the commitment here
[WriteAggregate] BacklogItem item,
// Assuming that Sprint is event sourced,
// this is just a read only view of that stream
[ReadAggregate] Sprint sprint)
{
// Use the item & sprint to "decide" if
// the system can proceed with the commitment
return [new BacklogItemCommitted(command.SprintId)];
}
}
In the code above, we’re appending the BacklogItemCommitted event returned from the method to Marten. If you need to carry out side effects outside the scope of this handler using that event as a message input, you have a couple of options to have Wolverine relay it through any of its messaging: event forwarding (faster, but unordered) or event subscriptions (strictly ordered, but that always means slower).
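If memory serves, wiring up the first of those options looks roughly like the following. Treat this as a sketch, especially the subscription variant in the trailing comment, and check the Wolverine.Marten documentation for exact signatures:
opts.Services.AddMarten(m =>
    {
        m.Connection(connectionString);
    })
    .IntegrateWithWolverine()
    // "Fast" event forwarding: newly appended events are published
    // as Wolverine messages when SaveChangesAsync() succeeds, with
    // no ordering guarantees
    .EventForwardingToWolverine();

// The strictly ordered alternative runs through Marten's async daemon
// as a subscription, with something like:
// .PublishEventsToWolverine("BacklogEvents");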
I should also say that if the events returned from the function above are also being forwarded as messages and not just being appended to the Marten event store, that messaging is completely integrated with Wolverine’s transactional outbox support. That’s a key differentiation all by itself from a similar MediatR based approach that doesn’t come with outbox support.
That’s it, that’s the whole handler, but here are some things I would want you to take away from that code sample above:
Yes, the business logic is embedded directly in the handler method instead of being buried in the BacklogItem or Sprint aggregates. We are very purposely going down a Functional Programming (adjacent? curious?) approach where the logic is primarily in pure “Decider” functions
I think the code above clearly shows the relationship between the system input (the CommitToSprint command message) and the potential side effects and changes in state of the system. This relative ease of reasoning about the code is of the utmost importance for system maintainability. We can look at the handler code and know that executing that message will potentially lead to events or event messages being published. I’m going to hit this point again from some of the other potential approaches because I think this is a vital point.
Testability of the business logic is easy with the pure function approach
There are no marker interfaces, Entity base classes, or jumping through layers. There’s no repository or factory
So enough of that, let’s start with some possible alternatives for Wolverine integration of domain events from domain entity objects with EF Core.
Relay Events from Your Entity Subclass to Wolverine
Switching back to EF Core integration, let’s look at a possible approach to teach Wolverine how to scrape domain events for publishing from your own custom Entity layer supertype like this one that we’ll put behind our BacklogItem type:
// Of course, if you're into DDD, you'll probably
// use many more marker interfaces than I do here,
// but you do you and I'll do me in throwaway sample code
public abstract class Entity
{
public List<object> Events { get; } = new();
public void Publish(object @event)
{
Events.Add(@event);
}
}
public class BacklogItem : Entity
{
public Guid Id { get; private set; }
public string Description { get; private set; }
public virtual Sprint Sprint { get; private set; }
public DateTime CreatedAtUtc { get; private set; } = DateTime.UtcNow;
public void CommitTo(Sprint sprint)
{
Sprint = sprint;
Publish(new BacklogItemCommitted(Id, sprint.Id));
}
}
Let’s utilize this a little bit within a Wolverine handler, first with explicit code:
public static class CommitToSprintHandler
{
public static async Task HandleAsync(
CommitToSprint command,
ItemsDbContext dbContext)
{
var item = await dbContext.BacklogItems.FindAsync(command.BacklogItemId);
var sprint = await dbContext.Sprints.FindAsync(command.SprintId);
// This method would cause an event to be published within
// the BacklogItem object here that we need to gather up and
// relay to Wolverine later
item.CommitTo(sprint);
// Wolverine's transactional middleware handles
// everything around SaveChangesAsync() and transactions
}
}
And here’s the same handler again, this time written declaratively with Wolverine’s [Entity] attribute:
public static class CommitToSprintHandler
{
public static IStorageAction<BacklogItem> Handle(
CommitToSprint command,
// There's a naming convention here about how
// Wolverine "knows" the id for the BacklogItem
// from the incoming command
[Entity] BacklogItem item,
[Entity] Sprint sprint
)
{
// This method would cause an event to be published within
// the BacklogItem object here that we need to gather up and
// relay to Wolverine later
item.CommitTo(sprint);
// This return value is what "tells" Wolverine to put transactional
// middleware around the handler. Just taking in the right DbContext
// type as a dependency would work just as well if you don't like
// the Wolverine magic
return Storage.Update(item);
}
}
Now, let’s add some Wolverine configuration to just make this pattern work:
builder.Host.UseWolverine(opts =>
{
// Setting up Sql Server-backed message storage
// This requires a reference to Wolverine.SqlServer
opts.PersistMessagesWithSqlServer(connectionString, "wolverine");
// Set up Entity Framework Core as the support
// for Wolverine's transactional middleware
opts.UseEntityFrameworkCoreTransactions();
// THIS IS A NEW API IN Wolverine 5.6!
opts.PublishDomainEventsFromEntityFrameworkCore<Entity>(x => x.Events);
// Enrolling all local queues into the
// durable inbox/outbox processing
opts.Policies.UseDurableLocalQueues();
});
In the Wolverine configuration above, the EF Core transactional middleware now “knows” how to scrape out possible domain events from the active DbContext.ChangeTracker and publish them through Wolverine. Moreover, the EF Core transactional middleware is doing all the operation ordering for you so that the events are enqueued as outgoing messages as part of the transaction and potentially persisted to the transactional inbox or outbox (depending on configuration) before the transaction is committed.
To make this as clear as possible, this approach is completely reliant on the EF Core transactional middleware.
Oh, and note that this domain event “scraping” is also supported and tested with the IDbContextOutbox<T> service if you want to use it in application code outside of Wolverine message handlers or HTTP endpoints.
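As a rough sketch of that usage outside of a handler, with a hypothetical ItemsDbContext, Item entity, and ItemCreated message (and do check the docs for the exact method names):
public class CreateItemService(IDbContextOutbox<ItemsDbContext> outbox)
{
    public async Task CreateAsync(string name)
    {
        var item = new Item { Name = name };
        outbox.DbContext.Items.Add(item);

        // Messages published here, plus domain events scraped from the
        // change tracker, are enlisted in Wolverine's outbox and only
        // flushed after the EF Core transaction commits
        await outbox.PublishAsync(new ItemCreated(item.Id));
        await outbox.SaveChangesAndFlushMessagesAsync();
    }
}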
In the future, this approach could also support the thread-safe collection approach that the sample from the first section used, but I’m dubious that’s necessary.
If I were building a system that embeds domain event publishing directly in domain model entity classes, I would prefer this approach. But, let’s talk about another option that will not require any changes to Wolverine…
Relay Events from Entity to Wolverine Cascading Messages
In this approach, which I’m granting that some people won’t like at all, we’ll simply pipe the event messages from the domain entity right to Wolverine and utilize Wolverine’s cascading message feature.
This time I’m going to change the BacklogItem entity class to something like this:
public class BacklogItem
{
public Guid Id { get; private set; }
public string Description { get; private set; }
public virtual Sprint Sprint { get; private set; }
public DateTime CreatedAtUtc { get; private set; } = DateTime.UtcNow;
// The exact return type isn't hugely important here
public object[] CommitTo(Sprint sprint)
{
Sprint = sprint;
return [new BacklogItemCommitted(Id, sprint.Id)];
}
}
With the handler signature:
public static class CommitToSprintHandler
{
public static object[] Handle(
CommitToSprint command,
// There's a naming convention here about how
// Wolverine "knows" the id for the BacklogItem
// from the incoming command
[Entity] BacklogItem item,
[Entity] Sprint sprint
)
{
return item.CommitTo(sprint);
}
}
The approach above lets you make the handler a single pure function, which is always great for unit testing; eliminates the need to do any customization of the DbContext type; makes it unnecessary to bother with any kind of IEventPublisher interface; and lets you keep the logic for what event messages should be raised completely in your domain model entity types.
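To make the unit testing claim concrete, a test of this handler can be a plain function call. A minimal sketch with Shouldly, assuming Sprint and BacklogItem can be constructed directly in test code:
public static void commit_to_sprint_raises_the_committed_event()
{
    var item = new BacklogItem();
    var sprint = new Sprint();

    // Pure function: no mocks, no container, no database
    var messages = CommitToSprintHandler.Handle(
        new CommitToSprint(item.Id, sprint.Id), item, sprint);

    messages.ShouldHaveSingleItem()
        .ShouldBeOfType<BacklogItemCommitted>();
}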
I’d also argue that this approach makes it clearer to later developers that “hey, additional messages may be published as part of handling the CommitToSprint command,” and I think that’s invaluable. I’ll harp on this more later, but I think the traditional, MediatR-flavored approach to domain events from the first example at the top makes application code harder to reason about and therefore more buggy over time.
Embedding IEventPublisher into the Entities
Lastly, let’s move to what I think is my least favorite approach, one that I will from this moment on be recommending against for any JasperFx clients, but that is now completely supported by Wolverine 5.6+! Let’s use an IEventPublisher interface like this:
// Just assume that this little abstraction
// eventually relays the event messages to Wolverine
// or whatever messaging tool you're using
public interface IEventPublisher
{
void Publish<T>(T @event) where T : IDomainEvent;
}
// Using a Nullo just so you don't have potential
// NullReferenceExceptions
public class NulloEventPublisher : IEventPublisher
{
public void Publish<T>(T @event) where T : IDomainEvent
{
// Do nothing.
}
}
public abstract class Entity
{
public IEventPublisher Publisher { get; set; } = new NulloEventPublisher();
}
public class BacklogItem : Entity
{
public Guid Id { get; private set; } = Guid.CreateVersion7();
public string Description { get; private set; }
// ZOMG, I forgot how annoying ORMs are. Use a document database
// and stop worrying about making things virtual just for lazy loading
public virtual Sprint Sprint { get; private set; }
public void CommitTo(Sprint sprint)
{
Sprint = sprint;
Publisher.Publish(new BacklogItemCommitted(Id, sprint.Id));
}
}
Now, on to a Wolverine implementation for this pattern. You’ll need to do just a couple things. First, add this line of configuration to Wolverine, and note there are no generic arguments here:
// This will set you up to scrape out domain events in the
// EF Core transactional middleware using a special service
// I'm just about to explain
opts.PublishDomainEventsFromEntityFrameworkCore();
Now, build a real implementation of that IEventPublisher interface above:
public class EventPublisher(OutgoingDomainEvents Events) : IEventPublisher
{
public void Publish<T>(T e) where T : IDomainEvent
{
Events.Add(e);
}
}
OutgoingDomainEvents is a service from the WolverineFx.EntityFrameworkCore NuGet package that is registered as Scoped by the usage of the EF Core transactional middleware. Next, register your custom IEventPublisher with the Scoped lifecycle:
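// A minimal sketch of that registration inside UseWolverine();
// EventPublisher picks up the scoped OutgoingDomainEvents from DI
opts.Services.AddScoped<IEventPublisher, EventPublisher>();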
How do you wire up IEventPublisher to the domain entities getting loaded out of your EF Core DbContext? Frankly, I don’t want to know. Maybe a repository abstraction around your DbContext types? Dunno. I hate that kind of thing in code, but I perfectly trust *you* to do that and to not make me see that code.
What’s important is that within a message handler or HTTP endpoint, if you resolve the IEventPublisher through DI and use the EF Core transactional middleware, the domain events published to that interface will be piped correctly into Wolverine’s active messaging context.
Likewise, if you are using IDbContextOutbox<T>, the domain events published to IEventPublisher will be correctly piped to Wolverine if you:
Pull both IEventPublisher and IDbContextOutbox<T> from the same scoped service provider (nested container in Lamar / StructureMap parlance)
So yes, we’re doing some sleight of hand under the covers to keep your domain entities synchronous
Last note: in unit testing you might use a stand-in “Spy” like this:
public class RecordingEventPublisher : OutgoingMessages, IEventPublisher
{
public void Publish<T>(T @event) where T : IDomainEvent
{
Add(@event);
}
}
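And a quick sketch of that spy in action, assuming BacklogItemCommitted implements IDomainEvent and that Sprint can be constructed directly in test code:
public static void committing_an_item_publishes_the_event()
{
    var spy = new RecordingEventPublisher();
    var item = new BacklogItem { Publisher = spy };

    item.CommitTo(new Sprint());

    // OutgoingMessages is essentially a list, so we can assert
    // directly on what was captured
    spy.ShouldHaveSingleItem()
        .ShouldBeOfType<BacklogItemCommitted>();
}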
Summary
I have always hated this Domain Events pattern and much prefer the full “Critter Stack” approach with the Decider pattern and event sourcing. But Wolverine is picking up a lot more users who combine it with EF Core (and JasperFx deeply appreciates these customers!), and I know damn well that there will be more and more demand for this pattern as people with more traditional DDD backgrounds who are used to more DI-reliant tools transition to Wolverine. Now was an awfully good time to plug this gap.
If it were me, I would also prefer having an Entity just store published domain events on itself and depend on Wolverine “scraping” those events out of the DbContext change tracking so you don’t have to do any kind of gymnastics and extra layering to attach some kind of IEventPublisher to your Entity types.
Lastly, if you’re comparing this straight up to the MediatR approach, just keep in mind that this is not an apples to apples comparison, because Wolverine also needs to correctly utilize its transactional outbox for resiliency, which is a feature that MediatR does not provide.
Starting today, Babu Annamalai is taking a larger role at JasperFx Software (LLC) to help expand our support coverage to just about every time zone on the planet. Babu is a long time member of the Marten and now Critter Stack core team. In addition to some large contributions like the Partial API in Marten and smoothing out database migrations, he’s been responsible for most of our DevOps support and the documentation websites that help keep the Critter Stack moving forward.
A little more about Babu:
Babu has over 28 years of experience, excelling in technology and product management roles within renowned enterprise firms. His expertise lies in crafting cutting-edge products and solutions customised for the ever-evolving domain of investment management and research. Co-maintainer of Marten. Owns and manages .NET OSS libraries ReverseMarkdown.Net and MysticMind.PostgresEmbed. Drawing from his wealth of knowledge, he recently embarked on a thrilling entrepreneurial journey, establishing Radarleaf Technologies, providing top-notch consultancy and bespoke software development services.
My internal code name for one of the new features I’m describing is “multi-stage tracked sessions,” which somehow got me thinking of the ZZ Top song “Stages” and their Afterburner album, because that was the soundtrack for getting this work done this week. Not ZZ Top’s best stuff, but there are still some bangers on it, or at least *I* loved how it sounded on my Dad’s old phonograph player when I was a kid. For what it’s worth, my favorite ZZ Top albums cover to cover are Degüello and their La Futura comeback album.
I was heavily influenced by Extreme Programming in my early career and that’s made me have a very deep appreciation for the quality of “Testability” in the development tools I use and especially for the tools like Marten and Wolverine that I work on. I would say that one of the differentiators for Wolverine over other .NET messaging libraries and application frameworks is its heavy focus and support for automated testing of your application code.
The Critter Stack community released Marten 8.14 and Wolverine 5.1 today with some significant improvements to our testing support. These new features mostly originated from my work with JasperFx Software clients, which gives me a first-hand look at the kinds of challenges our users hit automating tests that involve multiple layers of asynchronous behavior.
Jumping into an example, let’s say that your system interacts with another service that estimates delivery costs for ordering items. At some point the system might reach out through a request/reply call in Wolverine to estimate an item delivery before making a purchase, like this code:
// This query message is normally sent to an external system through Wolverine
// messaging
public record EstimateDelivery(int ItemId, DateOnly Date, string PostalCode);
// This message type is a response from an external system
public record DeliveryInformation(TimeOnly DeliveryTime, decimal Cost);
public record MaybePurchaseItem(int ItemId, Guid LocationId, DateOnly Date, string PostalCode, decimal BudgetedCost);
public record MakePurchase(int ItemId, Guid LocationId, DateOnly Date);
public record PurchaseRejected(int ItemId, Guid LocationId, DateOnly Date);
public static class MaybePurchaseHandler
{
public static Task<DeliveryInformation> LoadAsync(
MaybePurchaseItem command,
IMessageBus bus,
CancellationToken cancellation)
{
var (itemId, _, date, postalCode, budget) = command;
var estimateDelivery = new EstimateDelivery(itemId, date, postalCode);
// Let's say this is doing a remote request and reply to another system
// through Wolverine messaging
return bus.InvokeAsync<DeliveryInformation>(estimateDelivery, cancellation);
}
public static object Handle(
MaybePurchaseItem command,
DeliveryInformation estimate)
{
if (estimate.Cost <= command.BudgetedCost)
{
return new MakePurchase(command.ItemId, command.LocationId, command.Date);
}
return new PurchaseRejected(command.ItemId, command.LocationId, command.Date);
}
}
And for a little more context, the EstimateDelivery message will always be sent to an external system in this configuration:
var builder = Host.CreateApplicationBuilder();
builder.UseWolverine(opts =>
{
opts
.UseRabbitMq(builder.Configuration.GetConnectionString("rabbit"))
.AutoProvision();
// Just showing that EstimateDelivery is handled by
// whatever system is on the other end of the "estimates" queue
opts.PublishMessage<EstimateDelivery>()
.ToRabbitQueue("estimates");
});
In testing scenarios, maybe the external system isn’t available at all, or it’s just much more challenging to run tests that also include the external system, or maybe you’d just like to write more isolated tests against your service’s behavior before even trying to integrate with the other system (my personal preference anyway). To that end we can now stub the remote handling like this:
public static async Task try_application(IHost host)
{
host.StubWolverineMessageHandling<EstimateDelivery, DeliveryInformation>(
query => new DeliveryInformation(new TimeOnly(17, 0), 1000));
var locationId = Guid.NewGuid();
var itemId = 111;
var expectedDate = new DateOnly(2025, 12, 1);
var postalCode = "78750";
var maybePurchaseItem = new MaybePurchaseItem(itemId, locationId, expectedDate, postalCode,
500);
var tracked =
await host.InvokeMessageAndWaitAsync(maybePurchaseItem);
// The estimated cost from the stub was more than we budgeted
// so this message should have been published
// This line is an assertion too that there was a single message
// of this type published as part of the message handling above
var rejected = tracked.Sent.SingleMessage<PurchaseRejected>();
rejected.ItemId.ShouldBe(itemId);
rejected.LocationId.ShouldBe(locationId);
}
After making this call:
host.StubWolverineMessageHandling<EstimateDelivery, DeliveryInformation>(
query => new DeliveryInformation(new TimeOnly(17, 0), 1000));
Calling this from our Wolverine application:
// Let's say this is doing a remote request and reply to another system
// through Wolverine messaging
return bus.InvokeAsync<DeliveryInformation>(estimateDelivery, cancellation);
will use the stubbed logic we registered. This lets you substitute fake behavior for external services that are difficult to use in tests.
For the next test, we can completely remove the stub behavior and revert back to the original configuration like this:
public static void revert_stub(IHost host)
{
// Selectively clear out the stub behavior for only one message
// type
host.WolverineStubs(stubs =>
{
stubs.Clear<EstimateDelivery>();
});
// Or just clear out all active Wolverine message handler
// stubs
host.ClearAllWolverineStubs();
}
There’s a bit more to the feature you can read about in our documentation, but hopefully you can see right away how this can be useful for effectively stubbing out the behavior of external systems through Wolverine in tests.
And yes, some older .NET messaging frameworks already had *this* feature and it’s been occasionally requested from Wolverine, so I’m happy to say we have this important and useful capability.
Forcing Marten’s Asynchronous Daemon to “Catch Up”
Marten has had the IDocumentStore.WaitForNonStaleProjectionDataAsync(timeout) API (see the documentation for an example) for quite a while now. It lets you pause a test while any running asynchronous projections or subscriptions catch up to wherever the event store “high water mark” was when you originally called the method. Hopefully, this lets ongoing background work proceed until the point where it’s safe to move on to the “Assert” part of your automated tests. As a convenience, this API is also available through extension methods on both IHost and IServiceProvider.
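In test code that typically looks like this minimal sketch, with the timeout being whatever your test environment can tolerate:
// After the "Act" part of the test appends events...
await theStore.WaitForNonStaleProjectionDataAsync(TimeSpan.FromSeconds(30));

// ...the asynchronous projections and subscriptions have now caught up
// to the high water mark, so it's safe to query their output in "Assert"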
We’ve recently invested time in this API to make it provide much more contextual information about what’s happening asynchronously if the “waiting” does not complete. Specifically, we’ve made the API throw an exception that embeds a table of where every asynchronous projection or subscription ended up compared to the event store’s “high water mark” (the highest sequential identifier assigned to a persisted event in the database). In this last release we made sure that the textual table also shows any projections or subscriptions that never recorded any progress with a sequence of “0” so you can see what did or didn’t happen. We have also changed the API to record any exceptions thrown by the asynchronous daemon (serialization errors? application errors from *your* projection code? database errors?) and have those exceptions piped out in the failure messages when the “WaitFor” API does not successfully complete.
Okay, with all of that out of the way, we also added a completely new alternative API for the asynchronous daemon that just forces the daemon to quickly process all outstanding events through every asynchronous projection or subscription right this second and surface any exceptions it encounters. We call this the “catch up” API:
using var daemon = await theStore.BuildProjectionDaemonAsync();
await daemon.CatchUpAsync(CancellationToken.None);
This mode is faster and hopefully more reliable than the WaitFor***** API because it happens inline and shortcuts a lot of the asynchronous polling and messaging within the normal daemon processing.
There’s also an IHost.CatchUpAsync() or IServiceProvider.CatchUpAsync() convenience method for test usage as well.
Multi-Stage Tracked Sessions
I’m obviously biased, but I’d say that Wolverine’s tracked session capability is a killer feature that sets Wolverine apart from other messaging tools in the .NET ecosystem, and it goes a long way toward making integration testing through Wolverine asynchronous messaging productive and effective.
But, what if you have a testing scenario where you:
Carry out some kind of action (an HTTP request invoked through Alba? publishing a message internally within your application?) that leads to messages being published in Wolverine that might in turn lead to even more messages getting published within your Wolverine system or other tracked systems
Along the way, handling one or more commands leads to events being appended to a Marten event store
That might sound a little bit contrived, but it reflects real world scenarios I’ve discussed with multiple JasperFx clients in just the past couple weeks. With their help and some input from the community, we came up with this new extension to Wolverine’s “tracked sessions” to also track and wait for work spawned by Marten. Consider this bit of code from the tests for this feature:
var tracked = await _host.TrackActivity()
// This new helper just resets the main Marten store
// Equivalent to calling IHost.ResetAllMartenDataAsync()
.ResetAllMartenDataFirst()
.PauseThenCatchUpOnMartenDaemonActivity(CatchUpMode.AndResumeNormally)
.InvokeMessageAndWaitAsync(new AppendLetters(id, ["AAAACCCCBDEEE", "ABCDECCC", "BBBA", "DDDAE"]));
To add some context, handling the AppendLetters command message appends events to a Marten stream and possibly cascades another Wolverine message that also appends events. At the same time, there are asynchronous projections and event subscriptions that will publish messages through Wolverine as they run. We can now make this kind of testing scenario much more feasible and hopefully reliable (async heavy tests are super prone to being blinking tests) through the usage of the PauseThenCatchUpOnMartenDaemonActivity() extension method from the Wolverine.Marten library.
In the bit of test code above, that API is:
Registering a “before” action to pause all async daemon activity before executing the “Act” part of the tracked session which in this case is calling IMessageBus.InvokeAsync() against an AppendLetters command
Registering a 2nd stage of the tracked session
When this tracked session is executed, the following sequence happens:
The tracked session calls Marten’s ResetAllMartenDataAsync() in the main DocumentStore for the application to effectively rewind the database state down to your defined initial state
IMessageBus.InvokeAsync(AppendLetters) is called as the actual “execution” of the tracked session
The tracked session is watching everything going on with Wolverine messaging and waits until all “cascaded” messages are complete — and that is recursive. Basically, the tracked session waits until all subsequent messaging activity in the Wolverine application is complete
The 2nd stage we registered means the tracked session calls Marten’s new “CatchUp” API to force all asynchronous projections and event subscriptions in the system to immediately process all persisted events. This also restarts the tracked session’s monitoring of any Wolverine messaging activity, so this stage will only complete when all detected Wolverine messaging activity is complete.
By using this new capability inside of the older tracked session feature, we’re able to effectively test from the original message input through any subsequent messages triggered by the original message through asynchronous Marten behavior caused by the original messages which might in turn publish yet more messages through Wolverine.
Long story short, this gives us a reliable way to know when the “Act” part of a test is actually complete and proceed to the “Assert” portion of a test. Moreover, this new feature also tries really hard to bring out some visibility into the asynchronous Marten behavior and the second stage messaging behavior in the case of test failures.
Summary
None of this is particularly easy conceptually, and it’s admittedly here because of relatively hard problems in test automation that you might eventually run into. Selfishly, I needed to get these new features into the hands of a client tomorrow and ran out of time to better document these new features, so you get this braindump blog post.
If it helps, I’m going to talk through these new capabilities a bit more in our next Critter Stack live stream tomorrow (Nov. 6th):
Just to set myself up with some pressure to perform, let me hype up a live stream on Wolverine I’m doing later this week!
I’m doing a live stream on Thursday afternoon (U.S. friendly this time) entitled Vertical Slices the Critter Stack Way based on a fun, meandering talk I did for Houston DNUG and an abbreviated version at Commit Your Code last month.
So, yes, it’s technically about the “Vertical Slice Architecture” in general and specifically with Marten and Wolverine, but more importantly, the special sauce in Wolverine that does more — in my opinion of course — than any other server side .NET application framework to simplify your code and improve testability. In the live stream, I’m going to discuss:
A little bit about how I think modern layered architecture approaches and “Ports and Adapters” style approaches can sometimes lead to poor results over time
The qualities of a code base that I think are most important (the ability to reason about the behavior of the code, testability of all sorts, ease of iteration, and modularity)
How Wolverine’s low-ceremony code improves outcomes and the qualities I listed above by reducing layering and shrinking your code into a much tighter vertical slice approach so you can actually see what your system does later on
Adopting Wolverine’s idiomatic “A-Frame Architecture” approach and “imperative shell, functional core” thinking to improve testability
A sampling of the ways that Wolverine can hugely simplify data access in simpler scenarios and how it can help you keep more complicated data access much closer to behavioral code so you can actually reason about the cause and effects between those two things. And all of that while happily letting you leverage every bit of power in whatever your database or data access tooling happens to be. Seriously, layering approaches and abstractions that obfuscate the database technologies and queries within your system are a very common source of poor system performance in Onion/Clean Architecture approaches.
Using Wolverine.HTTP as an alternative AspNetCore Endpoint model and why that’s simpler in the end than any kind of “Mediator” tooling inside of MVC Core or Minimal API
Wolverine’s adaptive approach to middleware
The full “Critter Stack” combination with Marten and how that leads to arguably the simplest and cleanest code for CQRS command handlers on the planet
Wolverine’s goodies for the majority of .NET devs using the venerable EF Core tooling as well
If you’ve never heard of Wolverine or haven’t really paid much attention to it yet, I’m most certainly inviting you to the live stream to give it a chance. If you’ve blown Wolverine off in the past as “yet another messaging tool in .NET,” come find out why that is most certainly not the full story because Wolverine will do much more for you within your application code than other, mere messaging frameworks in .NET or even any of the numerous “Mediator” tools floating around.
In the announcement for the Wolverine 5.0 release last week, I left out a pretty big set of improvements for modular monolith support, specifically in how Wolverine can now work with multiple databases from one service process.
And all of those features are supported for Marten, EF Core with either PostgreSQL or SQL Server, and RavenDb.
Back to the “modular monolith” approach: what I’m seeing folks do or want to do is some combination of:
Use multiple EF Core DbContext types that target the same database, but maybe with different schemas
Use Marten’s “ancillary or separated store” feature to divide the storage up for different modules against the same database
Wolverine 3/4 supported the previous two bullet points, but now Wolverine 5 will be able to support any combination of every possible option in the same process. That even includes the ability to:
Use multiple DbContext types that target completely different databases altogether
Mix and match with Marten ancillary stores that target completely different databases
Use RavenDb for some modules, even if others use PostgreSQL or SQL Server
Utilize either Marten’s built in multi-tenancy through a database per tenant or Wolverine’s managed EF Core multi-tenancy through a database per tenant
And now do that in one process while being able to support Wolverine’s transactional inbox, outbox, scheduled messages, and saga support for every single database that the application utilizes. And oh, yeah, from the perspective of the future CritterWatch, you’ll be able to use Wolverine’s dead letter management services against every possible database in the service.
Okay, this is the point where I do have to admit that the RavenDb support for the dead letter administration is lagging a little bit, but we’ll get that hole filled in soon.
Here’s an example from the tests:
var builder = Host.CreateApplicationBuilder();
var sqlserver1 = builder.Configuration.GetConnectionString("sqlserver1");
var sqlserver2 = builder.Configuration.GetConnectionString("sqlserver2");
var postgresql = builder.Configuration.GetConnectionString("postgresql");
builder.UseWolverine(opts =>
{
// This helps Wolverine "know" how to share inbox/outbox
// storage across logical module databases where they're
// sharing the same physical database but with different schemas
opts.Durability.MessageStorageSchemaName = "wolverine";
// This will be the "main" store that Wolverine will use
// for node storage
opts.Services.AddMarten(m =>
{
m.Connection(postgresql);
}).IntegrateWithWolverine();
// "An" EF Core module using Wolverine based inbox/outbox storage
opts.UseEntityFrameworkCoreTransactions();
opts.Services.AddDbContextWithWolverineIntegration<SampleDbContext>(x => x.UseSqlServer(sqlserver1));
// This is helping Wolverine out by telling it what database to use for inbox/outbox integration
// when using this DbContext type in handlers or HTTP endpoints
opts.PersistMessagesWithSqlServer(sqlserver1, role:MessageStoreRole.Ancillary).Enroll<SampleDbContext>();
// Another EF Core module
opts.Services.AddDbContextWithWolverineIntegration<ItemsDbContext>(x => x.UseSqlServer(sqlserver2));
opts.PersistMessagesWithSqlServer(sqlserver2, role:MessageStoreRole.Ancillary).Enroll<ItemsDbContext>();
// Yet another Marten backed module
opts.Services.AddMartenStore<IFirstStore>(m =>
{
m.Connection(postgresql);
m.DatabaseSchemaName = "first";
});
});
I’m certainly not saying that you *should* run out and build a system that has that many different persistence options in a single deployable service, but now you *can* with Wolverine. And folks have definitely wanted to build Wolverine systems that target multiple databases for different modules and still get every bit of Wolverine functionality for each database.
Summary
Part of the Wolverine 5.0 work was also Jeffry Gonzalez and I pushing on JasperFx’s forthcoming “CritterWatch” tool and looking for any kind of breaking changes in the Wolverine “publinternals” that might be necessary to support CritterWatch. The “let’s let you use all the database options at one time!” improvements I tried to show in the post were suggested by the work we are doing for dead letter message management in CritterWatch.
I shudder to think how creative folks are going to be with this mix and match ability, but it’s cool to have some bragging rights over these capabilities because I don’t think that any other .NET tool can match this.
That’s of course supposed to be a 1992 Ford Mustang GT with the 5.0L V8 that high school age me thought was the coolest car I could imagine ever owning (I most certainly never did, of course). Cue “Ice, Ice Baby” and sing “rolling, in my 5.0” in your head, because here we go…
Wolverine 5.0 went live on Nuget earlier today after about three months of pretty intensive development from *20* different contributors with easily that many more folks having contributed to discussions and GitHub issues that helped get us here. I’m just not going to be able to list everyone, so let me just thank the very supportive Wolverine community, the 19 other contributors, and the JasperFx clients who contributed to this release.
This release came closely on the heels of Wolverine 4.0 earlier this year, with the primary reasons for a new major version release being:
A big change in the internals as we replaced the venerable TPL DataFlow library with the System.Threading.Channels library everywhere that Wolverine uses in-memory queueing. We did this as a precursor to a hugely important new feature commissioned by a JasperFx Software client (who really needs that feature for their “scale out,” so it was definitely about time I got this out today).
Some breaking API changes in the “publinternals” of Wolverine to support “CritterWatch,” our long planned and, I promise, finally-in-real-development add-on tooling for Critter Stack observability and management
With that being said, the top line new changes to Wolverine that I’ll be trying to blog about next week are:
The new Partitioned Sequential Messaging feature is a potentially huge step forward for building a Wolverine system that can efficiently and resiliently handle concurrent access to sensitive resources.
For a partial list of significant, smaller improvements:
Wolverine can utilize Marten batch querying for its declarative data access, and that includes working with multiple Marten event streams in one logical operation. This is part of the Critter Stack’s response to the “Dynamic Consistency Boundary” idea from some of the commercial event sourcing tools
You can finally use strongly typed identifiers with the “aggregate handler workflow”
An overhaul of the dead letter queue administration services that was part of our ongoing work for CritterWatch
Optimistic concurrency support for EF Core backed Sagas from the community
Ability to target multiple Azure Service Bus namespaces from a single application, and improvements for using an Azure Service Bus namespace per tenant
Improvements to Rabbit MQ for advanced usage
What’s Next?
As happens basically every time, several features that were planned for 5.0 and some significant open issues didn’t make the 5.0 cut. The bigger effort to optimize the cold start time for both Marten and Wolverine will hopefully happen later this year. I think the next minor point release will target some open issues around Wolverine.HTTP (multi-part uploads, actual content negotiation) and the Kafka transport. I would like to take a longer look sometime at how the Critter Stack combination can better support operations that cross stream boundaries.
But in the meantime, I’m shifting to open Marten issues before hopefully spending a couple weeks trying to jump start CritterWatch development again.
I usually end these kinds of major release announcements with a link to Don’t Steal My Sunshine as an exhortation to hold off on reporting problems or asking for whatever didn’t make the release. After referring to “Ice, Ice Baby” in the preface to this post and probably getting that bass line stuck in your head, here’s the song you want to hear now anyway — and the pressure is something I feel much less of after getting this damn release out:
I was the guest speaker today on the .NET Data Community Standup, doing a talk on how the “Critter Stack” (Marten, Wolverine, and Weasel) supports a style of database migrations, and even configuration for messaging brokers, that greatly reduces development-time friction for more productive teams.
The general theme is “it should just work” so developers and testers can get their work done and even iterate on different approaches without having to spend much time fiddling with database or other infrastructure configuration.
And I also shared some hard lessons learned from previous OSS project failures that made the Critter Stack community so adamant that the default configurations “should just work.”
Little update since the last check-in on Wolverine 5.0. I think right now that Wolverine 5.0 hits by next Monday (October 6th). To be honest, besides documentation updates, the biggest work is just pushing more on the CritterWatch backend this week to see if that forces any breaking changes in the Wolverine internals. What’s in:
Big improvements and expansion to Wolverine’s interoperability story against NServiceBus, MassTransit, CloudEvents, and whatever custom interoperability folks need to do
A first class Redis messaging transport from the community
Modernization and upgrades to the GCP Pubsub transport
The ability to mix and match database storage with Wolverine for modular monoliths
A big batch of optimization for the Marten integration, including improvements for multi-stream operations as our response to the “Dynamic Consistency Boundary” idea from other tools
The utilization of System.Threading.Channels in place of the TPL DataFlow library
What’s unfortunately out:
Any effort to optimize the cold start times for Marten and Wolverine. Just a bandwidth problem, plus I think this can get done without breaking changes
And we’ll see:
Random improvements for Azure Service Bus and Kafka usage
HTTP improvements for content negotiation and multi-part uploads
Yet more improvements to the “aggregate handler workflow” with Marten to allow for broader strongly typed identifier usage
The items in the 3rd list don’t require any breaking changes, so could slide to Wolverine 5.1 if necessary.
All in all, I’d argue this turned out to be a big batch of improvements with very few breaking API changes and almost nothing that would impact the average user.
I’ll admit that I’d stopped paying attention quite a while ago and didn’t even realize Microsoft was still considering building out their own “Eventing Framework” until everybody and their little brother started posting a link to their announcement about forgoing this effort today.
Here are a few thoughts from me about this, and I think for about the first time ever, I’m disallowing comments on this one to just spit this out and be done with it.
I thought that what they were proposing, basically trying to make “Minimal API” for asynchronous messaging, was not going to be very successful in complex systems. I get that their approach might have led to a low learning curve for simple usage, and there’s some appeal to having a common programming model with web development, but man, I think that would have severely limited that tooling in terms of what it helped you do to deal with application complexity or testability compared to existing tools in this space.
Specifically, I think that the Microsoft tooling teams have a blind spot sometimes about testability design in their application frameworks
I think this is a technical area where .NET is actually very rich in options and there’s actually a lot of existing innovation across our ecosystem already (Wolverine, NServiceBus, MassTransit, AkkaDotNet, Rebus, Brighter, Microsoft’s own Dapr for crying out loud). I did not believe that the proposed tooling from Microsoft in this case did anything to improve the ecosystem except for the inevitable folks who just don’t want to have any dependency on .NET technology that is not from Microsoft
I’m continuously shocked anytime something like this bubbles up how a seemingly large part of the .NET community is outright hostile to non-Microsoft tooling in .NET
I will 100% admit that I was concerned about my own Wolverine project being severely harmed by the MS offering, while at the same time believing quite fervently that Wolverine would long remain a far superior technical solution. The reality is that Microsoft tooling tends to quickly take the oxygen out of the air for non-Microsoft tools regardless of relative quality or even suitability for real usage. You can absolutely compete with the Microsoft offerings on technical quality, but not in informational reach or community attention
If Microsoft had gone ahead with their tooling, I had every intention of being aggressive online to try to point out every possible area where Wolverine had advantages and I had no plans to just give up. My thought was to just lean in much, much harder to the greater Critter Stack as a full blown Event Sourcing solution where there is really nothing competitive to the Critter Stack in the rest of the .NET community (I said what I said) and certainly nothing from Microsoft themselves (yet)
I think it hurts the .NET ecosystem when Microsoft squelches community innovation and this is something I’ve never liked about the greater .NET community’s fixation on having official, Microsoft approved tooling.
One thing the Microsoft folks tried to sell people like me who lead asynchronous messaging projects is that they (MS) were really good at application frameworks, and we could all take dependencies on a new set of medium-level messaging abstractions and core libraries for messaging. I wonder if what they meant is what has now become the various Aspire plugins for Rabbit MQ or Azure Service Bus. I was also extremely dubious about all of that.
As someone else pointed out, do you really want one tool trying to be all things to all people? That’s a recipe for a bloated, unmaintainable tool
I think the Microsoft team was a bit naive about what they would have to build out and how many feature requests they would have gotten from folks wanting to ditch very mature tools like MassTransit. I really don’t believe that Microsoft would have resisted the demands from some elements of the community to grow the new things into something able to handle more complex requirements
I don’t know what to say about the people who flipped their lids over the MassTransit and MediatR commercialization plans. I think folks were drastically underestimating the value of those tools, the overhead in supporting those tools over time, and in complete denial about the practicality of rolling your own one off tools.
The idea that Microsoft is an infallible maintainer of their development tools is bonkers