Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we making the “Critter Stack” tools viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to helping you succeed with your software development projects on a consulting basis, whether you’re using any part of the Critter Stack or some completely different .NET server side tooling.
I checked this morning, and Marten’s original 1.0 release was in September of 2016. Since then we as a community have been able to knock down several big obstacles to user adoption, but one pernicious concern of new users was the ability to scale the asynchronous projection support to very large loads as Marten today only supports a “hot/cold” model where all projections run in the same active process.
Two developments are going to finally change that in the next couple weeks. First off, the next Marten 7 beta is going to have a huge chunk of work on Marten’s “async daemon” process that potentially distributes work across multiple nodes at runtime.
By (implied) request:
We very much would like to know more about this new 🔥 hotness…
"If targeting a single database, Marten possibly runs projections on separate nodes"
If you are targeting a single database, Marten will check for potential ownership of each projection independently. We’re doing this by using PostgreSQL advisory locks to determine ownership on a projection by projection basis. At runtime, we’re using a little bit of randomness so that if you happen to start up multiple running application nodes at the same time, the different nodes will start checking for that ownership at random times and do so in a random order of the various projections. It’s not foolproof by any means, but this will allow Marten to potentially spread out the projections to different running application instances.
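Mechanically, a projection ownership check over an advisory lock can be sketched with Npgsql like this. The lock id scheme and the class and method names below are illustrative assumptions for this post, not Marten’s actual internals:

```csharp
using Npgsql;

public static class ProjectionOwnership
{
    // Try to claim ownership of one projection for this node.
    // pg_try_advisory_lock() returns immediately with true/false
    // instead of blocking, so a node that loses the race just
    // moves on to the next projection in its randomized order
    public static async Task<bool> TryClaimAsync(
        NpgsqlConnection conn,
        long projectionLockId,
        CancellationToken token)
    {
        await using var cmd = new NpgsqlCommand(
            "SELECT pg_try_advisory_lock(@id);", conn);
        cmd.Parameters.AddWithValue("id", projectionLockId);

        return (bool)(await cmd.ExecuteScalarAsync(token))!;
    }
}
```

Because the advisory lock is session-scoped, ownership is released automatically if the owning node’s connection goes away, which is what lets another node pick up the projection later.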
If you are using multi-tenancy through separate databases, Marten’s async daemon will similarly do an ownership check by database, and keep all the projections for a single database running on the same node. This is done with the theory that this should potentially reduce the number of database connections used overall by your system. As in the previous bullet for a single tenant, there’s some randomness introduced so each application instance doesn’t try to get ownership of the same databases at the same time and potentially cause deadlock situations. Likewise, Marten is randomizing the order in which it attempts to check the ownership of different databases, so there’s a chance this strategy will distribute work across multiple nodes.
There’s some other improvements so far (with hopefully much more to follow) that we hope will increase the throughput of asynchronous projections, especially for projection rebuilds.
I should also mention that a JasperFx Software client has engaged us to improve Marten & Wolverine’s support for dynamic utilization of per-tenant databases, where both Marten & Wolverine are able to discover new tenant databases at runtime and activate all the necessary support agents for the new databases. That dynamic tenant work in part led to the async projection work I described above.
Let’s go even farther…
I’ll personally be very heads down this week on some very long planned work (sponsored by a JasperFx Software client!!!) for a “Critter Stack Pro” tool set to extend Marten’s event store to much larger data sets and throughput. This will be the first of a suite of commercial add-on tools to the “Critter Stack”, with the initial emphasis being:
The ability to more effectively distribute asynchronous projection work across the running instances of the application using a software-based “agent distribution” already built into Wolverine. We’ll have some simple rules for how projections are distributed upfront, but I’m hoping to evolve into adaptive rules later that can adjust the distribution based on measured load and performance metrics
Zero-downtime deployments of Marten projection changes
Blue/green deployments of revisioned Marten projections and projected aggregates, meaning that you will be able to deploy a new version of a Marten projection in some running instances of a server applications while the older version is still functional in other running instances
I won’t do anything silly like put a timeframe around this, but the “Critter Stack Pro” will also include a user interface management console to watch and control the projection functionality.
The most popular post on my blog last year by far was The Lowly Strategy Pattern is Still Useful, a little rewind on the very basic strategy pattern. I occasionally make a pledge to myself to try to write more about development fundamentals like that, but I’m unusually busy because it turns out that starting a new company is time consuming (who knew?). One topic I do want to hit is basic design patterns, so I’ll be occasionally spitting out these little posts when I can pull out a decent example from something from Marten or Wolverine.
According to Wikipedia, the “State Pattern” is:
The state pattern is a behavioral software design pattern that allows an object to alter its behavior when its internal state changes.
I do still have my old hard copy of the real GoF book on my book shelf and don’t really feel ashamed about that in any way.
Let me just jump right into an example from the ongoing, soon to be released (I swear) Marten 7.0 effort. In Marten 7.0, when you issue a query via Marten’s IDocumentSession service like in this MVC controller:
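(The controller and document type below are made up for illustration; any query or LoadAsync() call through the session behaves the same way.)

```csharp
using Marten;
using Microsoft.AspNetCore.Mvc;

// A hypothetical document type just for this example
public record Incident(Guid Id, string Description);

public class IncidentController : ControllerBase
{
    private readonly IQuerySession _session;

    public IncidentController(IQuerySession session) => _session = session;

    // Marten opens the connection just in time for this query
    // and closes it again as soon as the data has been read
    [HttpGet("/incidents/{id}")]
    public Task<Incident?> Get(Guid id, CancellationToken token)
        => _session.LoadAsync<Incident>(id, token);
}
```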
Marten is opening a database connection just in time, and immediately closing that connection as soon as the resulting ADO.NET DbDataReader is closed or disposed. We believe that this “new normal” behavior will be more efficient in most usages, and will especially help folks integrate Marten into Hot Chocolate for GraphQL queries (that’s a longer post for when Marten 7 is officially released).
However, some folks will sometimes need to make Marten do one of a couple things:
Use the Marten session’s underlying database connection to read or write to PostgreSQL outside of Marten
Combine other tools like Dapper with Marten in the same shared database transaction
To that end, you can explicitly start a new database transaction for a Marten session like so:
public static async Task DoStuffInTransaction(
    IDocumentSession session,
    CancellationToken token)
{
    // This makes the session open a new database connection
    // and start a new transaction
    await session.BeginTransactionAsync(token);

    // do a mix of reads and write operations

    // Commit the whole unit of work and
    // any operations
    await session.SaveChangesAsync(token);
}
As soon as that call to IDocumentSession.BeginTransactionAsync() is made, the behavior of the session changes for every single subsequent operation. Instead of:
Opening a new connection just in time
Executing
Closing that connection as soon as possible
The session is now:
Attaching the generated command to the session’s currently open connection and transaction
Executing
Inside the internals of the DocumentSession, you could simply do an if/then check on the current state of the session to see if it’s currently enrolled or not in a transaction or using an open connection, but that’s a lot of repetitive branching logic that would clutter up our code. Instead, we’re using the old “State Pattern” with a common interface like this:
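The interface itself isn’t reproduced in this excerpt, but its shape is roughly this. The member names below are an approximation of Marten’s actual IConnectionLifetime, not its exact signatures:

```csharp
using System.Data.Common;
using Npgsql;

// The "state" interface: one implementation per connection/transaction
// state that a session can be in
public interface IConnectionLifetime : IAsyncDisposable
{
    // Execute a command under whatever connection and transaction
    // policy the current state dictates
    Task<int> ExecuteAsync(NpgsqlCommand command, CancellationToken token);

    // Same idea, but for queries that return a data reader
    Task<DbDataReader> ExecuteReaderAsync(NpgsqlCommand command, CancellationToken token);
}
```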
One way or another, every operation inside of IDocumentSession that calls through to the database utilizes that interface above — and there’s a lot of different operations!
By default now, each IDocumentSession is created with a reference to our default flavor of IConnectionLifetime called AutoClosingLifetime. But when you call BeginTransactionAsync(), the session is creating an all new object with all new behavior for the newly started connection and transaction like this:
public async ValueTask BeginTransactionAsync(CancellationToken token)
{
    if (_connection is IAlwaysConnectedLifetime lifetime)
    {
        await lifetime.BeginTransactionAsync(token).ConfigureAwait(false);
    }
    else if (_connection is ITransactionStarter starter)
    {
        var tx = await starter.StartAsync(token).ConfigureAwait(false);
        await tx.BeginTransactionAsync(token).ConfigureAwait(false);

        // As you can see below, the session is completely swapping out its
        // IConnectionLifetime reference so that every subsequent operation
        // will now get the "already connected and in a transaction" state
        // logic
        _connection = tx;
    }
    else
    {
        throw new InvalidOperationException(
            $"The current lifetime {_connection} is neither a {nameof(IAlwaysConnectedLifetime)} nor a {nameof(ITransactionStarter)}");
    }
}
And now, just to make this a little more concrete, here’s the logic of the AutoClosingLifetime when Marten is executing a single command asynchronously:
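That code isn’t reproduced here, but the gist of it is something like this sketch, simplified from the real AutoClosingLifetime (the field name is an assumption):

```csharp
using Npgsql;

// Simplified sketch: open a connection just in time, execute the
// command, and let the await using close the connection immediately
// afterward
public async Task<int> ExecuteAsync(NpgsqlCommand command, CancellationToken token)
{
    await using var conn = new NpgsqlConnection(_connectionString);
    await conn.OpenAsync(token).ConfigureAwait(false);

    command.Connection = conn;
    return await command.ExecuteNonQueryAsync(token).ConfigureAwait(false);
}
```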
By using the “State Pattern”, we are able to remove a great deal of potentially repetitive and error prone if/then branching logic out of our code. That’s even more valuable when you consider that we have additional session state behavior for externally controlled transactions (the user pushes a shared connection and/or transaction into Marten) or for enlisting in ambient transactions. Many of the ancient GoF patterns were at heart, a way to head off potential bugs by reducing the amount if if/then branching code.
Last thing, many of you are going to correctly call out that the mechanical implementation is very similar to the old “Strategy” pattern. That’s certainly true, but I think the key is that the intent is a little different. The “State Pattern” is closely related to the usage of finite state machines where there’s a fixed set of operations that behave differently depending on the exact state. The Marten IDocumentSession transactional behavior qualifies as a “state pattern” in my book.
Not that I think it’s worth a lot of argument if you wanna just say it still looks like a “strategy”:-)
A very important part of any event sourcing architecture is actually being able to interpret the raw events representing the current (or past) state of the system. That’s where Marten’s “Projection” subsystem comes into play as a way to compound a stream of events into a stateful object representing the whole state.
Most of the examples you’ll find of Marten projections will show you one of the aggregation recipes that heavily lean on conventional method signatures with Marten doing some “magic” around those method names, like this simple “self-aggregating” document type:
public record TodoCreated(Guid TodoId, string Description);
public record TodoUpdated(Guid TodoId, string Description);

public class Todo
{
    public Guid Id { get; set; }
    public string Description { get; set; } = null!;

    public static Todo Create(TodoCreated @event) => new()
    {
        Id = @event.TodoId,
        Description = @event.Description,
    };

    public void Apply(TodoUpdated @event)
    {
        Description = @event.Description;
    }
}
Notice the Apply() and Create() methods in the Todo class above. Those are following a naming convention that Marten uses to “know” how to update a Todo document with new information from events.
I (and by “I” I’m clearly taking responsibility for any problems with this approach) went down this path with Marten V4 as a way to make some performance optimizations at runtime. This approach goes okay if you stay well within the well-lit path (create, update, maybe delete the aggregate document), but can break down when folks get “fancy” with things like soft deletes. Or all too frequently, this approach can confuse users when the problem domain gets more complex.
There’s an escape hatch though. We can toss aside all the conventional magic and the corresponding runtime magic that Marten does for these projections and just write some explicit code.
Using Marten’s “CustomProjection” recipe — which is just a way to use explicit code to do aggregations of event data — we can write the same functionality as above with this equivalent:
public record TodoCreated(Guid TodoId, string Description);
public record TodoUpdated(Guid TodoId, string Description);

public class Todo
{
    public Guid Id { get; set; }
    public string Description { get; set; } = null!;
}

// Need to inherit from CustomProjection
public class TodoProjection: CustomProjection<Todo, Guid>
{
    public TodoProjection()
    {
        // This is kinda meh to me, but this tells
        // Marten how to do the grouping of events to
        // aggregated Todo documents by the stream id
        Slicer = new ByStreamId<Todo>();

        // The code below is only valuable as an optimization
        // if this projection is running in Marten's async
        // daemon to help the daemon filter candidate events faster
        IncludeType<TodoCreated>();
        IncludeType<TodoUpdated>();
    }

    public override ValueTask ApplyChangesAsync(
        DocumentSessionBase session,
        EventSlice<Todo, Guid> slice,
        CancellationToken cancellation,
        ProjectionLifecycle lifecycle = ProjectionLifecycle.Inline)
    {
        var aggregate = slice.Aggregate;

        foreach (var e in slice.AllData())
        {
            switch (e)
            {
                case TodoCreated created:
                    aggregate ??= new Todo { Id = slice.Id, Description = created.Description };
                    break;

                case TodoUpdated updated:
                    aggregate ??= new Todo { Id = slice.Id };
                    aggregate.Description = updated.Description;
                    break;
            }
        }

        // This is an "upsert", so no silly EF Core "is this new or an existing document?"
        // if/then logic here
        session.Store(aggregate);

        return new ValueTask();
    }
}
Putting aside the admitted clumsiness of the “slicing” junk, our projection code is just a switch statement. In hindsight, the newer C# switch expression syntax was just barely coming out when I designed the conventional approach. If I had it to do again, I think I would have focused harder on promoting the explicit logic and bypassed the whole conventions + runtime code generation thing for aggregations. Oh well.
For right now though, just know that you’ve got an escape hatch with Marten projections to “just write some code” any time the conventional approach causes you the slightest bit of grief.
A lot of pull requests and bug fixes just happened to land today for both Marten and Wolverine. In order, we’ve got:
Marten 7.0.0 Beta 5
Marten 7.0.0 Beta 5 is actually quite a big release and a major step forward on the road to the final V7 release. Besides some bug fixes, I think the big highlights are:
Marten finally gets the long awaited “Partial Update” model that only depends on native PostgreSQL features! Huge addition from Babu. If you’re coming to Marten from MongoDb, or would only consider Marten if it had the ability to modify documents without first having to load the whole thing, well now you can! No PLv8 extension necessary!
We pushed through a new low level execution model that’s more parsimonious about how long database connections are kept open, which should help applications using Marten scale to more concurrent transactions. This should also help folks using Marten in conjunction with Hot Chocolate, as IQuerySession can now be used across multiple threads in parallel.
Marten now uses Polly internally for retries on transient errors, and the “retry” functionality actually works now (it didn’t actually do anything useful before, as I shamefully refuse to make eye contact with you).
Several fixes around full text indexes that were blocking some folks
Wolverine 1.16.0
Wolverine 1.16.0 came out today with a couple additions and fixes related to MQTT or Rabbit MQ message publishing to topics. As an example, here’s some new functionality with Rabbit MQ message publishing:
You can specify publishing rules for messages by supplying the logic to determine the topic name from the message itself. Let’s say that we have an interface that several of our message types implement like so:
public interface ITenantMessage
{
    string TenantId { get; }
}
Let’s say that we want any message that implements that interface published to the topic for that message’s TenantId. We can implement that rule like so:
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine((context, opts) =>
    {
        opts.UseRabbitMq();

        // Publish any message that implements ITenantMessage to
        // a Rabbit MQ "Topic" exchange named "tenant.messages"
        opts.PublishMessagesToRabbitMqExchange<ITenantMessage>(
                "tenant.messages",
                m => $"{m.GetType().Name.ToLower()}/{m.TenantId}")

            // Specify or configure sending through Wolverine for all
            // messages through this Exchange
            .BufferedInMemory();
    })
    .StartAsync();
Wolverine 2.0 Alpha 1
Knock on wood, if the GitHub Action & Nuget gods all agree, there will be a Wolverine 2.0 alpha 1 set of Nugets available that’s just Wolverine 1.16, but targeting the very latest Marten 7 betas, since somebody asks me just about every single day when that’s going to be ready.
Enjoy! And don’t tell me about any problems with these releases until Monday!
Summary
I had a very off week as I struggled with a cold, a busy personal life, and way more Zoom meetings than I normally have. All the same, getting to spit out these three releases today makes me feel like Bill Murray here:
Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.
First off, we’re using Marten in our incident tracking, help desk system to read and persist data to a PostgreSQL database. When handling messages, Wolverine could easily encounter transient (read: random and not necessarily systematic) exceptions related to network hiccups or timeout errors if the database happens to be too busy at that very time. Let’s tell Wolverine to apply a little exponential backoff (close enough for government work) and retry a command that hits one of these transient database errors a limited number of times like this within the call to UseWolverine() within our Program file:
// Let's build in some durability for transient errors
opts.OnException<NpgsqlException>().Or<MartenCommandException>()
    .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds());
The retries may happily catch the system at a later time when it’s not as busy, so the transient error doesn’t reoccur and the message can succeed. If we get successive failures, we wait longer before retries. This retry policy effectively throttles a Wolverine system and may give a distressed subsystem within your architecture (in this case the PostgreSQL database) a chance to recover.
Other times you may have a handler encounter an exception that tells us the message in question is invalid somehow and could never be handled. There’s absolutely no reason to retry that message, so let’s instead tell Wolverine to discard that message immediately (and not even bother to move it to a dead letter queue):
// Log the bad message sure, but otherwise throw away this message because
// it can never be processed
opts.OnException<InvalidInputThatCouldNeverBeProcessedException>()
    .Discard();
I’ve done a few integration projects now where some kind of downstream web service was prone to being completely down. Let’s pretend that we’re only calling that web service through a message handler (my preference whenever possible for exactly this failure scenario) and can tell from an exception that the web service is absolutely unavailable and no other messages could possibly go through until that service is fixed.
Wolverine can do that as well, like so:
// Shut down the listener for whatever queue experienced this exception
// for 5 minutes, and put the message back on the queue
opts.OnException<MakeBelieveSubsystemIsDownException>()
    .PauseThenRequeue(5.Minutes());
And finally, Wolverine also has circuit breaker functionality to shut down processing on a queue if there are too many errors within a certain period of time. This feature certainly applies to messages coming in from external brokers like Rabbit MQ or Azure Service Bus or AWS SQS, but it can also apply to database-backed local queues. For the help desk system, I’m going to add a circuit breaker to the local queue for processing the TryAssignPriority command to pause all local processing on the current node if a certain threshold of message processing is failing:
opts.LocalQueueFor<TryAssignPriority>()
    // By default, local queues allow for parallel processing with a maximum
    // parallel count equal to the number of processors on the executing
    // machine, but you can override the queue to be sequential and single file
    .Sequential()

    // Or add more to the maximum parallel count!
    .MaximumParallelMessages(10)

    // Pause processing on this local queue for 1 minute if there's
    // more than 20% failures for a period of 2 minutes
    .CircuitBreaker(cb =>
    {
        cb.PauseTime = 1.Minutes();
        cb.SamplingPeriod = 2.Minutes();
        cb.FailurePercentageThreshold = 20;

        // Definitely worry about this type of exception
        cb.Include<TimeoutException>();

        // Don't worry about this type of exception
        cb.Exclude<InvalidInputThatCouldNeverBeProcessedException>();
    });
And don’t worry, Wolverine won’t lose any additional messages published to that queue. They’ll just sit in the database until the current node picks back up on this local queue or another running node is able to steal the work from the database and continue.
Summary and What’s Next
I only gave some highlights here, but Wolverine has some more capabilities for error handling. I think these policies are probably something you adapt over time as you learn more about how your system and its dependencies behave. Throwing more descriptive exceptions from your own code is definitely beneficial as well for these kinds of error handling policies.
I’m almost done with this series. I think the next post or two — and it won’t come until next week — will be all about logging, auditing, metrics, and Open Telemetry integration.
Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.
I’ve personally spent quite a bit of time helping teams and organizations deal with older, legacy codebases where it might easily take a couple days of working painstakingly through the instructions in a large Wiki page of some sort in order to make their codebase work on a local development environment. That’s indicative of a high friction environment, and definitely not what we’d ideally like to have for our own teams.
Thinking about the external dependencies of our incident tracking, help desk API, we’ve utilized:
Marten for persistence, which requires our system to need PostgreSQL database schema objects
Wolverine’s PostgreSQL-backed transactional outbox support, which also requires its own set of PostgreSQL database schema objects
Rabbit MQ for asynchronous messaging, which requires queues, exchanges, and bindings to be set up in our message broker for the application to work
That’s a bit of stuff that needs to be configured within the Rabbit MQ or PostgreSQL infrastructure around our service in order to run our integration tests or the application itself for local testing.
Instead of the error prone, painstaking manual setup laboriously laid out in a Wiki page somewhere that you can never quite find, let’s leverage the Critter Stack’s “Stateful Resource” model to quickly set our system up ready to run in development.
Building on our existing application configuration, I’m going to add a couple more lines of code to our system’s Program file:
// Depending on your DevOps setup and policies,
// you may or may not actually want this enabled
// in production installations, but some folks do
if (builder.Environment.IsDevelopment())
{
    // This will direct our application to set up
    // all known "stateful resources" at application bootstrapping
    // time
    builder.Services.AddResourceSetupOnStartup();
}
And that’s that. If you’re using the integration test harness like we did in an earlier post, or just starting up the application normally, the application will check for the existence of each of the following and try to build out anything that’s missing:
The known Marten document tables and all the database objects to support Marten’s event sourcing
The necessary tables and functions for Wolverine’s transactional inbox, outbox, and scheduled message tables (I’ll add a post later on those)
The known Rabbit MQ exchanges, queues, and bindings
Your application will have to have administrative privileges over all the resources for any of this to work of course, but you would have that at development time at least.
With this capability in place, the procedure for a new developer getting started with our codebase is to:
Do a clean git clone of our codebase onto their local box
Run docker compose up to start up all the necessary infrastructure they need to run the system or the system’s integration tests locally
Run the integration tests or start the system and go!
If you omit the call to builder.Services.AddResourceSetupOnStartup();, you could still go to the command line and use this command just once to set everything up:
dotnet run -- resources setup
To check on the status of any or all of the resources, you can use:
dotnet run -- resources check
which for the HelpDesk.API, gives you this:
If you want to tear down all the existing data — and at least attempt to purge any Rabbit MQ queues of all messages — you can use:
dotnet run -- resources clear
There’s a few other options you can read about in the Oakton documentation for the Stateful Resource model, but for right now, type dotnet run -- help resources and you can see Oakton’s built in help for the resources command that runs down the supported usage:
Summary and What’s Next
The Critter Stack is trying really hard to create a productive, low friction development ecosystem for your projects. One of the ways it tries to make that happen is by being able to set up infrastructural dependencies automatically at runtime so a developer can just “clone n’ go” without the excruciating pain of the multi-page Wiki getting started instructions so painfully common in legacy codebases.
This stateful resource model is also supported for Kafka transport (which is also local development friendly) and the cloud native Azure Service Bus transport and AWS SQS transport (Wolverine + AWS SQS does work with LocalStack just fine). In the cloud native cases, the credentials from the Wolverine application will have to have the necessary rights to create queues, topics, and subscriptions. In the case of the cloud native transports, there is an option to prefix all the names of the queues, topics, and subscriptions to still create an isolated environment per developer for a better local development story even when relying on cloud native technologies.
I think I’ll add another post to this series where I switch the messaging to one of the cloud native approaches.
As for what’s next in this increasingly long series, I think we still have logging, open telemetry and metrics, resiliency, and maybe a post on Wolverine’s middleware support. That list is somewhat driven by recency bias around questions I’ve been asked here or there about Wolverine.
Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.
To this point in the series, everything has happened within the context of our single HelpDesk.API project. We’ve utilized HTTP endpoints, Wolverine as a mediator, and sent messages through Wolverine’s local queueing features. Today, let’s add Rabbit MQ to the mix as a super, local development-friendly option for distributed processing and just barely dip our toes into Wolverine’s asynchronous messaging support.
As a reminder, here’s a diagram of our incident tracking, help desk system:
In our case, we’re going to create a separate service to handle outgoing emails and SMS messaging that I’ve inevitably named the “NotificationService.” For the communication between the Help Desk API and the Notification Service, we’re going to use a Rabbit MQ queue to send RingAllTheAlarms messages from our Help Desk API to the downstream Notification Service, where that will formulate an email body or SMS message or who knows what according to our agent’s personal preferences.
I’ve heard a couple variations of Zawinski’s Law over the years, stating that every system will eventually grow until it can read mail (or contains a half-assed implementation of LISP). My corollary to that is that every enterprise system will inevitably grow to include a separate service for sending notifications to users.
Earlier, we had built a message handler that potentially sent a RingAllTheAlarms message if an incident was assigned a critical priority:
[AggregateHandler]
public static (Events, OutgoingMessages) Handle(
    TryAssignPriority command,
    IncidentDetails details,
    Customer customer)
{
    var events = new Events();
    var messages = new OutgoingMessages();

    if (details.Category.HasValue && customer.Priorities.TryGetValue(details.Category.Value, out var priority))
    {
        if (details.Priority != priority)
        {
            events.Add(new IncidentPrioritised(priority, command.UserId));

            if (priority == IncidentPriority.Critical)
            {
                messages.Add(new RingAllTheAlarms(command.IncidentId));
            }
        }
    }

    return (events, messages);
}
When our system tries to publish that RingAllTheAlarms message, Wolverine tries to route that message to a subscribing endpoint (local queues are also considered to be endpoints by Wolverine), and publishes the message to each subscriber — or does nothing if there are no known subscribers for that message type.
Let’s first create our new Notification Service from scratch, with a quick call to:
dotnet new console
After that, I admittedly took a shortcut and just added a project reference to our Help Desk API project, because it’s late at night as I write this and I’m lazy by nature. In real usage you’d probably at least start with a shared library just to define the message types that are exchanged between two or more processes:
To be clear, Wolverine does not require you to use shared types for the message bodies between Wolverine applications, but that frequently turns out to be the easiest mechanism to get started and it can easily be sufficient in many situations.
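As a concrete illustration of that shared library idea, here is a minimal sketch of what such a contracts project might contain. The HelpDesk.Messages name is purely my hypothetical, and the message shapes follow how they’re used in the code later in this post:

```csharp
using System;

namespace HelpDesk.Messages
{
    // Hypothetical shared "contracts" project: nothing but the message
    // shapes live here, so both services can reference it without
    // dragging in the whole Help Desk API

    // Published by the Help Desk API when an incident becomes critical
    public record RingAllTheAlarms(Guid IncidentId);

    // Command message sent when an incident's category changes
    public record TryAssignPriority
    {
        public Guid IncidentId { get; init; }
        public Guid UserId { get; init; }
    }
}
```

Because these are plain records with no Wolverine or Marten dependencies, both processes can serialize and deserialize them without any version coupling beyond the contracts assembly itself.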
Back to our new Notification Service. I’m going to add a reference to Wolverine’s Rabbit MQ transport library (Wolverine.RabbitMQ) with:
dotnet add package WolverineFx.RabbitMQ
With that in place, the entire (faked up) Notification Service code is this:
using Helpdesk.Api;
using Microsoft.Extensions.Hosting;
using Oakton;
using Wolverine;
using Wolverine.RabbitMQ;
return await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Connect to Rabbit MQ
        // The default like this expects to connect to a Rabbit MQ
        // broker running in the localhost at the default Rabbit MQ
        // port
        opts.UseRabbitMq();

        // Tell Wolverine to listen for incoming messages
        // from a Rabbit MQ queue
        opts.ListenToRabbitQueue("notifications");
    }).RunOaktonCommands(args);
// Just to see that there is a message handler for the RingAllTheAlarms
// message
public static class RingAllTheAlarmsHandler
{
    public static void Handle(RingAllTheAlarms message)
    {
        Console.WriteLine("I'm going to scream out an alert about incident " + message.IncidentId);
    }
}
Moving back to our Help Desk API project, I’m going to add a reference to the WolverineFx.RabbitMQ NuGet package, and add this code to define the outgoing subscription for the RingAllTheAlarms message:
builder.Host.UseWolverine(opts =>
{
    // Other configuration...

    // Opt into the transactional inbox/outbox on all messaging
    // endpoints
    opts.Policies.UseDurableOutboxOnAllSendingEndpoints();

    // Connecting to a local Rabbit MQ broker
    // at the default port
    opts.UseRabbitMq();

    // Adding a single Rabbit MQ messaging rule
    opts.PublishMessage<RingAllTheAlarms>()
        .ToRabbitExchange("notifications");

    // Other configuration...
});
I’m going to very highly recommend that you read up a little bit on Rabbit MQ’s model of exchanges, queues, and bindings before you try to use it in anger, because every message broker seems to have subtly different behavior. Just for this post though, you’ll see that the Help Desk API is publishing to a Rabbit MQ exchange named “notifications” and the Notification Service is listening to a queue named “notifications”. To fully connect the two services through Rabbit MQ, you’d need to add a binding from the “notifications” exchange to the “notifications” queue. You can certainly do that through any Rabbit MQ management mechanism, but you could also define that binding in Wolverine itself and let Wolverine put that all together for you at runtime, much like Wolverine and Marten can for their database schema dependencies.
Let’s revisit the Notification Service code and make it set up a little bit more for us in the Wolverine setup to automatically build the right Rabbit MQ exchange, queue, and binding between our applications like so:
return await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseRabbitMq()
            // Make it build out any missing exchanges, queues, or bindings
            // that the system knows about as necessary
            .AutoProvision()

            // This is just to make Wolverine help us out to configure
            // Rabbit MQ end to end. This isn't mandatory, but it might help
            // you be more productive at development time
            .BindExchange("notifications").ToQueue("notifications", "notification_binding");

        // Tell Wolverine to listen for incoming messages
        // from a Rabbit MQ queue
        opts.ListenToRabbitQueue("notifications");
    }).RunOaktonCommands(args);
And that’s actually it: we’re completely ready to go, assuming there’s a Rabbit MQ broker running on our local development box, which I usually run just through docker compose (here’s the docker-compose.yaml file from this sample application).
One thing to note for folks seeing this who are coming from a MassTransit or NServiceBus background, Wolverine does not need you to specify any kind of connectivity between message handlers and listening endpoints. That might become an “opt in” feature some day, but there’s nothing like that in Wolverine today.
Summary and What’s Next
I just barely exposed a little bit of what Wolverine can do while using Rabbit MQ as a messaging transport. There are a ton of levers and knobs to adjust for increased throughput or for more strict message ordering. There’s also a conventional routing capability that might be a good default for getting started.
As far as when you should use asynchronous messaging, my thinking is that you should pretty well always use asynchronous messaging between two processes unless you absolutely have to have an inline response from the downstream system. Otherwise, I think that using asynchronous messaging techniques helps to decouple systems from each other temporally, and gives you more tools for creating robust and resilient systems through error handling policies.
And speaking of “resiliency”, I think that will be the subject of one of the remaining posts in this series.
There’s a new Marten 7.0 beta 4 release out today with a new round of bug fixes and some performance enhancements. We’re getting closer to getting a 7.0 release out, so I thought I’d update the world a bit on what’s remaining. I’d also love to give folks a chance to weigh in on some of the outstanding work that may or may not make the cut for 7.0 or slide to later. Due to some commitments to clients, I’m hoping to have the release out by early February at the latest, but we’ll see.
A Wolverine 2.0 release will follow shortly, but that’s going to be almost completely about upgrading Wolverine to use the latest Marten and Weasel dependencies and shouldn’t result in any breaking changes.
What’s In Flight or Outstanding
There are several medium sized efforts either in flight or yet to come. User feedback is certainly welcome:
Low level database execution improvements. We’re doing a lot of work to integrate relatively newer ADO.NET features from Npgsql that will help us wring out a little better performance. As part of that work, we’re going to replace our homegrown resiliency feature (IRetryPolicy) with a more efficient and likely more effective approach that bakes Polly into Marten. I was hesitant to take on Polly before because of its tendency to be a diamond dependency issue, but I think we’ve changed our minds about the risk/reward equation here. I think we’ll also get a little performance and scalability boost by using Polly’s static lambda approach in place of our current approach. The reality is that while you probably shouldn’t be too consumed with micro-optimizations in application development, it’s much more valuable in infrastructure code like Marten to be as performant as possible.
Open Telemetry support baked in. I think this is a low hanging fruit issue that might be a great place for anyone to jump in. Please feel free to weigh in on the possible approaches we’ve outlined.
Better scalability for asynchronous projections and the ability to deploy projection and event changes with less or even zero downtime compared to the current Marten. I’ll refer you to a longer discussion for feedback on possible directions. That discussion also touches on topics around event data migrations and archival strategies.
Enabling built in support for strong typed identifiers. This is far more work than I personally think it’s worth, but plenty of folks tell us that it’s a must have feature even to the point where they tell us they won’t use Marten until this exists. This kind of thing is what drives me personally to make disparaging remarks about the DDD community’s seeming love of code ceremony. Grr.
“Partial” document updates with native PostgreSQL features. We’ve had this functionality for years, but it depends on the PLv8 extension to PostgreSQL that’s continuously harder to use, especially in the cloud. I think this could be a big win, especially for users coming from MongoDB.
Dynamic Tenant Database Discovery — customer request, and that means it goes to the top of the priority list. Weird how it works that way.
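On the Open Telemetry item above, one low-lift approach would be to lean on .NET’s built-in ActivitySource plumbing, which OpenTelemetry exporters pick up automatically. To be loud about it: this is purely my speculation about a possible direction, and the “Marten” source name and TrackAsync() helper below are my inventions, not Marten’s actual API:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

// Speculative sketch only: wraps any async operation in an Activity span
// so that OpenTelemetry listeners can record timing and failures
public static class MartenTelemetry
{
    // A single named source that OTel tracing could subscribe to
    public static readonly ActivitySource Source = new("Marten");

    public static async Task<T> TrackAsync<T>(string operationName, Func<Task<T>> operation)
    {
        // StartActivity() returns null when nothing is listening,
        // hence the null-conditional call in the catch block
        using var activity = Source.StartActivity(operationName);
        try
        {
            return await operation();
        }
        catch (Exception e)
        {
            activity?.SetStatus(ActivityStatusCode.Error, e.Message);
            throw;
        }
    }
}
```

The nice part of this model is that it costs almost nothing when no listener is sampling, so it could be on by default.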
What else, folks? I don’t want the release to drag on forever, but there’s plenty of other things to do.
LINQ Improvements
From my perspective, the effective rewrite of the LINQ provider support for V7 is the single biggest change and improvement for Marten 7. As always, I’m hopeful that this shores up Marten’s technical foundation for years to come. I’d sum that work up as:
Glass Half Full: the new LINQ support covers many scenarios that were previously missing, and especially improves both the number of supported use cases and the efficiency of the generated SQL for querying within child collections. Moreover, the new LINQ support should be better about telling you when it can’t support something instead of doing erroneous searches, and should be in much better shape for when we need to add new permutations to the support from user requests later.
Glass Half Empty: It took a long, long time to get this done and it was quite an opportunity cost for me personally. We also got a large GitHub sponsorship for this work, and while I was and am very grateful for that, I’m also feeling guilty about how long it took to finish that work.
And that folks is the life of a semi-successful OSS author in one nutshell.
Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to helping you succeed with your software development projects on a consulting basis, whether you’re using any part of the Critter Stack or any other .NET server side tooling.
Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.
I’m taking a short detour in this series as I prepare to give my “Contrarian Architecture” talk at the CodeMash 2024 conference today. In that talk (here’s a version from NDC Oslo 2023), I’m going to spend some time more or less bashing stereotypical usages of the Clean or Onion Architecture prescriptive approach.
While there’s nothing to prevent you from using either Wolverine or Marten within a typical Clean Architecture style code organization, the “Critter Stack” plays well within a lower code ceremony vertical slice architecture that I personally prefer.
First though, let’s talk about what I don’t like about the stereotypical Clean/Onion Architecture approach you commonly find in enterprise .NET systems. With this common mode of code organization, the incident tracking help desk service we have been building in this series might be organized something like:
Class Name           Project
----------           -------
IncidentController   HelpDesk.API
IncidentService      HelpDesk.ServiceLayer
Incident             HelpDesk.Domain
IncidentRepository   HelpDesk.Data
Don’t laugh because a lot of people do this
This kind of code structure is primarily organized around the “nouns” of the system and reliant on the formal layering prescriptions to try to create a healthy separation of concerns. It’s probably perfectly fine for pure CRUD applications, but breaks down very badly over time for more workflow centric applications.
I despise this form of code organization in very large systems because:
It scatters closely related code throughout the codebase
You typically don’t spend a lot of time trying to reason about an entire layer at a time. Instead, you’re largely worried about the behavior of one single use case and the logical flow through the entire stack for that one use case
The code layout tells you very little about what the application does as it’s primarily focused around technical concerns (hat tip to David Whitney for that insight)
It’s high ceremony. Lots of layers, interfaces, and just a lot of stuff
Abstractions around the low level persistence infrastructure can very easily lead you to poorly performing code and can make it much harder later to understand why code is performing poorly in production
Shifting to the Idiomatic Wolverine Approach
Let’s say that we’re sitting around a fire boasting of our victories in software development (that’s a lie, I’m telling horror stories about the worst systems I’ve ever seen) and you ask me “Jeremy, what is best in code?”
And I’d respond:
Low ceremony code that’s easy to read and write
Closely related code is close together
Unrelated code is separated
Code is organized around the “verbs” of the system, which in the case of Wolverine probably means the commands
The code structure by itself gives some insight into what the system actually does
Taking our LogIncident command, I’m going to put every drop of code related to that command in a single file called “LogIncident.cs”:
public record LogIncident(
    Guid CustomerId,
    Contact Contact,
    string Description
)
{
    public class LogIncidentValidator : AbstractValidator<LogIncident>
    {
        // I stole this idea of using inner classes to keep them
        // close to the actual model from *someone* online,
        // but don't remember who
        public LogIncidentValidator()
        {
            RuleFor(x => x.Description).NotEmpty().NotNull();
            RuleFor(x => x.Contact).NotNull();
        }
    }
};

public record NewIncidentResponse(Guid IncidentId)
    : CreationResponse("/api/incidents/" + IncidentId);

public static class LogIncidentEndpoint
{
    [WolverineBefore]
    public static async Task<ProblemDetails> ValidateCustomer(
        LogIncident command,

        // Method injection works just fine within middleware too
        IDocumentSession session)
    {
        var exists = await session.Query<Customer>().AnyAsync(x => x.Id == command.CustomerId);

        return exists
            ? WolverineContinue.NoProblems
            : new ProblemDetails { Detail = $"Unknown customer id {command.CustomerId}", Status = 400 };
    }

    [WolverinePost("/api/incidents")]
    public static (NewIncidentResponse, IStartStream) Post(LogIncident command, User user)
    {
        var logged = new IncidentLogged(
            command.CustomerId,
            command.Contact,
            command.Description,
            user.Id);

        var op = MartenOps.StartStream<Incident>(logged);

        return (new NewIncidentResponse(op.StreamId), op);
    }
}
Every single bit of code related to handling this operation in our system is in one file that we can read top to bottom. A few significant points about this code:
I think it’s working out well in other Wolverine systems to largely name the files based on command names or the request body models for HTTP endpoints, at least with systems being built with a CQRS approach. Using the command name allows the system to be more self descriptive when you’re just browsing the codebase for the first time
The behavioral logic is still isolated to the Post() method, and even though there is some direct data access in the same class in its ValidateCustomer() middleware method, the Post() method is a pure function that can be unit tested without any mocks
There’s also no code unrelated to LogIncident anywhere in this file, so you bypass the problem you get in noun-centric code organizations where you have to train your brain to ignore a lot of unrelated code in an IncidentService that has nothing to do with the particular operation you’re working on at any one time
I’m not bothering to wrap any kind of repository abstraction around Marten’s IDocumentSession in this code sample. That’s not to say that I wouldn’t do so in the case of something more complicated, and especially if there’s some kind of complex set of data queries that would need to be reused in other commands
You can clearly see the cause and effect between the command input and any outcomes of that command. I think this is an important discussion all by itself because it can easily be hard to reason about that same kind of cause and effect in systems that split responsibilities within a single use case across different areas of the code and even across different projects or components. Codebases that are hard to reason about are very prone to regression errors down the line — and that’s the voice of painful experience talking.
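Since Post() really is a pure function, the no-mocks unit test falls right out. Here’s a quick sketch, assuming the same xUnit + Shouldly combination used elsewhere in this series, and note that I’m dodging Contact’s real shape with a default value and assuming IStartStream exposes the StreamId the endpoint code reads:

```csharp
public class LogIncidentEndpointTests
{
    [Fact]
    public void the_response_points_at_the_new_event_stream()
    {
        // Contact's real shape isn't important for this behavior,
        // so I'm cheating with a default value here
        Contact contact = default!;
        var user = new User(Guid.NewGuid());
        var command = new LogIncident(Guid.NewGuid(), contact, "Server room is on fire");

        // Pure function: no mocks, no database, no Wolverine runtime needed
        var (response, op) = LogIncidentEndpoint.Post(command, user);

        // The HTTP response should point at the stream id that the
        // start stream side effect will persist
        response.IncidentId.ShouldBe(op.StreamId);
    }
}
```

The interesting design point is that the IStartStream side effect is just a return value here, so the test can inspect it directly instead of verifying calls against a mocked session.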
I certainly wouldn’t use this “single file” approach on larger, more complex use cases, but it’s working out well for early Wolverine adopters so far. Since much of my criticism of Clean/Onion Architecture approaches is really about using prescriptive rules too literally, I would also say that I would deviate from this “single file” approach any time it was valuable to reuse code across commands or queries or just when the message handling for a single message gets complex enough to need or want other files to separate responsibilities just within that one use case.
Summary and What’s Next
Wolverine is optimized for a “Vertical Slice Architecture” code organization approach. Both Marten and Wolverine are meant to require as little code ceremony as possible, and that is what makes the vertical slice architecture, and even the single file approach I showed here, feasible.
Let’s start this post by making a bold statement that I’ll probably regret, but still spend the rest of this post trying to back up:
Remembering the basic flow of our incident tracking, help desk service in this series, we’ve got this workflow:
Starting in the middle with the “Categorize Incident”, our system’s workflow is something like:
A technician will send a request to change the category of the incident
If the system determines that the request will be changing the category, the system will append a new event to mark that state, and also publish a new command message to try to assign a priority to the incident automatically based on the customer data
When the system handles that new “Try Assign Priority” command, it will look at the customer’s settings, and likewise append another event to record the change of priority for the incident. If the incident changes, it will also publish a message to an external “Notification Service” — but for this post, let’s just worry about whether we’re correctly publishing the right message
In an earlier post, I showed this version of a message handler for the CategoriseIncident command:
public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();

    [AggregateHandler]
    // The object? as return value will be interpreted
    // by Wolverine as appending one or zero events
    public static async Task<object?> Handle(
        CategoriseIncident command,
        IncidentDetails existing,
        IMessageBus bus)
    {
        if (existing.Category != command.Category)
        {
            // Send the message to any and all subscribers to this message
            await bus.PublishAsync(
                new TryAssignPriority { IncidentId = existing.Id });

            return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }

        // Wolverine will interpret this as "do no work"
        return null;
    }
}
Notice that this handler is injecting the Wolverine IMessageBus service into the handler method. We could test this code as is with a “fake” for IMessageBus just to verify whether the expected outgoing message for TryAssignPriority goes out or not. Helpfully, Wolverine even supplies a “spy” version of IMessageBus called TestMessageContext that can be used in unit tests as a stand in just to record what the outgoing messages were.
My strong preference though is to use Wolverine’s concept of cascading messages to write a pure function such that the behavioral logic can be tested without any mocks, stubs, or other fakes. In the sample code above, we had been using Wolverine as “just” a “Mediator” within an MVC Core controller. This time around, let’s ditch the unnecessary “Mediator” ceremony and use a Wolverine HTTP endpoint for the same functionality. In this case we can write the same functionality as a pure function like so:
public static class CategoriseIncidentEndpoint
{
    [WolverinePost("/api/incidents/categorise"), AggregateHandler]
    public static (Events, OutgoingMessages) Post(
        CategoriseIncident command,
        IncidentDetails existing,
        User user)
    {
        var events = new Events();
        var messages = new OutgoingMessages();

        if (existing.Category != command.Category)
        {
            // Append a new event to the incident
            // stream
            events += new IncidentCategorised
            {
                Category = command.Category,
                UserId = user.Id
            };

            // Send a command message to try to assign the priority
            messages.Add(new TryAssignPriority
            {
                IncidentId = existing.Id,

                // Carry the user along so any downstream
                // IncidentPrioritised event can be attributed to them
                UserId = user.Id
            });
        }

        return (events, messages);
    }
}
In the endpoint above, we’re “pushing” all of the required inputs for our business logic into the Post() method, which decides what state changes should be captured and what additional actions should be taken through outgoing, cascaded messages.
A couple notes about this code:
It’s using the aggregate handler workflow we introduced in an earlier post to “push” the IncidentDetails aggregate for the incident stream into the method. We’ll need this information to “decide” what to do next
The Events type is a Wolverine construct that tells Wolverine “hey, the objects in this collection are meant to be appended as events to the event stream for this aggregate.”
Likewise, the OutgoingMessages type is a Wolverine construct that — wait for it — tells Wolverine that the objects contained in that collection should be published as cascading messages after the database transaction succeeds
The Marten + Wolverine transactional middleware is calling Marten’s IDocumentSession.SaveChangesAsync() to commit the logical transaction, and also dealing with the transaction outbox mechanics for the cascading messages from the OutgoingMessages collection.
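If it helps to demystify what the middleware is doing for us, here’s a rough conceptual sketch of the work wrapped around our pure function. To be loud about the caveats: this is illustrative pseudo-wiring, not Wolverine’s actual generated code, and I’m assuming the command carries the incident id that [AggregateHandler] uses to resolve the stream:

```csharp
// Illustrative only: roughly the shape of the work the Wolverine + Marten
// middleware performs around the pure Post() method
public static async Task ConceptualPipeline(
    CategoriseIncident command,
    IDocumentSession session,
    IMessageBus bus,
    User user)
{
    // [AggregateHandler] fetches the current aggregate for the stream
    var existing = await session.Events
        .AggregateStreamAsync<IncidentDetails>(command.IncidentId);

    // Our pure function decides on events + outgoing messages
    var (events, messages) = CategoriseIncidentEndpoint.Post(command, existing, user);

    // The decided events are appended to the incident's event stream
    session.Events.Append(command.IncidentId, events.ToArray());

    // The cascaded messages are captured in the transactional outbox
    // as part of the same unit of work...
    foreach (var message in messages)
    {
        await bus.PublishAsync(message);
    }

    // ...and only actually go out over the wire after this commit succeeds
    await session.SaveChangesAsync();
}
```

The point of the sketch is just to show that the pure function sits at the center while all the I/O happens before and after it.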
Alright, with all that said, let’s look at a unit test for a CategoriseIncident command that results in the category being changed:
[Fact]
public void raise_categorized_event_if_changed()
{
    var command = new CategoriseIncident
    {
        Category = IncidentCategory.Database
    };

    var details = new IncidentDetails(
        Guid.NewGuid(),
        Guid.NewGuid(),
        IncidentStatus.Closed,
        Array.Empty<IncidentNote>(),
        IncidentCategory.Hardware);

    var user = new User(Guid.NewGuid());

    var (events, messages) = CategoriseIncidentEndpoint.Post(command, details, user);

    // There should be one appended event
    var categorised = events.Single()
        .ShouldBeOfType<IncidentCategorised>();

    categorised
        .Category.ShouldBe(IncidentCategory.Database);

    categorised.UserId.ShouldBe(user.Id);

    // And there should be a single outgoing message
    var message = messages.Single()
        .ShouldBeOfType<TryAssignPriority>();

    message.IncidentId.ShouldBe(details.Id);
    message.UserId.ShouldBe(user.Id);
}
In real life, I’d probably opt to break that unit test into a BDD-like context and individual tests to assert the expected event(s) being appended and the expected outgoing messages, but this is conceptually easier and I didn’t sleep well last night, so this is what you get!
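For the curious, that BDD-ish split might look something like the sketch below: one shared “context” class that invokes the pure function exactly once in its constructor, with focused facts for the appended event and the cascaded message. Same types and Shouldly assertions as above; the class name is just my naming convention:

```csharp
public class when_categorising_a_changed_incident
{
    private readonly Events _events;
    private readonly OutgoingMessages _messages;
    private readonly User _user = new User(Guid.NewGuid());
    private readonly IncidentDetails _details;

    public when_categorising_a_changed_incident()
    {
        // The "given": an incident currently categorised as Hardware
        _details = new IncidentDetails(
            Guid.NewGuid(), Guid.NewGuid(), IncidentStatus.Closed,
            Array.Empty<IncidentNote>(), IncidentCategory.Hardware);

        // The "when": re-categorising it as Database
        var command = new CategoriseIncident { Category = IncidentCategory.Database };
        (_events, _messages) = CategoriseIncidentEndpoint.Post(command, _details, _user);
    }

    [Fact]
    public void should_append_the_categorised_event() =>
        _events.Single().ShouldBeOfType<IncidentCategorised>()
            .Category.ShouldBe(IncidentCategory.Database);

    [Fact]
    public void should_cascade_a_try_assign_priority_command() =>
        _messages.Single().ShouldBeOfType<TryAssignPriority>()
            .IncidentId.ShouldBe(_details.Id);
}
```

Each failing fact now points at exactly one expected outcome, which is the real payoff of the split.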
Let’s move on to the message handler for the TryAssignPriority message, and also make this a pure function so we can easily test the behavior:
public static class TryAssignPriorityHandler
{
    // Wolverine will call this method before the "real" Handle() method,
    // and it can "magically" figure out that the Customer object should
    // be delivered to the Handle() method at runtime
    public static Task<Customer?> LoadAsync(IncidentDetails details, IDocumentSession session)
    {
        return session.LoadAsync<Customer>(details.CustomerId);
    }

    // There's some database lookup at runtime, but I've isolated that above, so the
    // behavioral logic that "decides" what to do is a pure function below.
    [AggregateHandler]
    public static (Events, OutgoingMessages) Handle(
        TryAssignPriority command,
        IncidentDetails details,
        Customer customer)
    {
        var events = new Events();
        var messages = new OutgoingMessages();

        if (details.Category.HasValue && customer.Priorities.TryGetValue(details.Category.Value, out var priority))
        {
            if (details.Priority != priority)
            {
                events.Add(new IncidentPrioritised(priority, command.UserId));

                if (priority == IncidentPriority.Critical)
                {
                    messages.Add(new RingAllTheAlarms(command.IncidentId));
                }
            }
        }

        return (events, messages);
    }
}
I’d ask you to notice the LoadAsync() method above. It’s part of the logical handler workflow, but Wolverine lets us keep that separate from the main “decider” Handle() method. We’d have to test the entire handler with an integration test eventually, but we can happily write fast running, fine grained unit tests on the expected behavior by “pushing” inputs into the Handle() method and checking the events and outgoing messages in the return values.
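To make that concrete, a unit test against Handle() for the critical-priority path might look like this sketch. I’m assuming here that Customer exposes a mutable Priorities dictionary keyed by incident category; adjust for the real shape of that type:

```csharp
[Fact]
public void cascades_alarms_when_priority_becomes_critical()
{
    // The incident already has a category assigned
    var details = new IncidentDetails(
        Guid.NewGuid(),
        Guid.NewGuid(),
        IncidentStatus.Closed,
        Array.Empty<IncidentNote>(),
        IncidentCategory.Database);

    // Assumed shape: Priorities maps an incident category to a priority
    var customer = new Customer();
    customer.Priorities[IncidentCategory.Database] = IncidentPriority.Critical;

    var command = new TryAssignPriority
    {
        IncidentId = details.Id,
        UserId = Guid.NewGuid()
    };

    // Pure function under test: no database, no Wolverine runtime
    var (events, messages) = TryAssignPriorityHandler.Handle(command, details, customer);

    // The priority change should be recorded as an event...
    events.Single().ShouldBeOfType<IncidentPrioritised>();

    // ...and the critical priority should cascade a RingAllTheAlarms message
    messages.Single().ShouldBeOfType<RingAllTheAlarms>()
        .IncidentId.ShouldBe(details.Id);
}
```

A sibling test with a non-critical priority would assert that the event is still appended but messages stays empty, which covers the branch for free.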
Summary and What’s Next
Wolverine’s approach has always been driven by the desire to make your application code as testable as possible. Originally that meant to just keep the framework (Wolverine itself) out of your application code as much as possible. Later on, the Wolverine community was influenced by more Functional Programming techniques and Jim Shore’s paper on Testing without Mocks.
Specifically, Wolverine embraced the idea of the “A-Frame Architecture,” with Wolverine itself in the role of the mediator/controller/conductor that coordinates between infrastructural concerns like Marten and your own business logic code in message handlers or HTTP endpoint methods, without creating a direct coupling between your behavioral logic code and your infrastructure:
If you take advantage of Wolverine features like cascading messages, side effects, and compound handlers to decompose your system in a more FP-esque way while letting Wolverine handle the coordination, you can arrive at much more testable code.
I said earlier that I’d get to Rabbit MQ messaging, and I’ll get around to that soon. To fit in with one of my CodeMash 2024 talks this Friday, I might first take a little side trip into how the “Critter Stack” plays well inside of a low ceremony vertical slice architecture as I get ready to absolutely blast away at the “Clean/Onion Architecture” this week.