Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to helping you succeed with your software development projects on a consulting basis, whether you’re using any part of the Critter Stack or any other .NET server side tooling.
Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.
To this point in the series, everything has happened within the context of our single HelpDesk.API project. We’ve utilized HTTP endpoints, Wolverine as a mediator, and sent messages through Wolverine’s local queueing features. Today, let’s add Rabbit MQ to the mix as a super, local development-friendly option for distributed processing and just barely dip our toes into Wolverine’s asynchronous messaging support.
As a reminder, here’s a diagram of our incident tracking, help desk system:
In our case, we’re going to create a separate service to handle outgoing emails and SMS messaging that I’ve inevitably named the “NotificationService.” For the communication between the Help Desk API and the Notification Service, we’re going to use a Rabbit MQ queue to send RingAllTheAlarms messages from our Help Desk API to the downstream Notification Service, which will formulate an email body or SMS message or who knows what according to our agents’ personal preferences.
I’ve heard a couple of variations of Zawinski’s Law over the years, stating that every system will eventually grow until it can read mail (or contains a half-assed implementation of LISP). My corollary to that is that every enterprise system will inevitably grow to include a separate service for sending notifications to users.
Earlier, we had built a message handler that potentially sent a RingAllTheAlarms message if an incident was assigned a critical priority:
[AggregateHandler]
public static (Events, OutgoingMessages) Handle(
    TryAssignPriority command,
    IncidentDetails details,
    Customer customer)
{
    var events = new Events();
    var messages = new OutgoingMessages();

    if (details.Category.HasValue && customer.Priorities.TryGetValue(details.Category.Value, out var priority))
    {
        if (details.Priority != priority)
        {
            events.Add(new IncidentPrioritised(priority, command.UserId));

            if (priority == IncidentPriority.Critical)
            {
                messages.Add(new RingAllTheAlarms(command.IncidentId));
            }
        }
    }

    return (events, messages);
}
When our system tries to publish that RingAllTheAlarms message, Wolverine tries to route that message to a subscribing endpoint (local queues are also considered to be endpoints by Wolverine), and publishes the message to each subscriber — or does nothing if there are no known subscribers for that message type.
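That routing behavior can be sketched with a tiny in-memory router. To be clear, this is a toy illustration of the described behavior and not Wolverine's internals; all of the type names here are made up:

```csharp
using System;
using System.Collections.Generic;

// A toy router illustrating the behavior described above: publish the
// message to every subscribing endpoint registered for its type, or
// quietly do nothing when there are no known subscribers.
public class ToyRouter
{
    private readonly Dictionary<Type, List<Action<object>>> _subscribers = new();

    public void Subscribe<T>(Action<T> endpoint)
    {
        if (!_subscribers.TryGetValue(typeof(T), out var list))
        {
            list = new List<Action<object>>();
            _subscribers[typeof(T)] = list;
        }

        list.Add(m => endpoint((T)m));
    }

    // Returns the number of endpoints that received the message
    public int Publish(object message)
    {
        if (!_subscribers.TryGetValue(message.GetType(), out var list))
        {
            return 0; // no known subscribers for this message type: a silent no-op
        }

        foreach (var endpoint in list) endpoint(message);
        return list.Count;
    }
}
```

The important detail to notice is the no-op branch: publishing a message with no subscribers is not an error, it just does nothing.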
Let’s first create our new Notification Service from scratch, with a quick call to:
dotnet new console
After that, I admittedly took a shortcut and just added a project reference to our Help Desk API project, because it’s late at night as I write this and I’m lazy by nature. In real usage you’d probably at least start with a shared library just to define the message types that are exchanged between two or more processes:
To be clear, Wolverine does not require you to use shared types for the message bodies between Wolverine applications, but that frequently turns out to be the easiest mechanism to get started and it can easily be sufficient in many situations.
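Such a shared contracts library might hold nothing but the message records exchanged between the two processes. The project and namespace names below are hypothetical, but the RingAllTheAlarms shape matches how the message is used elsewhere in this post:

```csharp
// Hypothetical HelpDesk.Messages project: only the message contracts
// shared between the Help Desk API and the Notification Service live here,
// so neither service needs a reference to the other's internals.
using System;

namespace HelpDesk.Messages
{
    // Published by the Help Desk API when an incident becomes critical
    public record RingAllTheAlarms(Guid IncidentId);
}
```

Because the message is a record, two instances with the same incident id are value-equal, which makes assertions in tests pleasantly simple.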
Back to our new Notification Service. I’m going to add a reference to Wolverine’s Rabbit MQ transport library (Wolverine.RabbitMQ) with:
dotnet add package WolverineFx.RabbitMQ
With that in place, the entire (faked up) Notification Service code is this:
using Helpdesk.Api;
using Microsoft.Extensions.Hosting;
using Oakton;
using Wolverine;
using Wolverine.RabbitMQ;

return await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Connect to Rabbit MQ. The default like this expects to connect
        // to a Rabbit MQ broker running on the localhost at the default
        // Rabbit MQ port
        opts.UseRabbitMq();

        // Tell Wolverine to listen for incoming messages
        // from a Rabbit MQ queue
        opts.ListenToRabbitQueue("notifications");
    }).RunOaktonCommands(args);

// Just to see that there is a message handler for the RingAllTheAlarms
// message
public static class RingAllTheAlarmsHandler
{
    public static void Handle(RingAllTheAlarms message)
    {
        Console.WriteLine("I'm going to scream out an alert about incident " + message.IncidentId);
    }
}
Moving back to our Help Desk API project, I’m going to add a reference to the WolverineFx.RabbitMQ Nuget, and add this code to define the outgoing subscription for the RingAllTheAlarms message:
builder.Host.UseWolverine(opts =>
{
    // Other configuration...

    // Opt into the transactional inbox/outbox on all messaging
    // endpoints
    opts.Policies.UseDurableOutboxOnAllSendingEndpoints();

    // Connecting to a local Rabbit MQ broker
    // at the default port
    opts.UseRabbitMq();

    // Adding a single Rabbit MQ messaging rule
    opts.PublishMessage<RingAllTheAlarms>()
        .ToRabbitExchange("notifications");

    // Other configuration...
});
I’m going to very highly recommend that you read up a little bit on Rabbit MQ’s model of exchanges, queues, and bindings before you try to use it in anger, because every message broker seems to have subtly different behavior. Just for this post though, you’ll see that the Help Desk API is publishing to a Rabbit MQ exchange named “notifications” and the Notification Service is listening to a queue named “notifications”. To fully connect the two services through Rabbit MQ, you’d need to add a binding from the “notifications” exchange to the “notifications” queue. You can certainly do that through any Rabbit MQ management mechanism, but you could also define that binding in Wolverine itself and let Wolverine put that all together for you at runtime, much like Wolverine and Marten can for their database schema dependencies.
Let’s revisit the Notification Service code and have it set up a little bit more for us in the Wolverine configuration, automatically building the right Rabbit MQ exchange, queue, and binding between our applications like so:
return await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseRabbitMq()
            // Make it build out any missing exchanges, queues, or bindings
            // that the system knows about as necessary
            .AutoProvision()

            // This is just to make Wolverine help us out to configure
            // Rabbit MQ end to end. This isn't mandatory, but it might
            // help you be more productive at development time
            .BindExchange("notifications").ToQueue("notifications", "notification_binding");

        // Tell Wolverine to listen for incoming messages
        // from a Rabbit MQ queue
        opts.ListenToRabbitQueue("notifications");
    }).RunOaktonCommands(args);
And that’s actually that: we’re completely ready to go, assuming there’s a Rabbit MQ broker running on our local development box — which I usually do just through docker compose (here’s the docker-compose.yaml file from this sample application).
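For reference, a minimal docker-compose file for a local Rabbit MQ broker looks roughly like this. This is a sketch from memory rather than the exact file in the sample repository; the management-tagged image also gives you a web dashboard at http://localhost:15672:

```yaml
# Minimal local Rabbit MQ broker for development
version: "3"
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"    # AMQP port that Wolverine connects to by default
      - "15672:15672"  # management UI
```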
One thing to note for folks seeing this who are coming from a MassTransit or NServiceBus background, Wolverine does not need you to specify any kind of connectivity between message handlers and listening endpoints. That might become an “opt in” feature some day, but there’s nothing like that in Wolverine today.
Summary and What’s Next
I just barely exposed a little bit of what Wolverine can do while using Rabbit MQ as a messaging transport. There are a ton of levers and knobs to adjust for increased throughput or for stricter message ordering. There’s also a conventional routing capability that might be a good default for getting started.
As far as when you should use asynchronous messaging, my thinking is that you should pretty well always use asynchronous messaging between two processes unless you have to have an inline response from the downstream system. Otherwise, I think that using asynchronous messaging techniques helps to decouple systems from each other temporally, and gives you more tools for creating robust and resilient systems through error handling policies.
And speaking of “resiliency”, I think that will be the subject of one of the remaining posts in this series.
There’s a new Marten 7.0 beta 4 release out today with a new round of bug fixes and some performance enhancements. We’re getting closer to getting a 7.0 release out, so I thought I’d update the world a bit on what’s remaining. I’d also love to give folks a chance to weigh in on some of the outstanding work that may or may not make the cut for 7.0 or slide to later. Due to some commitments to clients, I’m hoping to have the release out by early February at the latest, but we’ll see.
A Wolverine 2.0 release will follow shortly, but that’s going to be almost completely about upgrading Wolverine to use the latest Marten and Weasel dependencies and shouldn’t result in any breaking changes.
What’s In Flight or Outstanding
There are several medium-sized efforts either in flight or yet to come. User feedback is certainly welcome:
Low level database execution improvements. We’re doing a lot of work to integrate relatively newer ADO.Net features from Npgsql that will help us wring out a little better performance. As part of that work, we’re going to replace our homegrown resiliency feature (IRetryPolicy) with a more efficient and likely more effective approach using Polly baked into Marten. I was hesitant to take on Polly before because of its tendency to be a diamond dependency issue, but I think we’ve changed our minds about the risk/reward equation here. I think we’ll also get a little performance and scalability boost by using Polly’s static Lambda approach in place of our current approach. The reality is that while you probably shouldn’t be too consumed with micro-optimizations in application development, it’s much more valuable in infrastructure code like Marten to be as performant as possible.
Open Telemetry support baked in. I think this is a low hanging fruit issue that might be a great place for anyone to jump in. Please feel free to weigh in on the possible approaches we’ve outlined.
Better scalability for asynchronous projections and the ability to deploy projection and event changes with less or even zero downtime compared to the current Marten. I’ll refer you to a longer discussion for feedback on possible directions. That discussion also touches on topics around event data migrations and archival strategies.
Enabling built in support for strong typed identifiers. This is far more work than I personally think it’s worth, but plenty of folks tell us that it’s a must have feature even to the point where they tell us they won’t use Marten until this exists. This kind of thing is what drives me personally to make disparaging remarks about the DDD community’s seeming love of code ceremony. Grr.
“Partial” document updates with native PostgreSQL features. We’ve had this functionality for years, but it depends on the PLv8 extension to PostgreSQL, which is becoming continuously harder to use, especially in the cloud. I think this could be a big win, especially for users coming from MongoDb.
Dynamic Tenant Database Discovery — customer request, and that means it goes to the top of the priority list. Weird how it works that way.
What else, folks? I don’t want the release to drag on forever, but there’s plenty of other things to do.
LINQ Improvements
From my perspective, the effective rewrite of the LINQ provider support for V7 is the single biggest change and improvement for Marten 7. As always, I’m hopeful that this shores up Marten’s technical foundation for years to come. I’d sum that work up as:
Glass Half Full: the new LINQ support covers a lot more scenarios that were missing previously, and especially improves both the number of supported use cases and the efficiency of the generated SQL for querying within child collections in many cases. Moreover, the new LINQ support should be better about telling you when it can’t support something instead of doing erroneous searches, and should be in much better shape for when we need to add new permutations to the support from user requests later.
Glass Half Empty: It took a long, long time to get this done and it was quite an opportunity cost for me personally. We also got a large GitHub sponsorship for this work, and while I was and am very grateful for that, I’m also feeling guilty about how long it took to finish that work.
And that folks is the life of a semi-successful OSS author in one nutshell.
I’m taking a short detour in this series today as I prepare to give my “Contrarian Architecture” talk at the CodeMash 2024 conference. In that talk (here’s a version from NDC Oslo 2023), I’m going to spend some time more or less bashing stereotypical usages of the prescriptive Clean or Onion Architecture approach.
While there’s nothing to prevent you from using either Wolverine or Marten within a typical Clean Architecture style code organization, the “Critter Stack” plays well within a lower code ceremony vertical slice architecture that I personally prefer.
First though, let’s talk about what I don’t like about the stereotypical Clean/Onion Architecture approach you commonly find in enterprise .NET systems. With this common mode of code organization, the incident tracking help desk service we have been building in this series might be organized something like:
Class Name          Project
IncidentController  HelpDesk.API
IncidentService     HelpDesk.ServiceLayer
Incident            HelpDesk.Domain
IncidentRepository  HelpDesk.Data
Don’t laugh because a lot of people do this
This kind of code structure is primarily organized around the “nouns” of the system and reliant on the formal layering prescriptions to try to create a healthy separation of concerns. It’s probably perfectly fine for pure CRUD applications, but breaks down very badly over time for more workflow centric applications.
I despise this form of code organization in very large systems because:
It scatters closely related code throughout the codebase
You typically don’t spend a lot of time trying to reason about an entire layer at a time. Instead, you’re largely worried about the behavior of one single use case and the logical flow through the entire stack for that one use case
The code layout tells you very little about what the application does as it’s primarily focused around technical concerns (hat tip to David Whitney for that insight)
It’s high ceremony. Lots of layers, interfaces, and just a lot of stuff
Abstractions around the low level persistence infrastructure can very easily lead you to poorly performing code and can make it much harder later to understand why code is performing poorly in production
Shifting to the Idiomatic Wolverine Approach
Let’s say that we’re sitting around a fire boasting of our victories in software development (that’s a lie, I’m telling horror stories about the worst systems I’ve ever seen) and you ask me “Jeremy, what is best in code?”
And I’d respond:
Low ceremony code that’s easy to read and write
Closely related code is close together
Unrelated code is separated
Code is organized around the “verbs” of the system, which in the case of Wolverine probably means the commands
The code structure by itself gives some insight into what the system actually does
Taking our LogIncident command, I’m going to put every drop of code related to that command in a single file called “LogIncident.cs”:
public record LogIncident(
    Guid CustomerId,
    Contact Contact,
    string Description
)
{
    public class LogIncidentValidator : AbstractValidator<LogIncident>
    {
        // I stole this idea of using inner classes to keep them
        // close to the actual model from *someone* online,
        // but don't remember who
        public LogIncidentValidator()
        {
            RuleFor(x => x.Description).NotEmpty().NotNull();
            RuleFor(x => x.Contact).NotNull();
        }
    }
};

public record NewIncidentResponse(Guid IncidentId)
    : CreationResponse("/api/incidents/" + IncidentId);

public static class LogIncidentEndpoint
{
    [WolverineBefore]
    public static async Task<ProblemDetails> ValidateCustomer(
        LogIncident command,

        // Method injection works just fine within middleware too
        IDocumentSession session)
    {
        var exists = await session.Query<Customer>().AnyAsync(x => x.Id == command.CustomerId);
        return exists
            ? WolverineContinue.NoProblems
            : new ProblemDetails { Detail = $"Unknown customer id {command.CustomerId}", Status = 400 };
    }

    [WolverinePost("/api/incidents")]
    public static (NewIncidentResponse, IStartStream) Post(LogIncident command, User user)
    {
        var logged = new IncidentLogged(
            command.CustomerId,
            command.Contact,
            command.Description,
            user.Id);

        var op = MartenOps.StartStream<Incident>(logged);
        return (new NewIncidentResponse(op.StreamId), op);
    }
}
Every single bit of code related to handling this operation in our system is in one file that we can read top to bottom. A few significant points about this code:
I think it’s working out well in other Wolverine systems to largely name the files based on command names or the request body models for HTTP endpoints, at least in systems built with a CQRS approach. Using the command name allows the system to be more self-descriptive when you’re just browsing the codebase for the first time
The behavioral logic is still isolated to the Post() method, and even though there is some direct data access in the same class in its ValidateCustomer() middleware method, the Post() method is a pure function that can be unit tested without any mocks
There’s also no code unrelated to LogIncident anywhere in this file, so you bypass the problem you get in noun-centric code organizations where you have to train your brain to ignore a lot of unrelated code in an IncidentService that has nothing to do with the particular operation you’re working on at any one time
I’m not bothering to wrap any kind of repository abstraction around Marten’s IDocumentSession in this code sample. That’s not to say that I wouldn’t do so in the case of something more complicated, and especially if there’s some kind of complex set of data queries that would need to be reused in other commands
You can clearly see the cause and effect between the command input and any outcomes of that command. I think this is an important discussion all by itself because it can easily be hard to reason about that same kind of cause and effect in systems that split responsibilities within a single use case across different areas of the code and even across different projects or components. Codebases that are hard to reason about are very prone to regression errors down the line — and that’s the voice of painful experience talking.
I certainly wouldn’t use this “single file” approach on larger, more complex use cases, but it’s working out well for early Wolverine adopters so far. Since much of my criticism of Clean/Onion Architecture approaches is really about using prescriptive rules too literally, I would also say that I would deviate from this “single file” approach any time it was valuable to reuse code across commands or queries or just when the message handling for a single message gets complex enough to need or want other files to separate responsibilities just within that one use case.
Summary and What’s Next
Wolverine is optimized for a “Vertical Slice Architecture” code organization approach. Both Marten and Wolverine are meant to require as little code ceremony as they can, and that also makes the vertical slice architecture, and even the single file approach I showed here, feasible.
Let’s start this post by making a bold statement that I’ll probably regret, but still spend the rest of this post trying to back up:
Remembering the basic flow of our incident tracking, help desk service in this series, we’ve got this workflow:
Starting in the middle with the “Categorize Incident”, our system’s workflow is something like:
A technician will send a request to change the category of the incident
If the system determines that the request will be changing the category, the system will append a new event to mark that state, and also publish a new command message to try to assign a priority to the incident automatically based on the customer data
When the system handles that new “Try Assign Priority” command, it will look at the customer’s settings, and likewise append another event to record the change of priority for the incident. If the incident becomes critical, it will also publish a message to an external “Notification Service” — but for this post, let’s just worry about whether we’re correctly publishing the right message
In an earlier post, I showed this version of a message handler for the CategoriseIncident command:
public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();

    [AggregateHandler]
    // The object? as return value will be interpreted
    // by Wolverine as appending one or zero events
    public static async Task<object?> Handle(
        CategoriseIncident command,
        IncidentDetails existing,
        IMessageBus bus)
    {
        if (existing.Category != command.Category)
        {
            // Send the message to any and all subscribers to this message
            await bus.PublishAsync(
                new TryAssignPriority { IncidentId = existing.Id });

            return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }

        // Wolverine will interpret this as "do no work"
        return null;
    }
}
Notice that this handler is injecting the Wolverine IMessageBus service into the handler method. We could test this code as is with a “fake” for IMessageBus just to verify whether the expected outgoing TryAssignPriority message goes out or not. Helpfully, Wolverine even supplies a “spy” version of IMessageBus called TestMessageContext that can be used in unit tests as a stand-in just to record what the outgoing messages were.
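The “spy” idea can be sketched in miniature with a hand-rolled recorder. To be very clear, IPublisher and the handler below are illustrative stand-ins of my own making, not Wolverine’s actual IMessageBus or TestMessageContext, which are richer than this:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A stand-in publisher abstraction, just to illustrate the spy pattern
public interface IPublisher
{
    void Publish(object message);
}

// The spy: records every outgoing message so a test can assert on them
public class SpyPublisher : IPublisher
{
    public List<object> Sent { get; } = new();
    public void Publish(object message) => Sent.Add(message);
}

public record TryAssignPriority(Guid IncidentId);

public static class Example
{
    // A handler-shaped method that publishes through the abstraction
    public static void Handle(Guid incidentId, IPublisher publisher)
        => publisher.Publish(new TryAssignPriority(incidentId));
}
```

A test creates a SpyPublisher, calls Handle(), and then asserts that the recorded Sent collection contains the expected TryAssignPriority message.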
My strong preference though is to use Wolverine’s concept of cascading messages to write a pure function such that the behavioral logic can be tested without any mocks, stubs, or other fakes. In the sample code above, we had been using Wolverine as “just” a “Mediator” within an MVC Core controller. This time around, let’s ditch the unnecessary “Mediator” ceremony and use a Wolverine HTTP endpoint for the same functionality. In this case we can write the same functionality as a pure function like so:
public static class CategoriseIncidentEndpoint
{
    [WolverinePost("/api/incidents/categorise"), AggregateHandler]
    public static (Events, OutgoingMessages) Post(
        CategoriseIncident command,
        IncidentDetails existing,
        User user)
    {
        var events = new Events();
        var messages = new OutgoingMessages();

        if (existing.Category != command.Category)
        {
            // Append a new event to the incident
            // stream
            events += new IncidentCategorised
            {
                Category = command.Category,
                UserId = user.Id
            };

            // Send a command message to try to assign the priority
            messages.Add(new TryAssignPriority
            {
                IncidentId = existing.Id,
                UserId = user.Id
            });
        }

        return (events, messages);
    }
}
In the endpoint above, we’re “pushing” all of the required inputs for our business logic into the Post() method, which makes a decision about what state changes should be captured and what additional actions should be taken through outgoing, cascaded messages.
A couple notes about this code:
It’s using the aggregate handler workflow we introduced in an earlier post to “push” the IncidentDetails aggregate for the incident stream into the method. We’ll need this information to “decide” what to do next
The Events type is a Wolverine construct that tells Wolverine “hey, the objects in this collection are meant to be appended as events to the event stream for this aggregate.”
Likewise, the OutgoingMessages type is a Wolverine construct that — wait for it — tells Wolverine that the objects contained in that collection should be published as cascading messages after the database transaction succeeds
The Marten + Wolverine transactional middleware is calling Marten’s IDocumentSession.SaveChangesAsync() to commit the logical transaction, and also dealing with the transaction outbox mechanics for the cascading messages from the OutgoingMessages collection.
Alright, with all that said, let’s look at a unit test for a CategoriseIncident command message that results in the category being changed:
[Fact]
public void raise_categorized_event_if_changed()
{
    var command = new CategoriseIncident
    {
        Category = IncidentCategory.Database
    };

    var details = new IncidentDetails(
        Guid.NewGuid(),
        Guid.NewGuid(),
        IncidentStatus.Closed,
        Array.Empty<IncidentNote>(),
        IncidentCategory.Hardware);

    var user = new User(Guid.NewGuid());

    var (events, messages) = CategoriseIncidentEndpoint.Post(command, details, user);

    // There should be one appended event
    var categorised = events.Single()
        .ShouldBeOfType<IncidentCategorised>();

    categorised.Category.ShouldBe(IncidentCategory.Database);
    categorised.UserId.ShouldBe(user.Id);

    // And there should be a single outgoing message
    var message = messages.Single()
        .ShouldBeOfType<TryAssignPriority>();

    message.IncidentId.ShouldBe(details.Id);
    message.UserId.ShouldBe(user.Id);
}
In real life, I’d probably opt to break that unit test into a BDD-like context and individual tests to assert the expected event(s) being appended and the expected outgoing messages, but this is conceptually easier and I didn’t sleep well last night, so this is what you get!
Let’s move on to the message handler for the TryAssignPriority message, and also make this a pure function so we can easily test the behavior:
public static class TryAssignPriorityHandler
{
    // Wolverine will call this method before the "real" Handle() method,
    // and it can "magically" connect that the Customer object should be
    // delivered to the Handle() method at runtime
    public static Task<Customer?> LoadAsync(IncidentDetails details, IDocumentSession session)
    {
        return session.LoadAsync<Customer>(details.CustomerId);
    }

    // There's some database lookup at runtime, but I've isolated that above,
    // so the behavioral logic that "decides" what to do is a pure function below.
    [AggregateHandler]
    public static (Events, OutgoingMessages) Handle(
        TryAssignPriority command,
        IncidentDetails details,
        Customer customer)
    {
        var events = new Events();
        var messages = new OutgoingMessages();

        if (details.Category.HasValue && customer.Priorities.TryGetValue(details.Category.Value, out var priority))
        {
            if (details.Priority != priority)
            {
                events.Add(new IncidentPrioritised(priority, command.UserId));

                if (priority == IncidentPriority.Critical)
                {
                    messages.Add(new RingAllTheAlarms(command.IncidentId));
                }
            }
        }

        return (events, messages);
    }
}
I’d ask you to notice the LoadAsync() method above. It’s part of the logical handler workflow, but Wolverine is letting us keep that separate from the main “decider” message Handle() method. We’d have to test the entire handler with an integration test eventually, but we can happily write fast running, fine grained unit tests on the expected behavior by just “pushing” inputs into the Handle() method and measuring the events and outgoing messages just by checking the return values.
Summary and What’s Next
Wolverine’s approach has always been driven by the desire to make your application code as testable as possible. Originally that meant to just keep the framework (Wolverine itself) out of your application code as much as possible. Later on, the Wolverine community was influenced by more Functional Programming techniques and Jim Shore’s paper on Testing without Mocks.
Specifically, Wolverine embraced the idea of the “A-Frame Architecture”, with Wolverine itself in the role of the mediator/controller/conductor that coordinates between infrastructural concerns like Marten and your own business logic code in message handlers or HTTP endpoint methods, without creating a direct coupling between your behavioral logic code and your infrastructure:
If you take advantage of Wolverine features like cascading messages, side effects, and compound handlers to decompose your system in a more FP-esque way while letting Wolverine handle the coordination, you can arrive at much more testable code.
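In miniature, that A-Frame shape looks like the following sketch: pure decision logic takes state in and returns decisions out, while a thin shell (the role Wolverine plays for you) does the I/O on either side. All of the types here are simplified stand-ins invented for illustration:

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-ins for the real domain types
public record State(int Priority);
public record Command(int NewPriority);
public record PriorityChanged(int Priority);

public static class Decider
{
    // Pure function: inputs in, decisions out, no I/O, trivially unit-testable
    public static IReadOnlyList<object> Decide(Command command, State state)
        => state.Priority == command.NewPriority
            ? Array.Empty<object>()
            : new object[] { new PriorityChanged(command.NewPriority) };
}

// The "conductor" role: load state, call the pure function, then
// persist or publish the results. Shown as plain code for illustration.
public static class Shell
{
    public static IReadOnlyList<object> Execute(
        Command command,
        Func<State> load,           // infrastructure: read
        Action<object> dispatch)    // infrastructure: write/publish
    {
        var decisions = Decider.Decide(command, load());
        foreach (var d in decisions) dispatch(d);
        return decisions;
    }
}
```

Tests can hammer on Decider.Decide() with plain objects, while the shell only needs a couple of coarse-grained integration tests.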
I said earlier that I’d get to Rabbit MQ messaging, and I’ll get around to that soon. To fit in with one of my CodeMash 2024 talks this Friday, I might take a little side trip into how the “Critter Stack” plays well inside of a low ceremony vertical slice architecture as I get ready to absolutely blast away at the “Clean/Onion Architecture” this week.
Heretofore in this series, I’ve been using ASP.Net MVC Core controllers anytime we’ve had to build HTTP endpoints for our incident tracking, help desk system in order to introduce new concepts a little more slowly.
If you would, let’s refer back to an earlier incarnation of an HTTP endpoint to handle our LogIncident command from an earlier post in this series:
public class IncidentController : ControllerBase
{
    private readonly IDocumentSession _session;

    public IncidentController(IDocumentSession session)
    {
        _session = session;
    }

    [HttpPost("/api/incidents")]
    public async Task<IResult> Log(
        [FromBody] LogIncident command
    )
    {
        var userId = currentUserId();

        var logged = new IncidentLogged(command.CustomerId, command.Contact, command.Description, userId);
        var incidentId = _session.Events.StartStream(logged).Id;
        await _session.SaveChangesAsync(HttpContext.RequestAborted);

        return Results.Created("/incidents/" + incidentId, incidentId);
    }

    private Guid currentUserId()
    {
        // let's say that we do something here that "finds" the
        // user id as a Guid from the ClaimsPrincipal
        var userIdClaim = User.FindFirst("user-id");
        if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var id))
        {
            return id;
        }

        throw new UnauthorizedAccessException("No user");
    }
}
Just to be clear as possible here, the Wolverine HTTP endpoints feature introduced in this post can be mixed and matched with MVC Core and/or Minimal API or even FastEndpoints within the same application and routing tree. I think the ASP.Net team deserves some serious credit for making that last sentence a fact.
Today though, let’s use Wolverine HTTP endpoints and rewrite that controller method above the “Wolverine way.” To get started, add a Nuget reference to the help desk service like so:
dotnet add package WolverineFx.Http
Next, let’s break into our Program file and add Wolverine endpoints to our routing tree near the bottom of the file like so:
app.MapWolverineEndpoints(opts =>
{
    // We'll add a little more in a bit...
});

// Just to show where the above code is within the context
// of the Program file...
return await app.RunOaktonCommands(args);
Now, let’s make our first cut at a Wolverine HTTP endpoint for the LogIncident command, but I’m purposely going to do it without introducing a lot of new concepts, so please bear with me a bit:
public record NewIncidentResponse(Guid IncidentId)
    : CreationResponse("/api/incidents/" + IncidentId);

public static class LogIncidentEndpoint
{
    [WolverinePost("/api/incidents")]
    public static NewIncidentResponse Post(
        // No [FromBody] stuff necessary
        LogIncident command,

        // Service injection is automatic,
        // just like message handlers
        IDocumentSession session,

        // You can take in an argument for HttpContext
        // or immediate members of HttpContext
        // as method arguments
        ClaimsPrincipal principal)
    {
        // Some ugly code to find the user id
        // within a claim for the currently authenticated
        // user
        Guid userId = Guid.Empty;
        var userIdClaim = principal.FindFirst("user-id");
        if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var claimValue))
        {
            userId = claimValue;
        }

        var logged = new IncidentLogged(command.CustomerId, command.Contact, command.Description, userId);
        var id = session.Events.StartStream<Incident>(logged).Id;

        return new NewIncidentResponse(id);
    }
}
Here are a few salient facts about the code above to explain what it’s doing:
Just like Wolverine message handlers, the endpoint methods are flexible and Wolverine generates code around your code to mediate between the raw HttpContext for the request and your code
We have already enabled Marten transactional middleware for our message handlers in an earlier post, and that happily applies to Wolverine HTTP endpoints as well. That helps make our endpoint method be just a synchronous method with the transactional middleware dealing with the ugly asynchronous stuff for us.
You can “inject” HttpContext and its immediate children into the method signatures as I did with the ClaimsPrincipal up above
Method injection is automatic without any silly [FromServices] attributes, and that’s what’s happening with the IDocumentSession argument
The LogIncident parameter is assumed to be the HTTP request body due to being the first argument, and it will be deserialized from the incoming JSON in the request body just like you’d probably expect
The NewIncidentResponse type is roughly equivalent to using Results.Created() in Minimal API to create a response body with the URL of the newly created Incident stream and an HTTP status code of 201 for “Created.” What’s different about Wolverine.HTTP is that it can infer OpenAPI documentation from the signature of that type without requiring you to pollute your code with [ProducesResponseType] attributes on the method just to get a “proper” OpenAPI document for the endpoint.
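For reference, the Marten transactional middleware mentioned in the second bullet was switched on in an earlier post; a minimal sketch of that configuration, assuming the standard Wolverine.Marten integration, is just:

builder.Host.UseWolverine(opts =>
{
    // Wrap any message handler or HTTP endpoint that uses Marten
    // in transactional middleware that calls SaveChangesAsync for us
    opts.Policies.AutoApplyTransactions();
});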
Moving on, that user id detection from the ClaimsPrincipal looks a little bit ugly to me, and it’s likely to be repetitive. Let’s ameliorate that by introducing Wolverine’s flavor of HTTP middleware and moving that code to this class:
// Using the custom type makes it easier
// for the Wolverine code generation to route
// things around. I'm not ashamed.
public record User(Guid Id);

public static class UserDetectionMiddleware
{
    public static (User, ProblemDetails) Load(ClaimsPrincipal principal)
    {
        var userIdClaim = principal.FindFirst("user-id");
        if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var id))
        {
            // Everything is good, keep on trucking with this request!
            return (new User(id), WolverineContinue.NoProblems);
        }

        // Nope, nope, nope. We got problems, so stop the presses and emit a ProblemDetails response
        // with a 400 status code telling the caller that there's no valid user for this request
        return (new User(Guid.Empty), new ProblemDetails { Detail = "No valid user", Status = 400 });
    }
}
Do note the usage of ProblemDetails in that middleware. If there is no user-id claim on the ClaimsPrincipal, we’ll abort the request by writing out the ProblemDetails stating there’s no valid user. This pattern is baked into Wolverine.HTTP to help create one-off request validations. We’ll utilize this quite a bit more later.
Next, I need to add that new bit of middleware to our application. As a shortcut, I’m going to just add it to every single Wolverine HTTP endpoint by breaking back into our Program file and adding this line of code:
app.MapWolverineEndpoints(opts =>
{
    // We'll add a little more in a bit...

    // Creates a User object in HTTP requests based on
    // the "user-id" claim
    opts.AddMiddleware(typeof(UserDetectionMiddleware));
});
Now, back to our endpoint code and I’ll take advantage of that middleware by changing the method to this:
[WolverinePost("/api/incidents")]
public static NewIncidentResponse Post(
    // No [FromBody] stuff necessary
    LogIncident command,

    // Service injection is automatic,
    // just like message handlers
    IDocumentSession session,

    // This will be created for us through the new user detection
    // middleware
    User user)
{
    var logged = new IncidentLogged(
        command.CustomerId,
        command.Contact,
        command.Description,
        user.Id);

    var id = session.Events.StartStream<Incident>(logged).Id;
    return new NewIncidentResponse(id);
}
This is a little bit of a bonus, but let’s also get rid of the need to inject the Marten IDocumentSession service by using a Wolverine “side effect” with this equivalent code:
[WolverinePost("/api/incidents")]
public static (NewIncidentResponse, IStartStream) Post(LogIncident command, User user)
{
    var logged = new IncidentLogged(
        command.CustomerId,
        command.Contact,
        command.Description,
        user.Id);

    var op = MartenOps.StartStream<Incident>(logged);
    return (new NewIncidentResponse(op.StreamId), op);
}
In the code above I’m using the MartenOps.StartStream() method to return a “side effect” that will create a new Marten stream as part of the request instead of directly interacting with the IDocumentSession from Marten. That’s a small thing you might not care for, but it can lead to the elimination of mock objects within your unit tests as you can now write a state-based test directly against the method above like so:
public class LogIncident_handling
{
    [Fact]
    public void handle_the_log_incident_command()
    {
        // This is trivial, but the point is that
        // we now have a pure function that can be
        // unit tested by pushing inputs in and measuring
        // outputs without any pesky mock object setup
        var contact = new Contact(ContactChannel.Email);
        var theCommand = new LogIncident(BaselineData.Customer1Id, contact, "It's broken");
        var theUser = new User(Guid.NewGuid());

        var (_, stream) = LogIncidentEndpoint.Post(theCommand, theUser);

        // Test the *decision* to emit the correct
        // events and make sure all that pesky left/right
        // hand mapping is correct
        var logged = stream.Events.Single()
            .ShouldBeOfType<IncidentLogged>();

        logged.CustomerId.ShouldBe(theCommand.CustomerId);
        logged.Contact.ShouldBe(theCommand.Contact);
        logged.LoggedBy.ShouldBe(theUser.Id);
    }
}
Hey, let’s add some validation too!
We’ve already introduced middleware, so let’s incorporate the popular Fluent Validation library into our project and let it do some basic validation on the incoming LogIncident command body. If any validation fails, we’ll pull the ripcord and parachute out of the request with a ProblemDetails body and a 400 status code that describes the validation errors.
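Assuming the NuGet package follows the same WolverineFx.* naming convention we used for WolverineFx.Http earlier, pulling in the Fluent Validation integration looks something like this:

```shell
# Adds the Wolverine.Http + Fluent Validation integration
# (package name assumed from the WolverineFx.* convention)
dotnet add package WolverineFx.Http.FluentValidation
```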
Next, I have to add the usage of that middleware through this new line of code:
app.MapWolverineEndpoints(opts =>
{
    // Direct Wolverine.HTTP to use Fluent Validation
    // middleware to validate any request bodies where
    // there's a known validator (or many validators)
    opts.UseFluentValidationProblemDetailMiddleware();

    // Creates a User object in HTTP requests based on
    // the "user-id" claim
    opts.AddMiddleware(typeof(UserDetectionMiddleware));
});
And add an actual validator for our LogIncident command. In this case that model is just an internal concern of our service, so I’ll embed the new validator as an inner type of the command type like so:
public record LogIncident(
    Guid CustomerId,
    Contact Contact,
    string Description
)
{
    public class LogIncidentValidator : AbstractValidator<LogIncident>
    {
        // I stole this idea of using inner classes to keep them
        // close to the actual model from *someone* online,
        // but don't remember who
        public LogIncidentValidator()
        {
            RuleFor(x => x.Description).NotEmpty().NotNull();
            RuleFor(x => x.Contact).NotNull();
        }
    }
};
Now, Wolverine does have to “know” about these validators to use them within the endpoint handling, so these types need to be registered in the application’s IoC container against the right IValidator&lt;T&gt; interface. You don’t have to do that registration by hand, though: Wolverine has a (Lamar) helper that finds and registers these validators within your project in the way that’s most efficient at runtime (i.e., there’s a micro-optimization that gives a validator a singleton lifetime in the container if Wolverine can see that the type is stateless). I’ll use that little helper in our Program file within the UseWolverine() configuration like so:
builder.Host.UseWolverine(opts =>
{
    // lots more stuff unfortunately, but focus on the line below
    // just for now:-)

    // Apply the validation middleware *and* discover and register
    // Fluent Validation validators
    opts.UseFluentValidation();
});
And that’s that. We’ve now got Fluent Validation in the request handling for the LogIncident command. In a later section, I’ll explain how Wolverine does this, and try to sell you all on the idea that Wolverine is able to do this more efficiently than other commonly used frameworks *cough* MediatR *cough* that depend on conditional runtime code.
One-off validation with “Compound Handlers”
As you might have noticed, the LogIncident command has a CustomerId property that we’re using as-is within our HTTP handler. We should never just trust the inputs of a random client, so let’s at least validate that the command refers to a real customer.
Now, typically I like to make Wolverine message handler or HTTP endpoint methods be the “happy path” and handle exception cases and one-off validations with a Wolverine feature we inelegantly call “compound handlers.”
I’m going to add a new method to our LogIncidentEndpoint class like so:
// Wolverine has some naming conventions for Before/Load
// or After/AfterAsync, but you can use a more descriptive
// method name and help Wolverine out with an attribute
[WolverineBefore]
public static async Task<ProblemDetails> ValidateCustomer(
    LogIncident command,

    // Method injection works just fine within middleware too
    IDocumentSession session)
{
    var exists = await session.Query<Customer>().AnyAsync(x => x.Id == command.CustomerId);

    return exists
        ? WolverineContinue.NoProblems
        : new ProblemDetails { Detail = $"Unknown customer id {command.CustomerId}", Status = 400 };
}
Integration Testing
While the individual methods and middleware can all be tested separately, you do want to put everything together with an integration test to prove out whether or not all this magic really works. As I described in an earlier post where we learned how to use Alba to create an integration testing harness for a “critter stack” application, we can write an end to end integration test against the HTTP endpoint like so (this sample doesn’t cover every permutation, but hopefully you get the point):
[Fact]
public async Task create_a_new_incident_happy_path()
{
    // We'll need a user
    var user = new User(Guid.NewGuid());

    // Log a new incident first
    var initial = await Scenario(x =>
    {
        var contact = new Contact(ContactChannel.Email);
        x.Post.Json(new LogIncident(BaselineData.Customer1Id, contact, "It's broken")).ToUrl("/api/incidents");
        x.StatusCodeShouldBe(201);
        x.WithClaim(new Claim("user-id", user.Id.ToString()));
    });

    var incidentId = initial.ReadAsJson<NewIncidentResponse>().IncidentId;

    using var session = Store.LightweightSession();
    var events = await session.Events.FetchStreamAsync(incidentId);

    // FetchStreamAsync() returns IEvent wrappers, so check the Data
    var logged = events.First().Data.ShouldBeOfType<IncidentLogged>();

    // This deserves more assertions, but you get the point...
    logged.CustomerId.ShouldBe(BaselineData.Customer1Id);
}

[Fact]
public async Task log_incident_with_invalid_customer()
{
    // We'll need a user
    var user = new User(Guid.NewGuid());

    // Reject the new incident because the Customer for
    // the command cannot be found
    await Scenario(x =>
    {
        var contact = new Contact(ContactChannel.Email);
        var nonExistentCustomerId = Guid.NewGuid();
        x.Post.Json(new LogIncident(nonExistentCustomerId, contact, "It's broken")).ToUrl("/api/incidents");
        x.StatusCodeShouldBe(400);
        x.WithClaim(new Claim("user-id", user.Id.ToString()));
    });
}
}
Um, how does this all work?
So far I’ve shown you some “magic” code, and that tends to really upset some folks. I also made some big time claims about how Wolverine is able to be more efficient at runtime (alas, there is a significant “cold start” problem you can easily work around, so don’t get upset if your first ever Wolverine request isn’t snappy).
Wolverine works by using code generation to wrap its handling code around your code. That includes the middleware, and the usage of any IoC services as well. Moreover, do you know what the fastest IoC container is in all the .NET land? I certainly think that Lamar is at least in the game for that one, but nope, the answer is no IoC container at runtime.
One of the advantages of this approach is that we can preview the generated code to unravel the “magic” and explain what Wolverine is doing at runtime. Moreover, we’ve tried to add descriptive comments to the generated code to further explain what and why code is in place.
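Since our Program file ends with RunOaktonCommands(), you can dump this generated code yourself from the command line; here’s a sketch using Wolverine’s standard codegen commands:

```shell
# Preview the source code that Wolverine generates for every
# message handler and HTTP endpoint in this application
dotnet run -- codegen preview

# Or write the generated code to disk so you can browse it in your IDE
dotnet run -- codegen write
```

Pre-building this generated code ahead of time (Wolverine supports loading pre-generated types via opts.CodeGeneration.TypeLoadMode) is also the easy workaround for the “cold start” problem mentioned above.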
Here’s the generated code for our LogIncident endpoint (warning, ugly generated code ahead):
// <auto-generated/>
#pragma warning disable
using FluentValidation;
using Microsoft.AspNetCore.Routing;
using System;
using System.Linq;
using Wolverine.Http;
using Wolverine.Http.FluentValidation;
using Wolverine.Marten.Publishing;
using Wolverine.Runtime;

namespace Internal.Generated.WolverineHandlers
{
    // START: POST_api_incidents
    public class POST_api_incidents : Wolverine.Http.HttpHandler
    {
        private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
        private readonly Wolverine.Runtime.IWolverineRuntime _wolverineRuntime;
        private readonly Wolverine.Marten.Publishing.OutboxedSessionFactory _outboxedSessionFactory;
        private readonly FluentValidation.IValidator<Helpdesk.Api.LogIncident> _validator;
        private readonly Wolverine.Http.FluentValidation.IProblemDetailSource<Helpdesk.Api.LogIncident> _problemDetailSource;

        public POST_api_incidents(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Runtime.IWolverineRuntime wolverineRuntime, Wolverine.Marten.Publishing.OutboxedSessionFactory outboxedSessionFactory, FluentValidation.IValidator<Helpdesk.Api.LogIncident> validator, Wolverine.Http.FluentValidation.IProblemDetailSource<Helpdesk.Api.LogIncident> problemDetailSource) : base(wolverineHttpOptions)
        {
            _wolverineHttpOptions = wolverineHttpOptions;
            _wolverineRuntime = wolverineRuntime;
            _outboxedSessionFactory = outboxedSessionFactory;
            _validator = validator;
            _problemDetailSource = problemDetailSource;
        }

        public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
        {
            var messageContext = new Wolverine.Runtime.MessageContext(_wolverineRuntime);

            // Building the Marten session
            await using var documentSession = _outboxedSessionFactory.OpenSession(messageContext);

            // Reading the request body via JSON deserialization
            var (command, jsonContinue) = await ReadJsonAsync<Helpdesk.Api.LogIncident>(httpContext);
            if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;

            // Execute FluentValidation validators
            var result1 = await Wolverine.Http.FluentValidation.Internals.FluentValidationHttpExecutor.ExecuteOne<Helpdesk.Api.LogIncident>(_validator, _problemDetailSource, command).ConfigureAwait(false);

            // Evaluate whether or not the execution should be stopped based on the IResult value
            if (!(result1 is Wolverine.Http.WolverineContinue))
            {
                await result1.ExecuteAsync(httpContext).ConfigureAwait(false);
                return;
            }

            (var user, var problemDetails2) = Helpdesk.Api.UserDetectionMiddleware.Load(httpContext.User);

            // Evaluate whether the processing should stop if there are any problems
            if (!(ReferenceEquals(problemDetails2, Wolverine.Http.WolverineContinue.NoProblems)))
            {
                await WriteProblems(problemDetails2, httpContext).ConfigureAwait(false);
                return;
            }

            var problemDetails3 = await Helpdesk.Api.LogIncidentEndpoint.ValidateCustomer(command, documentSession).ConfigureAwait(false);

            // Evaluate whether the processing should stop if there are any problems
            if (!(ReferenceEquals(problemDetails3, Wolverine.Http.WolverineContinue.NoProblems)))
            {
                await WriteProblems(problemDetails3, httpContext).ConfigureAwait(false);
                return;
            }

            // The actual HTTP request handler execution
            (var newIncidentResponse_response, var startStream) = Helpdesk.Api.LogIncidentEndpoint.Post(command, user);

            // Placed by Wolverine's ISideEffect policy
            startStream.Execute(documentSession);

            // This response type customizes the HTTP response
            ApplyHttpAware(newIncidentResponse_response, httpContext);

            // Commit any outstanding Marten changes
            await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);

            // Have to flush outgoing messages just in case Marten did nothing because of https://github.com/JasperFx/wolverine/issues/536
            await messageContext.FlushOutgoingMessagesAsync().ConfigureAwait(false);

            // Writing the response body to JSON because this was the first 'return variable' in the method signature
            await WriteJsonAsync(httpContext, newIncidentResponse_response);
        }
    }

    // END: POST_api_incidents
}
Summary and What’s Next
The Wolverine.HTTP library was originally built to be a supplement to MVC Core or Minimal API by allowing you to create endpoints that integrated well into Wolverine’s messaging, transactional outbox functionality, and existing transactional middleware. It has since grown into more of a full-fledged alternative for building web services, but with potential for substantially less ceremony and far more testability than MVC Core.
In later posts I’ll talk more about the runtime architecture and how Wolverine squeezes out more performance by eliminating conditional runtime switching, reducing object allocations, and sidestepping the dictionary lookups that are endemic to other “flexible” .NET frameworks like MVC Core.
Wolverine.HTTP has not yet been used with Razor at all, and I’m not sure that will ever happen. Not to worry though, you can happily use Wolverine.HTTP in the same application with MVC Core controllers or even Minimal API endpoints.
OpenAPI support has been a constant challenge with Wolverine.HTTP as the OpenAPI generation in ASP.Net Core is very MVC-centric, but I think we’re in much better shape now.
In the next post, I think we’ll introduce asynchronous messaging with Rabbit MQ. At some point in this series I’m going to talk more about how the “Critter Stack” is well suited for a lower ceremony vertical slice architecture that (hopefully) creates a maintainable and testable codebase without all the typical Clean/Onion Architecture baggage that I could personally do without.
And just for fun…
My “History” with ASP.Net MVC
There’s no useful content in this section, just some navel-gazing. Even though I really haven’t had to use ASP.Net MVC too terribly much, I do have a long history with it:
In the beginning, there was what we now call ASP Classic, and it was good. For that day and time anyway, when we would happily code directly in production, before TDD and SOLID and namby-pamby “source control.” (I started my development career in “Shadow IT,” if that’s not obvious here.) And when we did use source control, it was VSS on the sly, because the official source control in the office was something far, far worse, a COBOL-centric tool that I don’t think even exists any longer.
Next there was ASP.Net WebForms and it was dreadful. I hated it.
We started collectively learning about Agile and wanted to practice Test Driven Development, and began to hate WebForms even more
Ruby on Rails came out in the mid-00s and made what later became the ALT.Net community absolutely loathe WebForms even more than we already did
At an MVP Summit on the Microsoft campus, the one and only Scott Guthrie, the Gu himself, showed a very early prototype of ASP.Net MVC to a handful of us and I was intrigued. That continued onward through the official unveiling of MVC at the very first ALT.Net open spaces event in Austin in ’07.
A few collaborators and I decided that early ASP.Net MVC was too high ceremony and went all “Captain Ahab” trying to make an alternative, open source framework called FubuMVC succeed, all while NancyFx, yet another Sinatra clone, became far more successful years before Microsoft finally got around to their own inevitable Sinatra clone (Minimal API)
After .NET Core came along and made .NET a helluva lot better ecosystem, I decided that whatever, MVC Core is fine, it’s not going to be the biggest problem on our project, and if the client wants to use it, there’s no need to be upset about it. It’s fine, no really.
MVC Core has gotten some incremental improvements over time that made it lower ceremony than earlier ASP.Net MVC, and that’s worth calling out as a positive
People working with MVC Core started running into the problem of bloated controllers, and started using early MediatR as a way to kind of, sort of manage controller bloat by offloading it into focused command handlers. I mocked that approach mercilessly, but that was partially because of how awful a time I had helping folks do absurdly complicated middleware schemes with MediatR using StructureMap or Lamar (MVC Core + MediatR is probably worthwhile as a forcing function to avoid the controller bloat problems with MVC Core by itself)
I worked on several long-running codebases built with MVC Core based on Clean Architecture templates that were ginormous piles of technical debt, and I absolutely blame MVC Core as a contributing factor for that
I’m back to mildly disliking MVC Core (and I’m outright hostile to Clean/Onion templates). Not that you can’t write maintainable systems with MVC Core, but I think that its idiomatic usage can easily lead to unmaintainable systems. Let’s just say that I don’t think that MVC Core — and especially combined with some kind of Clean/Onion Architecture template as it very commonly is out in the wild — leads folks to the “pit of success” in the long run
I’ve written posts like this in early January over the past several years laying out my grand hopes for my OSS work in the new year, and if you’re curious, you can check out my theoretical plans from 2021, 2022, and 2023. I’m always wrong of course, and there’s going to be a few things on my list this year that are repeats from the past couple years. I’m still going to claim my superpower as an OSS developer is having a much longer attention span than the average developer, but that cuts both ways.
But first…
My 2023 in Review
I had a huge year in 2023 by any possible measure. After 15 years of constant effort and a couple of hurtful false starts along the way, I started a new company named JasperFx Software LLC, both as a software development consultancy and to build a sustainable business model around the “Critter Stack” tools of Marten and Wolverine. Let me stop here and say how much I appreciate our early customers; I’m looking forward to expanding on those relationships in the new year!
Technically speaking, I was most excited about the Wolverine 1.0 release this summer, even if I was a little bit disappointed about how long it took! That was especially gratifying for me because Wolverine took 5-6 years and a pretty substantial reboot and rename in 2022 to fully gestate into what it is now. Wolverine might not be exploding in download numbers (yet), but it’s attracted a great community of early users and we’ve collectively pushed Wolverine to 1.13 now with a ton of new features and usability improvements that weren’t on my radar a year ago at all.
Personally, my highlights were finally meeting my collaborator and friend Oskar Dudycz in real life at NDC Oslo, which was supposed to have happened years earlier but was delayed by a certain worldwide pandemic. I also enjoyed my trip to the KCDC conference last year, and turned that into a road trip with my older son to visit family along the way.
My most important goal for 2024 is to reduce my personal stress level that’s been a fallout from spinning up the new company. Wish me luck on that one.
First, let’s start with what’s either already heavily in flight or is the work JasperFx is doing for clients in January/February of this year:
Marten 7.0 is moving along pretty well right now. The biggest chunk of work so far has been the completely revamped LINQ support, which both broadens the span of supported LINQ use cases and generates much more efficient SQL for nested child collection searching. Besides adding a lot more polish overall, we’re improving Marten’s performance by utilizing newer Npgsql features like data sources, finally building out a native “partial update” model that doesn’t depend on JavaScript running in PostgreSQL, and revamping Marten’s retry functionality. And that doesn’t even address improvements to the event store functionality.
There’ll also be a Wolverine 2.0 early this year, but I think that will mostly be about integrating Wolverine with Marten 7.0 and probably dropping .NET 6 support.
A JasperFx customer has engaged us to build out functionality to be able to utilize and manage new tenant databases inside a “database per tenant” multi-tenancy strategy using both Marten and Wolverine without requiring any downtime.
For a different JasperFx customer, we’re finally building in the long planned ability to scale Marten’s event store features to “really big” workloads by adaptively distributing projection work across the running nodes within a cluster instead of today’s “hot/cold” failover approach. That’s been on my list of goals for the New Year for several years running, but it finally happens early in 2024.
As part of the previous bullet, we’re building in the ability to do zero downtime deployments of changes to event projections. As part of those plans, we’re also aiming for true blue/green deployment capabilities for Marten’s event sourcing feature set.
“First class subscriptions” from Marten’s event store through Wolverine’s messaging features
Those last two bullet points bring me to JasperFx’s plans for world domination (or at least enough revenue to keep growing).
I know some folks are annoyed at our potential push toward an open core model that puts some advanced features behind a paid license. I understand that, but I think that option will create a more sustainable environment for the open source core to continue. My personal dividing line is that any feature that will almost automatically require us to help users utilize or configure it, or that implies very large transaction throughput, absolutely deserves to be paid for.
The details aren’t firmed up by any means, but the “Critter Stack” is moving to an open core model where the existing libraries continue under the MIT license while we also offer a new set of functionality for complex usages, advanced monitoring and management, and improved scalability. Tentatively, we’re shamelessly calling this the “CritterStackPro.” The first couple features are all related to the event sourcing scalability and deployment capabilities our largest customer has commissioned that I described up above. I’m very excited to see this all come to fruition after years of planning and discussions.
Beyond that, we’ve got some ideas and plenty of user feedback about what would be valuable for a potential management console for the “Critter Stack” tools.
Other Vaguely Thought Up Aspirations
Continue to push Marten & Wolverine to be the best possible technical platform for building event driven architectures
I can’t speak to any specifics yet (’cause I don’t know them anyway), but there will be some improved integration recipes for Marten/Wolverine with Hot Chocolate both via user request and through a JasperFx Software customer
Add more robust sample applications and tutorials for both Marten and Wolverine to our various websites
Oskar already has a new code name for our next “Critter Stack” tool. I’m not saying that will be Marten-like event sourcing support and first class Wolverine support using Sql Server, but I’m not “not saying” that’s what it would be either.
I’m still somewhat interested in an optimized serverless mode for both Marten and Wolverine to really leverage AOT compilation, but man, that’s going to take some effort
Somehow, some way, get or build out better infrastructure for the kind of automated integration testing we do with Marten and Wolverine
And that’s enough dreaming for now. I’m looking forward to seeing how the Critter Stack tools and our community continue to grow and progress in 2024. Happy New Year, everyone!
As we layer in new technical concepts from both Wolverine and Marten to build out our incident tracking, help desk API, let’s revisit this message handler from the last post that both saved data and published a message to an asynchronous, local queue that would act upon the newly saved data at some point.
public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();

    [AggregateHandler]
    // The object? as return value will be interpreted
    // by Wolverine as appending one or zero events
    public static async Task<object?> Handle(
        CategoriseIncident command,
        IncidentDetails existing,
        IMessageBus bus)
    {
        if (existing.Category != command.Category)
        {
            // Send the message to any and all subscribers to this message
            await bus.PublishAsync(
                new TryAssignPriority { IncidentId = existing.Id });

            return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }

        // Wolverine will interpret this as "do no work"
        return null;
    }
}
To recap, that message handler is potentially appending an IncidentCategorised event to an Incident event stream and publishing a command message named TryAssignPriority that will trigger a downstream action to try to assign a new priority to our Incident.
This relatively simple message handler (and we’ll make it even simpler in a later post in this series) creates a host of potential problems for our system:
In a naive usage of messaging tools, there’s a race condition between the outbound `TryAssignPriority` message being picked up by its handler and the database changes getting committed to the database. I have seen this cause nasty, hard-to-reproduce bugs in real-life production applications when, once in a while, the message is processed before the database changes are made and the system behaves incorrectly because the expected data has not yet been committed by the original command.
Maybe the actual message sending fails, but the database changes succeed, so the system is in an inconsistent state.
Maybe the outgoing message is happily published successfully, but the database changes fail, so that when the TryAssignPriority message is handled, it’s working against old system state.
Even if everything succeeds perfectly, the outgoing message should never actually be published until the transaction is complete.
To be clear, even without the usage of the outbox feature we’re about to use, Wolverine will apply an “in memory outbox” in message handlers such that all the messages published through IMessageBus.PublishAsync()/SendAsync()/etc. will be held in memory until the successful completion of the message handler. That by itself is enough to prevent the race condition between the database changes and the outgoing messages.
using Wolverine.Marten;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
    {
        // This would be from your configuration file in typical usage
        opts.Connection(Servers.PostgresConnectionString);
        opts.DatabaseSchemaName = "wolverine_middleware";
    })

    // This is the wolverine integration for the outbox/inbox,
    // transactional middleware, saga persistence we don't care about
    // yet
    .IntegrateWithWolverine()

    // Just letting Marten build out known database schema elements upfront
    // Helps with Wolverine integration in development
    .ApplyAllDatabaseChangesOnStartup();
Among other things, the call to IntegrateWithWolverine() up above directs Wolverine to use the PostgreSQL database for Marten as the durable storage for incoming and outgoing messages as part of Wolverine’s transactional inbox and outbox. The basic goal of this subsystem is to create consistency (really “eventual consistency“) between database transactions and outgoing messages without having to resort to endlessly painful distributed transactions.
Now, we’ve got another step to make. As of right now, Wolverine makes a determination of whether or not to use the durable outbox storage based on the destination of the outgoing message — with the theory that teams might easily want to mix and match durable messaging and less resource intensive “fire and forget” messaging within the same application. In this help desk service, we’ll make that easy and just say that all message processing in local queues (we set up TryAssignPriority to be handled through a local queue in the previous post) should be durable. In the UseWolverine() configuration, I’ll add this code to do that:
builder.Host.UseWolverine(opts =>
{
    // More configuration...

    // Automatic transactional middleware
    opts.Policies.AutoApplyTransactions();

    // Opt into the transactional inbox for local
    // queues
    opts.Policies.UseDurableLocalQueues();

    // Opt into the transactional inbox/outbox on all messaging
    // endpoints
    opts.Policies.UseDurableOutboxOnAllSendingEndpoints();

    // Set up from the previous post
    opts.LocalQueueFor<TryAssignPriority>()
        // By default, local queues allow for parallel processing with a maximum
        // parallel count equal to the number of processors on the executing
        // machine, but you can override the queue to be sequential and single file
        .Sequential()

        // Or add more to the maximum parallel count!
        .MaximumParallelMessages(10);
});
I (Jeremy) may very well declare this “endpoint by endpoint” declaration of durability to have been a big mistake because it confused some users, and may vote to change this in a later version of Wolverine.
With this outbox functionality in place, the messaging and transaction workflow behind the scenes of that handler shown above is to:
When the outgoing TryAssignPriority message is published, Wolverine will “route” that message into its internal Envelope structure that includes the message itself and all the necessary metadata and information Wolverine would need to actually send the message later
The outbox integration will append the outgoing message as a pending operation to the current Marten session
The IncidentCategorised event will be appended to the current Marten session
The Marten session is committed (IDocumentSession.SaveChangesAsync()), which will persist the new event and a copy of the outgoing Envelope into the outbox or inbox (scheduled messages or messages to local queues will be persisted in the incoming table) tables in one single, batched database command and by a native PostgreSQL transaction.
Assuming the database transaction succeeds, the outgoing messages are “released” to Wolverine’s outgoing message publishing in memory (we’re coming back to that last point in a bit)
Once Wolverine is able to successfully publish the message to the outgoing transport, it will delete the database table record for that outgoing message.
The 4th point is important I think. The close integration between Marten & Wolverine allows for more efficient processing by combining the database operations to minimize database round trips. In cases where the outgoing message transport is also batched (Azure Service Bus or AWS SQS for example), the database command to delete messages is also optimized for one call using PostgreSQL array support. I guess the main point of bringing this up is just to say there’s been quite a bit of thought and outright micro-optimizations done to this infrastructure.
But what about…?
the process is shut down cleanly? Wolverine tries to “drain” all in flight work first, and then “release” that process’s ownership of the persisted messages
the process crashes before messages floating around the local queues or outgoing message publishing finishes? Wolverine is able to detect a “dormant node” and reassign the persisted incoming and outgoing messages to be processed by another node. Or in the case of a single node, restart that work when the process is restarted.
the Wolverine tables don’t yet exist in the database? Wolverine has similar database management to Marten (it’s all the shared Weasel library doing that behind the scenes) and will happily build out missing tables in its default setting
an application using a database per tenant multi-tenancy strategy? Wolverine creates separate inbox or outbox storage in each tenant database. It’s complicated and took quite a while to build, but it works. If no tenant is specified, the inbox/outbox in a “default” database is used
I need to use the outbox approach for consistency outside of a message handler, like when handling an HTTP request that happens to make both database changes and publish messages? That’s a really good question, and arguably one of the best reasons to use Wolverine over other .NET messaging tools because as we’ll see in later posts, that’s perfectly possible and quite easy. There is a recipe for using the Wolverine outbox functionality with MVC Core or Minimal API shown here.
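As a preview of that recipe, here’s a rough sketch of enrolling a Marten session in Wolverine’s outbox from a plain MVC Core controller, using the `IMartenOutbox` service from Wolverine.Marten. The `CreateItem`/`Item`/`ItemCreated` types here are purely illustrative, not from the help desk system:

```csharp
using Marten;
using Microsoft.AspNetCore.Mvc;
using Wolverine.Marten;

public class CreateItem { public string Name { get; set; } }
public class Item { public Guid Id { get; set; } public string Name { get; set; } }
public class ItemCreated { public Guid Id { get; set; } }

public class CreateItemController : ControllerBase
{
    [HttpPost("/items/create")]
    public async Task Create(
        [FromBody] CreateItem command,
        [FromServices] IDocumentSession session,
        [FromServices] IMartenOutbox outbox)
    {
        // Enroll the Marten session in the Wolverine outbox so that any
        // published messages are persisted in the same transaction as
        // the document changes
        outbox.Enroll(session);

        var item = new Item { Name = command.Name };
        session.Store(item);

        // This message is held in the outbox until SaveChangesAsync() succeeds
        await outbox.PublishAsync(new ItemCreated { Id = item.Id });

        await session.SaveChangesAsync();
    }
}
```

The key point is that the `PublishAsync()` call does not immediately send anything; the message only goes out after the Marten transaction commits, giving you the same consistency guarantees as the message handler workflow described above.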
Summary and What’s Next
The outbox (and closely related inbox) support is hugely important inside of any system that uses asynchronous messaging as a way of creating consistency and resiliency. Wolverine’s implementation is significantly different (and honestly more complicated) than typical implementations that depend on just polling from an outbound database table. That’s a positive in some ways because we believe that Wolverine’s approach is more efficient and will lead to greater throughput.
There is also similar inbox/outbox functionality and optimizations for Wolverine with EF Core using either PostgreSQL or Sql Server as the backing storage. In the future, I hope to see the EF Core and Sql Server support improve, but for right now, the Marten integration is getting the most attention and usage. I’d also love to see Wolverine grow to include support for alternative databases, with Azure CosmosDb and AWS Dynamo Db being leading contenders. We’ll see.
As for what’s next, let me figure out what sounds easy for the next post in January. In the meantime, Happy New Year’s everybody!
Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools viable long-term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to helping you succeed with your software development projects on a consulting basis, whether you’re using any part of the Critter Stack or any other .NET server-side tooling. Reach us anytime at sales@jasperfx.net or on Discord!
I just published Wolverine 1.13.0 this evening with some significant improvements (see the release notes here). Beyond the normal scattering of bug fixes (and some significant improvements to the MQTT support in Wolverine for a JasperFx Software client who we’re helping build an IoT system), the main headline is that Wolverine does a substantially better job generating OpenAPI documentation for its HTTP endpoint model.
When I’m building web services of any kind I tend to lean very hard into doing integration testing with Alba, and because of that, I also tend not to use Swashbuckle or an equivalent tool very often during development and that has apparently been a blind spot for me in building Wolverine.HTTP so far. To play out a typical conversation I frequently have with other server side .NET developers talking about tooling for web services, I think:
MVC Core by itself — but this is hugely exacerbated by unfortunately popular prescriptive architectural patterns that organize code around NounController / NounService / NounRepository code organization — can easily lead to unmaintainable code in bloated controller classes and plenty of work for software consultants who get brought in later to clean up after the system wildly outgrew the original team’s “Clean Architecture” approach
I’m not convinced that Minimal API is any better for larger applications
The MVC Core controllers delegating to an inner “mediator” tool strategy may help divide the code into more maintainable code, but it adds what I think is an unacceptable level of extra code ceremony. Also exacerbated by prescriptive architectures
You should use Wolverine.HTTP! It’s much lower ceremony code than the “controllers + mediator” strategy, but still sets you up for a vertical slice architecture! And it integrates well with Marten or Wolverine messaging!
Other developers: This all sounds great! Pause. Hey, the web services with this thing seem to work just fine, but man, the Swashbuckle/NSwag/Angular client generation is all kinds of not good! I’m going back to “Wolverine as MediatR”.
To which I reply:
But no more of that after today because the Wolverine HTTP OpenAPI generation just took a huge leap forward after the 1.13 release!
Here’s a sample of what I mean. From the Wolverine.HTTP test suite, here’s an endpoint method that uses Marten to load an Invoice document, modify it, then save it:
The [Document] attribute tells Wolverine to load the Invoice from Marten, and part of its convention will match on the invoiceId route argument from the route pattern. That failed before in a couple ways:
Swashbuckle couldn’t be convinced that the Invoice argument wasn’t the request body
If you omitted a Guid invoiceId argument from the method signature, Swashbuckle didn’t see invoiceId as a route parameter and wouldn’t let you specify it in the Swashbuckle page.
Swashbuckle definitely didn’t understand that IMartenOp is a specialized Wolverine side effect that shouldn’t be used as the response body.
Now though, that endpoint looks like this in Swashbuckle:
Which is now correct and actually usable! (The 404 is valid because there’s a route argument and that status is returned if the Invoice referred to by the invoiceId route argument does not exist).
To call out some improvements for Wolverine.HTTP users, at least the Swashbuckle generation handles:
Route arguments that are used by Wolverine, but not necessarily in the main method signature. So no stupid, unused [FromRoute] string id method parameters
Querystring arguments are reflected in the Swashbuckle page
[FromHeader] arguments are reflected in Swashbuckle
HTTP endpoints that return some kind of tuple correctly show the response body if there is one — and that’s a commonly used and powerful capability of Wolverine’s HTTP endpoints that previously fouled up the OpenAPI generation
The usage of [EmptyResponse] correctly sets up the 204 status code behavior with no extraneous 200 or 404 status codes coming in by default
Ignoring method injected service parameters in the main method
For a little background, after getting plenty of helpful feedback from Wolverine users, I finally took some more serious time to go investigate the problems and root causes. After digging much deeper into the AspNetCore and Swashbuckle internals, I came to the conclusion that the OpenAPI internals in AspNetCore are far too hard-coded to MVC Core and that Wolverine absolutely had to have its own provider for generating OpenAPI documents off of its own semantic model. Fortunately, AspNetCore and Swashbuckle are both open source, so I could easily get to the source code to reverse engineer what they do under the covers (plus JetBrains Rider is a rock star at disassembling code on the fly). Wolverine.HTTP 1.13 now registers its own strategy for generating the OpenAPI documentation for Wolverine endpoints and keeps the built-in MVC Core-centric strategy from applying to the same Wolverine endpoints.
I’m sure there will be other issues over time, but so far, this has addressed every known issue with our OpenAPI generation. I’m hoping this goes a long way toward removing impediments to more users adopting Wolverine.HTTP because as I’ve said before, I think the Wolverine model leads to much lower ceremony code, better testability over all, and potentially to significantly better maintainability of larger systems that today turn into huge messes with MVC Core.
As we continue to add new functionality to our incident tracking, help desk system, we have been using Marten for persistence and Wolverine for command execution within MVC Core controllers (with cameos from Alba for testing support and Oakton for command line utilities).
In the workflow we’ve built out so far for the little system shown below, we’ve created a command called CategoriseIncident that for the moment is only sent to the system through HTTP calls from a user interface.
Let’s say that in our system that we may have some domain logic rules based on customer data that we could use to try to prioritize an incident automatically once the incident is categorized. To that end, let’s create a new command named `TryAssignPriority` like this:
public class TryAssignPriority
{
    public Guid IncidentId { get; set; }
}
We’d like to kick off this work any time an incident is categorised, but we might not necessarily want to do that work within the scope of the web request that’s capturing the CategoriseIncident command. Partly this is to offload work from the web server for scalability, partly to keep the user interface as responsive as possible by not making it wait on slower processing, but mostly because I want an excuse to introduce Wolverine’s ability to asynchronously process work through local, in-memory queues.
Most of the code in this post is an intermediate form that I’m using just to introduce concepts in the simplest way I can think of. In later posts I’ll show more idiomatic Wolverine ways to do things to arrive at the final version that is in GitHub.
Alright, now that we’ve got our new command class above, let’s publish that locally through Wolverine by breaking into our earlier CategoriseIncidentHandler that I’ll show here in a “before” state:
public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();

    [AggregateHandler]
    public static IEnumerable<object> Handle(CategoriseIncident command, IncidentDetails existing)
    {
        if (existing.Category != command.Category)
        {
            yield return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }
    }
}
In this next version, I’m going to add a single call to Wolverine’s main IMessageBus entry point to publish the new TryAssignPriority command message:
public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();

    [AggregateHandler]
    // The object? as return value will be interpreted
    // by Wolverine as appending one or zero events
    public static async Task<object?> Handle(
        CategoriseIncident command,
        IncidentDetails existing,
        IMessageBus bus)
    {
        if (existing.Category != command.Category)
        {
            // Send the message to any and all subscribers to this message
            await bus.PublishAsync(new TryAssignPriority { IncidentId = existing.Id });

            return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }

        // Wolverine will interpret this as "do no work"
        return null;
    }
}
I didn’t do anything that is necessarily out of order here. We haven’t built a message handler for TryAssignPriority or done anything to register subscribers, but that can come later because the PublishAsync() call up above will quietly do nothing if there are no known subscribers for the message.
For asynchronous messaging veterans out there, I will discuss Wolverine’s support for a transactional outbox in a later post. For now, just know that there’s at the very least an in-memory outbox around any message handler that will not send out any pending published messages until after the original message is successfully handled. If you’re not familiar with the “transactional outbox” pattern, please come back to read the follow up post on that later, because you absolutely need to understand it to use asynchronous messaging infrastructure like Wolverine.
Next, let’s just add a skeleton message handler for our TryAssignPriority command message in the root API project:
public static class TryAssignPriorityHandler
{
    public static void Handle(TryAssignPriority command)
    {
        Console.WriteLine("Hey, somebody wants me to prioritize incident " + command.IncidentId);
    }
}
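As an aside, because Wolverine handlers are just plain static methods, even this skeleton is immediately unit testable without any framework machinery. A quick sketch (the test class name and xUnit usage here are mine, not from the repository):

```csharp
using System;
using System.IO;
using Xunit;

public class TryAssignPriorityHandlerTests
{
    [Fact]
    public void writes_the_expected_message()
    {
        // Capture console output so we can assert on the handler's side effect
        var output = new StringWriter();
        Console.SetOut(output);

        var command = new TryAssignPriority { IncidentId = Guid.NewGuid() };

        // No IoC container or Wolverine runtime required, the handler
        // is just a static method we can call directly
        TryAssignPriorityHandler.Handle(command);

        Assert.Contains(command.IncidentId.ToString(), output.ToString());
    }
}
```

This “handlers are just methods” property becomes a lot more interesting once the handlers return events and messages as pure functions, which is coming later in the series.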
Switching to the command line (you may need to have the PostgreSQL database running for this next thing to work #sadtrombone), I’m going to call dotnet run -- describe to preview my help desk API a little bit.
Under the section of the textual output with the header “Wolverine Message Routing”, you’ll see the message routing tree for Wolverine’s known message types:
As you can hopefully see in that table up above, just by the fact that Wolverine “knows” there is a handler in the local application for the TryAssignPriority message type, it’s going to route messages of that type to a local queue where it will be executed later in a separate thread.
Don’t worry, this conventional routing, the parallelization settings, and just about anything you can think of is configurable, but let’s mostly stay with defaults for right now.
Switching to the Wolverine configuration in the Program file, here’s a little taste of some of the ways we could control the exact parameters of the asynchronous processing for this local, in memory queue:
builder.Host.UseWolverine(opts =>
{
    // more configuration...

    // Adding a single Rabbit MQ messaging rule
    opts.PublishMessage<RingAllTheAlarms>()
        .ToRabbitExchange("notifications");

    opts.LocalQueueFor<TryAssignPriority>()
        // By default, local queues allow for parallel processing with a maximum
        // parallel count equal to the number of processors on the executing
        // machine, but you can override the queue to be sequential and single file
        .Sequential()

        // Or add more to the maximum parallel count!
        .MaximumParallelMessages(10);

    // Or if so desired, you can route specific messages to
    // specific local queues when ordering is important
    opts.Policies.DisableConventionalLocalRouting();
    opts.Publish(x =>
    {
        x.Message<TryAssignPriority>();
        x.Message<CategoriseIncident>();

        x.ToLocalQueue("commands").Sequential();
    });
});
Summary and What’s Next
Through its local queues function, Wolverine has very strong support for managing asynchronous work within a local process. Any of Wolverine’s message handling capability is usable within these local queues. You also have complete control over the parallelization of the messages being handled in these local queues.
This functionality does raise a lot of questions that I will try to answer in subsequent posts in this series:
For the sake of system consistency, we absolutely have to talk about Wolverine’s transactional outbox support
How we can use Wolverine’s integration testing support to test our system even when it is spawning additional messages that may be handled asynchronously
Wolverine’s ability to automatically forward captured events in Marten to message handlers for side effects
How to utilize Wolverine’s “special sauce” to craft message handlers as pure functions that are more easily unit tested than what we have so far
Wolverine’s built in Open Telemetry support to trace the asynchronous work end to end
Wolverine’s error handling policies to make our system as resilient as possible
Thanks for reading! I’ve been pleasantly surprised how well this series has been received so far. I think this will be the last entry until after Christmas, but I think I will write at least 7-8 more just to keep introducing bits of Critter Stack capabilities in small bites. In the meantime, Merry Christmas and Happy Holidays to you all!
Before I go on with anything else in this series, I think we should establish some automated testing infrastructure for our incident tracking, help desk service. While we’re absolutely going to talk about how to structure code with Wolverine to make isolated unit testing as easy as possible for our domain logic, there are some elements of your system’s behavior that are best tested with automated integration tests that use the system’s infrastructure.
In this post I’m going to show you how I like to set up an integration testing harness for a “Critter Stack” service. I’m going to use xUnit.Net in this post, and while the mechanics would be a little different, I think the basic concepts should be easily transferable to other testing libraries like NUnit or MSTest. I’m also going to bring in the Alba library that we’ll use for testing HTTP calls through our system in memory, but in this first step, all you need to understand is that Alba is helping to set up the system under test in our testing harness.
Heads up a little bit, I’m skipping to the “finished” state of the help desk API code in this post, so there’s some Marten and Wolverine concepts sneaking in that haven’t been introduced until now.
First, let’s start our new testing project with:
dotnet new xunit
Then add some additional Nuget references:
dotnet add package Shouldly
dotnet add package Alba
That gives us a skeleton of the testing project. Before going on, we need to add a project reference from our new testing project to the entry point project of our help desk API. As we are worried about integration testing right now, we’re going to want the testing project to be able to start the system under test project up by calling the normal Program.Main() entrypoint so that we’re running the application the way that the system is normally configured — give or take a few overrides.
Let’s stop and talk about this a little bit because I think this is an important point. I think integration tests are more “valid” (i.e. less prone to false positives or false negatives) as they more closely reflect the actual system. I don’t want completely separate bootstrapping for the test harness that may or may not reflect the application’s production bootstrapping (don’t blow that point off, I’ve seen countless teams do partial IoC configuration for testing that can vary quite a bit from the application’s configuration).
So if you’ll accept my argument that we should be bootstrapping the system under test with its own Program.Main() entry point, our next step is to add this code to the main service to enable the test project to access that entry point:
using System.Runtime.CompilerServices;
// You have to do this in order to reference the Program
// entry point in the test harness
[assembly:InternalsVisibleTo("Helpdesk.Api.Tests")]
Switching finally to our testing project, I like to create a class I usually call AppFixture that manages the lifetime of the system under test running in our test project like so:
public class AppFixture : IAsyncLifetime
{
    public IAlbaHost Host { get; private set; }

    // This is a one time initialization of the
    // system under test before the first usage
    public async Task InitializeAsync()
    {
        // Sorry folks, but this is absolutely necessary if you
        // use Oakton for command line processing and want to
        // use WebApplicationFactory and/or Alba for integration testing
        OaktonEnvironment.AutoStartHost = true;

        // This is bootstrapping the actual application using
        // its implied Program.Main() set up
        // This is using a library named "Alba". See https://jasperfx.github.io/alba for more information
        Host = await AlbaHost.For<Program>(x =>
        {
            x.ConfigureServices(services =>
            {
                // We'll be using Rabbit MQ messaging later...
                services.DisableAllExternalWolverineTransports();

                // We're going to establish some baseline data
                // for testing
                services.InitializeMartenWith<BaselineData>();
            });
        }, new AuthenticationStub());
    }

    public Task DisposeAsync()
    {
        if (Host != null)
        {
            return Host.DisposeAsync().AsTask();
        }

        return Task.CompletedTask;
    }
}
A few notes about the code above:
Alba is using the WebApplicationFactory under the covers to bootstrap our help desk API service using the in memory TestServer in place of Kestrel. WebApplicationFactory does allow us to modify the IoC service registrations for our system and override parts of the system’s normal configuration
In this case, I’m telling Wolverine to effectively stub out all external transports. In later posts we’ll use Rabbit MQ, for example, to publish messages to an external process, but in this test harness we’re going to turn that off and simply have Wolverine “catch” the outgoing messages in our tests. See Wolverine’s test automation support documentation for more information about this.
The DisposeAsync() method is very important. If you want to make your integration tests be repeatable and run smoothly as you iterate, you need the tests to clean up after themselves and not leave locks on resources like ports or files that could stop the next test run from functioning correctly
Pay attention to the `OaktonEnvironment.AutoStartHost = true;` call, that’s 100% necessary if your application is using Oakton for command parsing. Sorry.
As will be inevitably necessary, I’m using Alba’s facility for stubbing out web authentication, which allows us to sidestep pesky authentication infrastructure in functional testing while also happily letting us pass along user claims as test inputs in individual tests
Bootstrapping the IHost for your application can be expensive, so I prefer to share that host across tests whenever possible, and I generally rely on having individual tests establish their inputs at beginning of each test. See the xUnit.Net documentation on sharing fixtures between tests for more context about the xUnit mechanics.
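One mechanical detail of that fixture sharing that is easy to miss: the `[Collection("integration")]` attribute used on the test base class only works if the test project also declares a matching xUnit collection definition tying the collection name to AppFixture. Assuming standard xUnit conventions (the class name here is my own), it looks like this:

```csharp
using Xunit;

// Ties the "integration" collection name used on our test classes to the
// shared AppFixture, so xUnit builds AppFixture exactly once for the whole
// collection and disposes it after the last test in the collection runs
[CollectionDefinition("integration")]
public class IntegrationCollection : ICollectionFixture<AppFixture>
{
}
```

Because every test class in the "integration" collection shares one AppFixture, xUnit also runs those classes serially rather than in parallel, which fits the shared-database approach described below.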
For the Marten baseline data, right now I’m just making sure there’s at least one valid Customer document that we’ll need later:
public class BaselineData : IInitialData
{
    public static Guid Customer1Id { get; } = Guid.NewGuid();

    public async Task Populate(IDocumentStore store, CancellationToken cancellation)
    {
        await using var session = store.LightweightSession();
        session.Store(new Customer
        {
            Id = Customer1Id,
            Region = "West Coast",
            Duration = new ContractDuration(DateOnly.FromDateTime(DateTime.Today.Subtract(100.Days())), DateOnly.FromDateTime(DateTime.Today.Add(100.Days())))
        });

        await session.SaveChangesAsync(cancellation);
    }
}
To simplify the usage a little bit, I like to have a base class for integration tests that I like to call IntegrationContext:
[Collection("integration")]
public abstract class IntegrationContext : IAsyncLifetime
{
    private readonly AppFixture _fixture;

    protected IntegrationContext(AppFixture fixture)
    {
        _fixture = fixture;
    }

    // more....

    public IAlbaHost Host => _fixture.Host;
    public IDocumentStore Store => _fixture.Host.Services.GetRequiredService<IDocumentStore>();

    async Task IAsyncLifetime.InitializeAsync()
    {
        // Using Marten, wipe out all data and reset the state
        // back to exactly what we described in BaselineData
        await Store.Advanced.ResetAllData();
    }

    // This is required because of the IAsyncLifetime
    // interface. Note that I do *not* tear down database
    // state after the test. That's purposeful
    public Task DisposeAsync()
    {
        return Task.CompletedTask;
    }

    // This is just delegating to Alba to run HTTP requests
    // end to end
    public async Task<IScenarioResult> Scenario(Action<Scenario> configure)
    {
        return await Host.Scenario(configure);
    }

    // This method allows us to make HTTP calls into our system
    // in memory with Alba, but do so within Wolverine's test support
    // for message tracking to both record outgoing messages and to ensure
    // that any cascaded work spawned by the initial command is completed
    // before passing control back to the calling test
    protected async Task<(ITrackedSession, IScenarioResult)> TrackedHttpCall(Action<Scenario> configuration)
    {
        IScenarioResult result = null;

        // The outer part is tying into Wolverine's test support
        // to "wait" for all detected message activity to complete
        var tracked = await Host.ExecuteAndWaitAsync(async () =>
        {
            // The inner part here is actually making an HTTP request
            // to the system under test with Alba
            result = await Host.Scenario(configuration);
        });

        return (tracked, result);
    }
}
The first thing I want to draw your attention to is the call to await Store.Advanced.ResetAllData(); in the InitializeAsync() method that will be called before each of our integration tests executes. In my approach, I strongly prefer to reset the state of the database before each test in order to start from a known system state. I’m also assuming that each test, if necessary, will add additional state to the system’s Marten database for its own needs. This is philosophically what I’ve long called “Self-Contained Tests.” I also think it’s important to have the tests leave the database state alone after a test run so that if you are running tests one at a time, you can use the leftover database state to help troubleshoot why a test might have failed.
Other folks will try to spin up a separate database (maybe with TestContainers) per test, or even a completely separate IHost per test, but I think the cost of doing it that way is just too slow. I’d rather reset the system between tests than incur the cost of recycling database containers and/or the system’s IHost. This does come at the cost of forcing your test suite to run in serial, but I also don’t think xUnit.Net is particularly strong at parallel test runs, so I’m not sure you lose much there.
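If you do go this route, you’ll want to keep xUnit.Net from running test classes in parallel against the shared database. One way to wire that up — this is a sketch of standard xUnit.Net mechanics, not code from the sample repository — is to disable parallelization at the assembly level and share a single AppFixture across test classes through a collection fixture:

```csharp
using Xunit;

// Opt the whole test assembly out of running test
// collections in parallel against the shared database
[assembly: CollectionBehavior(DisableTestParallelization = true)]

// Share one AppFixture (and therefore one IHost and one
// database) across every test class in the named collection
[CollectionDefinition("integration")]
public class IntegrationCollection : ICollectionFixture<AppFixture>
{
}

// Test classes then opt into the shared fixture like so:
// [Collection("integration")]
// public class log_incident : IntegrationContext { ... }
```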
And now for an actual test. Early on we built an HTTP endpoint that can process a LogIncident command and create a new event stream for the new Incident with a single IncidentLogged event. I’ve skipped ahead a little bit and added a requirement that we capture a user id from an expected Claim on the ClaimsPrincipal for the current request, which you’ll see reflected in the test below:
public class log_incident : IntegrationContext
{
    public log_incident(AppFixture fixture) : base(fixture)
    {
    }

    [Fact]
    public async Task create_a_new_incident()
    {
        // We'll need a user
        var user = new User(Guid.NewGuid());

        // Log a new incident by calling the HTTP
        // endpoint in our system
        var initial = await Scenario(x =>
        {
            var contact = new Contact(ContactChannel.Email);
            x.Post.Json(new LogIncident(BaselineData.Customer1Id, contact, "It's broken")).ToUrl("/api/incidents");
            x.StatusCodeShouldBe(201);
            x.WithClaim(new Claim("user-id", user.Id.ToString()));
        });

        var incidentId = initial.ReadAsJson<NewIncidentResponse>().IncidentId;

        using var session = Store.LightweightSession();

        // FetchStreamAsync() returns the raw IEvent wrappers,
        // so assert against the Data of the first event
        var events = await session.Events.FetchStreamAsync(incidentId);
        var logged = events.First().Data.ShouldBeOfType<IncidentLogged>();

        // This deserves more assertions, but you get the point...
        logged.CustomerId.ShouldBe(BaselineData.Customer1Id);
    }
}
Summary and What’s Next
The “Critter Stack” core team and our community care very deeply about effective testing, so we’ve invested from the very beginning in making integration testing as easy as possible with both Marten and Wolverine.
Alba is another little library from the JasperFx family that makes it easier to write integration tests at the HTTP layer, and it’s perfect for integration testing of your web services. I definitely find it advantageous to be able to quickly bootstrap a web service project and run tests completely in memory on demand. That’s a much easier and quicker feedback cycle than trying to deploy the service and write tests that interact with it remotely over HTTP. And I shouldn’t even have to mention how absurdly slow it is by comparison to test the same web service functionality through the actual user interface with something like Selenium.
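If you haven’t seen Alba before, bootstrapping your real application in memory is close to a one-liner against the implicit Program class from your top-level Program.cs. A minimal sketch — the route shown here is hypothetical, not an endpoint from the sample app:

```csharp
using Alba;

// Boot the actual application, in memory, with Alba
await using var host = await AlbaHost.For<Program>();

// Run an HTTP request end to end and assert on the response
await host.Scenario(x =>
{
    x.Get.Url("/api/incidents");   // hypothetical route
    x.StatusCodeShouldBeOk();
});
```

The AppFixture class shown earlier in this post is doing essentially this once per test run, then reusing the resulting IAlbaHost across tests.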
From the Marten side of things, PostgreSQL has a small Docker image, so it’s pretty painless to spin up on development boxes. Especially contrasted with situations where development teams share a centralized development database (shudder, hope not many folks still do that), having an isolated database per developer that they can tear down and rebuild at will certainly makes it a lot easier to succeed with automated integration testing.
I think that document databases in general are a lot easier to deal with in automated testing than a relational database fronted by an ORM, because there’s much less friction in setting up database schemas or tearing down database state. Marten goes a step further than most persistence tools by having built-in APIs to tear down database state or reset to baseline data sets in between tests.
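Those built-in APIs hang off of IDocumentStore.Advanced. A quick sketch of what’s available beyond the ResetAllData() call used in AppFixture — the Incident document type here is just an illustration:

```csharp
using Marten;

public static class TestCleanupSamples
{
    public static async Task CleanUp(IDocumentStore store)
    {
        // Selectively tear down database state between tests
        await store.Advanced.Clean.DeleteAllDocumentsAsync();
        await store.Advanced.Clean.DeleteAllEventDataAsync();
        await store.Advanced.Clean.DeleteDocumentsByTypeAsync(typeof(Incident));

        // Or wipe everything and re-apply any registered
        // IInitialData baseline sets in one call
        await store.Advanced.ResetAllData();
    }
}
```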
We’ll dig deeper into Wolverine’s integration testing support later in this series with message handler testing, testing handlers that in turn spawn other messages, and dealing with external messaging in tests.
I think the next post is just going to be a quick survey of “Marten as Document Database” before I get back to Wolverine’s HTTP endpoint model.