Wolverine’s HTTP Model Does More For You

One of the things I’m wrestling with right now is frankly how to sell Wolverine as a server side toolset. Yes, it’s technically a messaging library like MassTransit or NServiceBus. It can also be used as “just” a mediator tool like MediatR. With Wolverine.HTTP, it’s even an alternative HTTP endpoint framework competing with FastEndpoints, MVC Core, or Minimal API. You’ve got to categorize Wolverine somehow, and we humans naturally understand something new by comparing it to some older thing we’re already familiar with. In the case of Wolverine, though, comparing it to any of the older application frameworks I rattled off above drastically sells the toolset short, because Wolverine fundamentally does much more to remove code ceremony, improve testability throughout your codebase, and generally let you focus on core application functionality.

This post was triggered by a conversation I had with a friend last week who told me he was happy with his current toolset for HTTP API creation and couldn’t imagine how Wolverine’s HTTP endpoint model could possibly reduce his efforts. Challenge accepted!

For just this moment, consider a simplistic HTTP service that works on this little entity:

public record Counter(Guid Id, int Count);

Now, let’s build an HTTP endpoint that will:

  1. Receive route arguments for the Counter.Id and the current tenant id — let’s say that we’re using multi-tenancy with a separate database per tenant
  2. Try to load the existing Counter entity by its id from the right tenant database
  3. If the entity doesn’t exist, return a 404 status code and get out of there
  4. If the entity does exist, increment the Count property, save the entity back to the database, and return a 204 status code for a successful request with an empty body

Just to make it easier on me because I already had this example code, we’re going to use Marten for persistence, which happens to have much stronger multi-tenancy built in than EF Core. Knowing all that, here’s a sample MVC Core controller to implement the functionality I described above:

public class CounterController : ControllerBase
{
    [HttpPost("/api/tenants/{tenant}/counters/{id}")]
    [ProducesResponseType(204)] // empty response
    [ProducesResponseType(404)]
    public async Task<IResult> Increment(
        Guid id, 
        string tenant, 
        [FromServices] IDocumentStore store)
    {
        // Open a Marten session for the right tenant database
        await using var session = store.LightweightSession(tenant);
        var counter = await session.LoadAsync<Counter>(id, HttpContext.RequestAborted);
        if (counter == null)
        {
            return Results.NotFound();
        }
        else
        {
            counter = counter with { Count = counter.Count + 1 };

            // Marten's lightweight sessions don't track changes, so
            // explicitly store the updated document before saving
            session.Store(counter);
            await session.SaveChangesAsync(HttpContext.RequestAborted);

            // 204 No Content, matching the [ProducesResponseType(204)] above
            return Results.NoContent();
        }
    }
}

I’m completely open to recreating the multi-tenancy support from the Marten + Wolverine combo for EF Core and SQL Server through Wolverine, but I’m shamelessly waiting until another company is willing to engage with JasperFx Software to deliver that.

Alright, now let’s switch over to using Wolverine.HTTP with its WolverineFx.Http.Marten add-on NuGet package. Let’s drink some Wolverine koolaid and write a functionally identical endpoint the Wolverine way:

You need Wolverine 2.7.0 for this by the way!

    [WolverinePost("/api/tenants/{tenant}/counters/{id}")]
    public static IMartenOp Increment([Document(Required = true)] Counter counter)
    {
        counter = counter with { Count = counter.Count + 1 };
        return MartenOps.Store(counter);
    }

Seriously, this is the same functionality and even the same generated OpenAPI documentation. Some things to note:

  • Wolverine is able to derive much more of the OpenAPI documentation from the type signatures and from policies applied to the endpoint method, like…
  • The usage of the Document(Required = true) tells Wolverine that it will be trying to load a document of type Counter from Marten, and by default it’s going to do that through a route argument named “id”. The Required property tells Wolverine to return a 404 NotFound status code automatically if the Counter document doesn’t exist. This attribute usage also applies some OpenAPI smarts to tag the route as potentially returning a 404
  • The return value of the method is an IMartenOp “side effect” just saying “go save this document,” which Wolverine will do as part of this endpoint execution. Using the side effect makes this method a nice, simple pure function that’s completely synchronous. No wrestling with async Task, await, or schlepping around CancellationToken every which way
  • Because Wolverine can see there will not be any kind of response body, it’s going to use a 204 status code to denote the empty body and tag the OpenAPI with that as well.
  • There is absolutely zero Reflection happening at runtime because Wolverine is generating and compiling code at runtime (or ahead of time for faster cold starts) that “bakes” in all of this knowledge for fast execution
  • Wolverine + Marten has far more robust support for multi-tenancy all the way through the technology stack than any other application framework I know of in .NET (web frameworks, mediators, or messaging libraries). You can see that in the code above: Marten & Wolverine already know how to detect the tenant in an HTTP request and do all the wiring for you through the whole stack, so you can focus on just writing business functionality.
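That synchronous pure function also pays off in testing. Here’s a minimal sketch of a state-based unit test, assuming xUnit and Shouldly, and assuming that MartenOps.Store() returns Wolverine.Marten’s StoreDoc&lt;T&gt; wrapper so we can crack open the side effect:

```csharp
using System;
using Shouldly;
using Wolverine.Marten;
using Xunit;

public class CounterEndpointTests
{
    [Fact]
    public void increment_bumps_the_count_and_stores_the_document()
    {
        var counter = new Counter(Guid.NewGuid(), 5);

        // Just call the method -- no HTTP, no database, no mocks
        var op = CounterEndpoint.Increment(counter);

        // The returned side effect carries the updated document that
        // Wolverine would hand off to Marten at runtime
        var updated = op.ShouldBeOfType<StoreDoc<Counter>>().Document;
        updated.Count.ShouldBe(6);
    }
}
```

No IDocumentSession stubs, no HttpContext fakery, just input in and observable outcome out.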

To make this all more concrete, here’s the generated code:

// <auto-generated/>
#pragma warning disable
using Microsoft.AspNetCore.Routing;
using System;
using System.Linq;
using Wolverine.Http;
using Wolverine.Marten.Publishing;
using Wolverine.Runtime;

namespace Internal.Generated.WolverineHandlers
{
    // START: POST_api_tenants_tenant_counters_id_inc2
    public class POST_api_tenants_tenant_counters_id_inc2 : Wolverine.Http.HttpHandler
    {
        private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
        private readonly Wolverine.Runtime.IWolverineRuntime _wolverineRuntime;
        private readonly Wolverine.Marten.Publishing.OutboxedSessionFactory _outboxedSessionFactory;

        public POST_api_tenants_tenant_counters_id_inc2(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Runtime.IWolverineRuntime wolverineRuntime, Wolverine.Marten.Publishing.OutboxedSessionFactory outboxedSessionFactory) : base(wolverineHttpOptions)
        {
            _wolverineHttpOptions = wolverineHttpOptions;
            _wolverineRuntime = wolverineRuntime;
            _outboxedSessionFactory = outboxedSessionFactory;
        }



        public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
        {
            var messageContext = new Wolverine.Runtime.MessageContext(_wolverineRuntime);
            // Building the Marten session
            await using var documentSession = _outboxedSessionFactory.OpenSession(messageContext);
            if (!System.Guid.TryParse((string)httpContext.GetRouteValue("id"), out var id))
            {
                httpContext.Response.StatusCode = 404;
                return;
            }


            var counter = await documentSession.LoadAsync<Wolverine.Http.Tests.Bugs.Counter>(id, httpContext.RequestAborted).ConfigureAwait(false);
            // 404 if this required object is null
            if (counter == null)
            {
                httpContext.Response.StatusCode = 404;
                return;
            }

            
            // The actual HTTP request handler execution
            var martenOp = Wolverine.Http.Tests.Bugs.CounterEndpoint.Increment(counter);

            if (martenOp != null)
            {
                
                // Placed by Wolverine's ISideEffect policy
                martenOp.Execute(documentSession);

            }

            
            // Commit any outstanding Marten changes
            await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);

            
            // Have to flush outgoing messages just in case Marten did nothing because of https://github.com/JasperFx/wolverine/issues/536
            await messageContext.FlushOutgoingMessagesAsync().ConfigureAwait(false);

            // Wolverine automatically sets the status code to 204 for empty responses
            if (!httpContext.Response.HasStarted) httpContext.Response.StatusCode = 204;
        }

    }

    // END: POST_api_tenants_tenant_counters_id_inc2
    
    
}

Summary

Wolverine isn’t “just another messaging library / mediator / HTTP endpoint alternative.” Rather, Wolverine is a completely different animal that, while fulfilling those application framework roles for server side .NET, potentially does a helluva lot more than older frameworks to help you write systems that are maintainable, testable, and resilient. And it does all of that with a lot less of the typical “Clean/Onion/Hexagonal Architecture” cruft that shines in software conference talks and YouTube videos, but helps lead teams into a morass of unmaintainable code in larger real-world systems.

But yes, the Wolverine community needs to find a better way to communicate how Wolverine adds value above and beyond the more traditional server side application frameworks in .NET. I’m completely open to suggestions — and fully aware that some folks won’t like the “magic” in the “drank all the Wolverine Koolaid” approach I used.

You can of course use Wolverine with 100% explicit code and none of the magic.

Wolverine’s New PostgreSQL Messaging Transport

Wolverine just got a new PostgreSQL-backed messaging transport (with the work sponsored by a JasperFx Software client!). The use case is just this: say you’re already using Wolverine to build a system with PostgreSQL as your backing database, and you want to introduce some asynchronous, background processing into your system, which you could already do with just a database-backed, local queue. Going farther though, let’s say that we’d like a competing consumers setup for our queueing to load balance between active nodes, and we’d like to do that without having to introduce some kind of new message broker infrastructure into our existing architecture.

Then it’s time to bring in Wolverine’s new option for asynchronous messaging using just our existing PostgreSQL database. To set that up by itself (without using Marten, but we’ll get to that in a second), it’s these couple lines of code:

var builder = WebApplication.CreateBuilder(args);
var connectionString = builder.Configuration.GetConnectionString("postgres");

builder.Host.UseWolverine(opts =>
{
    // Setting up Postgresql-backed message storage
    // This requires a reference to Wolverine.Postgresql
    opts.PersistMessagesWithPostgresql(connectionString);

    // Other Wolverine configuration
});

Of course, you’d want to set up PostgreSQL queues for Wolverine to send to and to listen to for messages to process. That’s shown below:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine((context, opts) =>
    {
        var connectionString = context.Configuration.GetConnectionString("postgres");
        opts.UsePostgresqlPersistenceAndTransport(connectionString, "myapp")
            
            // Tell Wolverine to build out all necessary queue or scheduled message
            // tables on demand as needed
            .AutoProvision()
            
            // Optional that may be helpful in testing, but probably bad
            // in production!
            .AutoPurgeOnStartup();

        // Use this extension method to create subscriber rules
        opts.PublishAllMessages().ToPostgresqlQueue("outbound");

        // Use this to set up queue listeners
        opts.ListenToPostgresqlQueue("inbound")
            
            .CircuitBreaker(cb =>
            {
                // fine tune the circuit breaker
                // policies here
            })
            
            // Optionally specify how many messages to 
            // fetch into the listener at any one time
            .MaximumMessagesToReceive(50);
    }).StartAsync();

And that’s that, we’re completely set up for messaging via the PostgreSQL database we already have with our Wolverine application!

Just a couple things to note before you run off and try to use this:

  • As I alluded to earlier, the PostgreSQL queueing mechanism supports competing consumers, so different nodes at runtime can be pulling and processing messages from the PostgreSQL queues
  • There is a separate set of tables for each named queue (one for the actual inbound/outbound messages, and a separate table to segregate “scheduled” messages). Utilize that separation for better performance as needed by effectively sharding the message transfers
  • As that previous bullet point implies, the PostgreSQL transport is able to support scheduled message delivery
  • As in most cases, Wolverine is able to detect whether or not the necessary tables all exist in your database, and create any missing tables for you at runtime
  • In the case of using Wolverine with Marten multi-tenancy through separate databases, the queue tables will exist in all tenant databases
  • There are some optimizations and integration between these queues and Wolverine’s transactional inbox/outbox support that improve performance by reducing database chattiness wherever possible
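To round out the picture, here’s a hedged sketch of what actually flows through those queues. The GenerateInvoice message and its handler are hypothetical names of my own invention, but the convention-based handler shape is standard Wolverine:

```csharp
using System;
using Microsoft.Extensions.Logging;

// A hypothetical message we'd like processed in the background
public record GenerateInvoice(Guid OrderId);

public static class GenerateInvoiceHandler
{
    // Wolverine discovers Handle() methods by naming convention. With a
    // subscription rule like ToPostgresqlQueue("outbound"), published
    // messages ride through the PostgreSQL queue tables, and any node
    // listening on the matching queue can pick them up (competing consumers)
    public static void Handle(GenerateInvoice message, ILogger logger)
    {
        logger.LogInformation(
            "Generating invoice for order {OrderId}", message.OrderId);
    }
}

// Elsewhere in application code, publish through Wolverine's IMessageBus:
// await bus.PublishAsync(new GenerateInvoice(orderId));
```

The handler code is identical whether the message travels through PostgreSQL, Rabbit MQ, or an in-process local queue, which is what makes this transport an easy drop-in later if you outgrow it.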

Summary

I’m not sure I’d recommend this approach over dedicated messaging infrastructure for high volumes of messages, but it’s a way to get things done with less infrastructure in some cases and it’s a valuable tool in the Wolverine toolbox.

Building a Critter Stack Application: Vertical Slice Architecture

Hey, did you know that JasperFx Software is ready to offer formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to helping you succeed with your software development projects on a consulting basis, whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture (this post)
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

I’m taking a short detour in this series as I prepare to give my “Contrarian Architecture” talk at the CodeMash 2024 conference today. In that talk (here’s a version from NDC Oslo 2023), I’m going to spend some time more or less bashing stereotypical usages of the Clean or Onion Architecture prescriptive approach.

While there’s nothing to prevent you from using either Wolverine or Marten within a typical Clean Architecture style code organization, the “Critter Stack” plays well within a lower code ceremony vertical slice architecture that I personally prefer.

First though, let’s talk about what I don’t like about the stereotypical Clean/Onion Architecture approach you commonly find in enterprise .NET systems. With this common mode of code organization, the incident tracking help desk service we have been building in this series might be organized something like:

Class Name           Project
IncidentController   HelpDesk.API
IncidentService      HelpDesk.ServiceLayer
Incident             HelpDesk.Domain
IncidentRepository   HelpDesk.Data

(Don’t laugh, because a lot of people do this.)

This kind of code structure is primarily organized around the “nouns” of the system and reliant on the formal layering prescriptions to try to create a healthy separation of concerns. It’s probably perfectly fine for pure CRUD applications, but breaks down very badly over time for more workflow centric applications.

I despise this form of code organization in very large systems because:

  1. It scatters closely related code throughout the codebase
  2. You typically don’t spend a lot of time trying to reason about an entire layer at a time. Instead, you’re largely worried about the behavior of one single use case and the logical flow through the entire stack for that one use case
  3. The code layout tells you very little about what the application does as it’s primarily focused around technical concerns (hat tip to David Whitney for that insight)
  4. It’s high ceremony. Lots of layers, interfaces, and just a lot of stuff
  5. Abstractions around the low level persistence infrastructure can very easily lead you to poorly performing code and can make it much harder later to understand why code is performing poorly in production

Shifting to the Idiomatic Wolverine Approach

Let’s say that we’re sitting around a fire boasting of our victories in software development (that’s a lie, I’m telling horror stories about the worst systems I’ve ever seen) and you ask me “Jeremy, what is best in code?”

And I’d respond:

  • Low ceremony code that’s easy to read and write
  • Closely related code is close together
  • Unrelated code is separated
  • Code is organized around the “verbs” of the system, which in the case of Wolverine probably means the commands
  • The code structure by itself gives some insight into what the system actually does

Taking our LogIncident command, I’m going to put every drop of code related to that command in a single file called “LogIncident.cs”:

public record LogIncident(
    Guid CustomerId,
    Contact Contact,
    string Description
)
{
    public class LogIncidentValidator : AbstractValidator<LogIncident>
    {
        // I stole this idea of using inner classes to keep them
        // close to the actual model from *someone* online,
        // but don't remember who
        public LogIncidentValidator()
        {
            RuleFor(x => x.Description).NotEmpty().NotNull();
            RuleFor(x => x.Contact).NotNull();
        }
    }
};

public record NewIncidentResponse(Guid IncidentId) 
    : CreationResponse("/api/incidents/" + IncidentId);

public static class LogIncidentEndpoint
{
    [WolverineBefore]
    public static async Task<ProblemDetails> ValidateCustomer(
        LogIncident command, 
        
        // Method injection works just fine within middleware too
        IDocumentSession session)
    {
        var exists = await session.Query<Customer>().AnyAsync(x => x.Id == command.CustomerId);
        return exists
            ? WolverineContinue.NoProblems
            : new ProblemDetails { Detail = $"Unknown customer id {command.CustomerId}", Status = 400};
    }
    
    [WolverinePost("/api/incidents")]
    public static (NewIncidentResponse, IStartStream) Post(LogIncident command, User user)
    {
        var logged = new IncidentLogged(
            command.CustomerId, 
            command.Contact, 
            command.Description, 
            user.Id);

        var op = MartenOps.StartStream<Incident>(logged);
        
        return (new NewIncidentResponse(op.StreamId), op);
    }

}

Every single bit of code related to handling this operation in our system is in one file that we can read top to bottom. A few significant points about this code:

  • I think it’s working out well in other Wolverine systems to largely name the files based on command names or the request body models for HTTP endpoints. At least with systems being built with a CQRS approach. Using the command name allows the system to be more self descriptive when you’re just browsing the codebase for the first time
  • The behavioral logic is still isolated to the Post() method, and even though there is some direct data access in the same class in its ValidateCustomer() method, the Post() method is a pure function that can be unit tested without any mocks
  • There’s also no code unrelated to LogIncident anywhere in this file, so you bypass the problem you get in noun-centric code organizations where you have to train your brain to ignore a lot of unrelated code in an IncidentService that has nothing to do with the particular operation you’re working on at any one time
  • I’m not bothering to wrap any kind of repository abstraction around Marten’s IDocumentSession in this code sample. That’s not to say that I wouldn’t do so in the case of something more complicated, and especially if there’s some kind of complex set of data queries that would need to be reused in other commands
  • You can clearly see the cause and effect between the command input and any outcomes of that command. I think this is an important discussion all by itself because it can easily be hard to reason about that same kind of cause and effect in systems that split responsibilities within a single use case across different areas of the code and even across different projects or components. Codebases that are hard to reason about are very prone to regression errors down the line — and that’s the voice of painful experience talking.

I certainly wouldn’t use this “single file” approach on larger, more complex use cases, but it’s working out well for early Wolverine adopters so far. Since much of my criticism of Clean/Onion Architecture approaches is really about using prescriptive rules too literally, I would also say that I would deviate from this “single file” approach any time it was valuable to reuse code across commands or queries or just when the message handling for a single message gets complex enough to need or want other files to separate responsibilities just within that one use case.
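As an aside, the FluentValidation validator nested inside LogIncident doesn’t run by magic. One way to wire it into the HTTP pipeline is Wolverine’s FluentValidation middleware; this is a sketch assuming the separate WolverineFx.Http.FluentValidation add-on package is referenced and that the validators are registered in the container:

```csharp
// In Program.cs. Assumes the WolverineFx.Http.FluentValidation package,
// plus validator registration in the container (e.g. FluentValidation's
// assembly scanning registration)
app.MapWolverineEndpoints(opts =>
{
    // Runs any registered IValidator<T> for the request body type as
    // middleware before the endpoint method; validation failures
    // short-circuit the request with a ProblemDetails response
    opts.UseFluentValidationProblemDetailMiddleware();
});
```

That keeps the validation rules in the same LogIncident.cs file as everything else for the command, while the actual execution is a cross-cutting policy configured exactly once.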

Summary and What’s Next

Wolverine is optimized for a “Vertical Slice Architecture” code organization approach. Both Marten and Wolverine are meant to require as little code ceremony as possible, and that’s what makes the vertical slice architecture, and even the single file approach I showed here, feasible.

I’m not 100% sure what I’ll tackle next in this series, but roughly I’m still planning:

  • The “stateful resource” model in the Critter Stack for infrastructure resource setup and teardown we use to provide that “it just works” experience
  • External messaging with Rabbit MQ
  • Wolverine’s resiliency and error handling capabilities
  • Logging, observability, Open Telemetry, and metrics from Wolverine
  • Subscribing to Marten events

Building a Critter Stack Application: Wolverine HTTP Endpoints

Hey, did you know that JasperFx Software is ready to offer formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to helping you succeed with your software development projects on a consulting basis, whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints (this post)
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

Heretofore in this series, I’ve been using ASP.NET MVC Core controllers anytime we’ve had to build HTTP endpoints for our incident tracking, help desk system, in order to introduce new concepts a little more slowly.

If you would, let’s refer back to an earlier incarnation of an HTTP endpoint to handle our LogIncident command from an earlier post in this series:

public class IncidentController : ControllerBase
{
    private readonly IDocumentSession _session;
 
    public IncidentController(IDocumentSession session)
    {
        _session = session;
    }
 
    [HttpPost("/api/incidents")]
    public async Task<IResult> Log(
        [FromBody] LogIncident command
        )
    {
        var userId = currentUserId();
        var logged = new IncidentLogged(command.CustomerId, command.Contact, command.Description, userId);
 
        var incidentId = _session.Events.StartStream(logged).Id;
        await _session.SaveChangesAsync(HttpContext.RequestAborted);
 
        return Results.Created("/incidents/" + incidentId, incidentId);
    }
 
    private Guid currentUserId()
    {
        // let's say that we do something here that "finds" the
        // user id as a Guid from the ClaimsPrincipal
        var userIdClaim = User.FindFirst("user-id");
        if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var id))
        {
            return id;
        }
 
        throw new UnauthorizedAccessException("No user");
    }
}

Just to be as clear as possible here, the Wolverine HTTP endpoints feature introduced in this post can be mixed and matched with MVC Core and/or Minimal API or even FastEndpoints within the same application and routing tree. I think the ASP.NET team deserves some serious credit for making that last sentence a fact.

Today though, let’s use Wolverine HTTP endpoints and rewrite that controller method above the “Wolverine way.” To get started, add a Nuget reference to the help desk service like so:

dotnet add package WolverineFx.Http

Next, let’s break into our Program file and add Wolverine endpoints to our routing tree near the bottom of the file like so:

app.MapWolverineEndpoints(opts =>
{
    // We'll add a little more in a bit...
});

// Just to show where the above code is within the context
// of the Program file...
return await app.RunOaktonCommands(args);

Now, let’s make our first cut at a Wolverine HTTP endpoint for the LogIncident command, but I’m purposely going to do it without introducing a lot of new concepts, so please bear with me a bit:

public record NewIncidentResponse(Guid IncidentId) 
    : CreationResponse("/api/incidents/" + IncidentId);

public static class LogIncidentEndpoint
{
    [WolverinePost("/api/incidents")]
    public static NewIncidentResponse Post(
        // No [FromBody] stuff necessary
        LogIncident command,
        
        // Service injection is automatic,
        // just like message handlers
        IDocumentSession session,
        
        // You can take in an argument for HttpContext
        // or immediate members of HttpContext
        // as method arguments
        ClaimsPrincipal principal)
    {
        // Some ugly code to find the user id
        // within a claim for the currently authenticated
        // user
        Guid userId = Guid.Empty;
        var userIdClaim = principal.FindFirst("user-id");
        if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var claimValue))
        {
            userId = claimValue;
        }
        
        var logged = new IncidentLogged(command.CustomerId, command.Contact, command.Description, userId);

        var id = session.Events.StartStream<Incident>(logged).Id;

        return new NewIncidentResponse(id);
    }
}

Here are a few salient facts about the code above to explain what it’s doing:

  • The [WolverinePost] attribute tells Wolverine that hey, this method is an HTTP handler, and Wolverine will discover this method and add it to the application’s endpoint routing tree at bootstrapping time.
  • Just like Wolverine message handlers, the endpoint methods are flexible and Wolverine generates code around your code to mediate between the raw HttpContext for the request and your code
  • We have already enabled Marten transactional middleware for our message handlers in an earlier post, and that happily applies to Wolverine HTTP endpoints as well. That helps make our endpoint method just a simple synchronous method, with the transactional middleware dealing with the ugly asynchronous stuff for us.
  • You can “inject” HttpContext and its immediate children into the method signatures as I did with the ClaimsPrincipal up above
  • Method injection is automatic without any silly [FromServices] attributes, and that’s what’s happening with the IDocumentSession argument
  • The LogIncident parameter is assumed to be the HTTP request body due to being the first argument, and it will be deserialized from the incoming JSON in the request body just like you’d probably expect
  • The NewIncidentResponse type is roughly the equivalent to using Results.Created() in Minimal API to create a response body with the url of the newly created Incident stream and an HTTP status code of 201 for “Created.” What’s different about Wolverine.HTTP is that it can infer OpenAPI documentation from the signature of that type without requiring you to pollute your code by manually adding [ProducesResponseType] attributes on the method to get a “proper” OpenAPI document for the endpoint.

Moving on, that user id detection from the ClaimsPrincipal looks a little bit ugly to me, and it’s likely to be repetitive. Let’s ameliorate that by introducing Wolverine’s flavor of HTTP middleware and moving that code to this class:

// Using the custom type makes it easier
// for the Wolverine code generation to route
// things around. I'm not ashamed.
public record User(Guid Id);

public static class UserDetectionMiddleware
{
    public static (User, ProblemDetails) Load(ClaimsPrincipal principal)
    {
        var userIdClaim = principal.FindFirst("user-id");
        if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var id))
        {
            // Everything is good, keep on trucking with this request!
            return (new User(id), WolverineContinue.NoProblems);
        }
        
        // Nope, nope, nope. We got problems, so stop the presses and emit a ProblemDetails response
        // with a 400 status code telling the caller that there's no valid user for this request
        return (new User(Guid.Empty), new ProblemDetails { Detail = "No valid user", Status = 400});
    }
}

Do note the usage of ProblemDetails in that middleware. If there is no user-id claim on the ClaimsPrincipal, we’ll abort the request by writing out the ProblemDetails stating there’s no valid user. This pattern is baked into Wolverine.HTTP to help create one off request validations. We’ll utilize this quite a bit more later.
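Incidentally, because Load() is just a static pure function, the middleware itself is trivially unit testable too. A minimal sketch, assuming xUnit and Shouldly:

```csharp
using System;
using System.Security.Claims;
using Shouldly;
using Wolverine.Http;
using Xunit;

public class UserDetectionMiddlewareTests
{
    [Fact]
    public void finds_the_user_from_a_valid_claim()
    {
        var id = Guid.NewGuid();
        var principal = new ClaimsPrincipal(new ClaimsIdentity(
            new[] { new Claim("user-id", id.ToString()) }));

        var (user, problems) = UserDetectionMiddleware.Load(principal);

        user.Id.ShouldBe(id);

        // "No problems" signals Wolverine to continue on to the endpoint
        problems.ShouldBeSameAs(WolverineContinue.NoProblems);
    }

    [Fact]
    public void missing_claim_stops_the_request_with_a_400()
    {
        // No "user-id" claim at all on this principal
        var (_, problems) = UserDetectionMiddleware.Load(new ClaimsPrincipal());

        problems.Status.ShouldBe(400);
    }
}
```

Again, no mocks anywhere, just inputs and outputs.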

Next, I need to add that new bit of middleware to our application. As a shortcut, I’m going to just add it to every single Wolverine HTTP endpoint by breaking back into our Program file and adding this line of code:

app.MapWolverineEndpoints(opts =>
{
    // We'll add a little more in a bit...
    
    // Creates a User object in HTTP requests based on
    // the "user-id" claim
    opts.AddMiddleware(typeof(UserDetectionMiddleware));
});

Now, back to our endpoint code and I’ll take advantage of that middleware by changing the method to this:

    [WolverinePost("/api/incidents")]
    public static NewIncidentResponse Post(
        // No [FromBody] stuff necessary
        LogIncident command,
        
        // Service injection is automatic,
        // just like message handlers
        IDocumentSession session,
        
        // This will be created for us through the new user detection
        // middleware
        User user)
    {
        var logged = new IncidentLogged(
            command.CustomerId, 
            command.Contact, 
            command.Description, 
            user.Id);

        var id = session.Events.StartStream<Incident>(logged).Id;

        return new NewIncidentResponse(id);
    }

This is a little bit of a bonus, but let’s also get rid of the need to inject the Marten IDocumentSession service by using a Wolverine “side effect” with this equivalent code:

    [WolverinePost("/api/incidents")]
    public static (NewIncidentResponse, IStartStream) Post(LogIncident command, User user)
    {
        var logged = new IncidentLogged(
            command.CustomerId, 
            command.Contact, 
            command.Description, 
            user.Id);

        var op = MartenOps.StartStream<Incident>(logged);
        
        return (new NewIncidentResponse(op.StreamId), op);
    }

In the code above I’m using the MartenOps.StartStream() method to return a “side effect” that will create a new Marten stream as part of the request instead of directly interacting with the IDocumentSession from Marten. That’s a small thing you might not care for, but it can lead to the elimination of mock objects within your unit tests as you can now write a state-based test directly against the method above like so:

public class LogIncident_handling
{
    [Fact]
    public void handle_the_log_incident_command()
    {
        // This is trivial, but the point is that 
        // we now have a pure function that can be
        // unit tested by pushing inputs in and measuring
        // outputs without any pesky mock object setup
        var contact = new Contact(ContactChannel.Email);
        var theCommand = new LogIncident(BaselineData.Customer1Id, contact, "It's broken");

        var theUser = new User(Guid.NewGuid());

        var (_, stream) = LogIncidentEndpoint.Post(theCommand, theUser);

        // Test the *decision* to emit the correct
        // events and make sure all that pesky left/right
        // hand mapping is correct
        var logged = stream.Events.Single()
            .ShouldBeOfType<IncidentLogged>();
        
        logged.CustomerId.ShouldBe(theCommand.CustomerId);
        logged.Contact.ShouldBe(theCommand.Contact);
        logged.LoggedBy.ShouldBe(theUser.Id);
    }
}

Hey, let’s add some validation too!

We’ve already introduced middleware, so let’s just incorporate the popular Fluent Validation library into our project and let it do some basic validation on the incoming LogIncident command body, and if any validation fails, pull the ripcord and parachute out of the request with a ProblemDetails body and 400 status code that describes the validation errors.

Let’s add that in by first adding some pre-packaged middleware for Wolverine.HTTP with:

dotnet add package WolverineFx.Http.FluentValidation

Next, I have to add the usage of that middleware through this new line of code:

app.MapWolverineEndpoints(opts =>
{
    // Direct Wolverine.HTTP to use Fluent Validation
    // middleware to validate any request bodies where
    // there's a known validator (or many validators)
    opts.UseFluentValidationProblemDetailMiddleware();
    
    // Creates a User object in HTTP requests based on
    // the "user-id" claim
    opts.AddMiddleware(typeof(UserDetectionMiddleware));
});

Next, let’s add an actual validator for LogIncident. In this case that model is just an internal concern of our service, so I’ll embed the new validator as an inner type of the command type like so:

public record LogIncident(
    Guid CustomerId,
    Contact Contact,
    string Description
)
{
    public class LogIncidentValidator : AbstractValidator<LogIncident>
    {
        // I stole this idea of using inner classes to keep them
        // close to the actual model from *someone* online,
        // but don't remember who
        public LogIncidentValidator()
        {
            RuleFor(x => x.Description).NotEmpty().NotNull();
            RuleFor(x => x.Contact).NotNull();
        }
    }
};
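Because the validator is just a plain class, it can also be exercised directly in a unit test without any of the HTTP machinery. A quick sketch, using the same Shouldly-style assertions as the other tests in this post (the test class name here is just made up for illustration):

```csharp
public class LogIncidentValidator_specs
{
    [Fact]
    public void description_is_required()
    {
        var validator = new LogIncident.LogIncidentValidator();
        var contact = new Contact(ContactChannel.Email);

        // Fluent Validation's Validate() runs all the configured rules
        // and reports any failures on the result
        var result = validator.Validate(
            new LogIncident(Guid.NewGuid(), contact, ""));

        result.IsValid.ShouldBeFalse();
    }
}
```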

Now, Wolverine does have to “know” about these validators to use them within the endpoint handling, so these types need to be registered in the application’s IoC container against the right IValidator<T> interface. You can do that registration yourself, but Wolverine has a (Lamar) helper to find and register these validators within your project and do so in a way that’s most efficient at runtime (i.e., there’s a micro optimization to give these validators a singleton lifetime in the container if Wolverine can see that the types are stateless). I’ll use that little helper in our Program file within the UseWolverine() configuration like so:

builder.Host.UseWolverine(opts =>
{
    // lots more stuff unfortunately, but focus on the line below
    // just for now:-)
    
    // Apply the validation middleware *and* discover and register
    // Fluent Validation validators
    opts.UseFluentValidation();

});
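For the curious, the manual equivalent of that helper is just a standard service registration against IValidator<T>. A sketch, assuming the nested validator type from above:

```csharp
// What UseFluentValidation() effectively discovers and registers for you:
// the validator is registered against IValidator<LogIncident> with a
// singleton lifetime since it's stateless
builder.Services
    .AddSingleton<IValidator<LogIncident>, LogIncident.LogIncidentValidator>();
```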

And that’s that. We’ve now got Fluent Validation in the request handling for the LogIncident command. In a later section, I’ll explain how Wolverine does this, and try to sell you all on the idea that Wolverine is able to do this more efficiently than other commonly used frameworks *cough* MediatR *cough* that depend on conditional runtime code.
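Using the Alba test harness that shows up in the integration testing section later in this post, a quick sanity check of the new validation middleware might look roughly like this sketch:

```csharp
[Fact]
public async Task log_incident_with_empty_description_is_rejected()
{
    await Scenario(x =>
    {
        var contact = new Contact(ContactChannel.Email);

        // The empty Description should trip the Fluent Validation
        // middleware before the endpoint method ever runs
        x.Post.Json(new LogIncident(BaselineData.Customer1Id, contact, ""))
            .ToUrl("/api/incidents");
        x.StatusCodeShouldBe(400);

        x.WithClaim(new Claim("user-id", Guid.NewGuid().ToString()));
    });
}
```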

One off validation with “Compound Handlers”

As you might have noticed, the LogIncident command has a CustomerId property that we’re using as is within our HTTP handler. We should never just trust the inputs of a random client, so let’s at least validate that the command refers to a real customer.

Now, typically I like to make Wolverine message handler or HTTP endpoint methods be the “happy path” and handle exception cases and one off validations with a Wolverine feature we inelegantly call “compound handlers.”

I’m going to add a new method to our LogIncidentEndpoint class like so:

    // Wolverine has some naming conventions for Before/Load
    // or After/AfterAsync, but you can use a more descriptive
    // method name and help Wolverine out with an attribute
    [WolverineBefore]
    public static async Task<ProblemDetails> ValidateCustomer(
        LogIncident command, 
        
        // Method injection works just fine within middleware too
        IDocumentSession session)
    {
        var exists = await session.Query<Customer>().AnyAsync(x => x.Id == command.CustomerId);
        return exists
            ? WolverineContinue.NoProblems
            : new ProblemDetails { Detail = $"Unknown customer id {command.CustomerId}", Status = 400};
    }

Integration Testing

While the individual methods and middleware can all be tested separately, you do want to put everything together with an integration test to prove out whether or not all this magic really works. As I described in an earlier post where we learned how to use Alba to create an integration testing harness for a “critter stack” application, we can write an end to end integration test against the HTTP endpoint like so (this sample doesn’t cover every permutation, but hopefully you get the point):

    [Fact]
    public async Task create_a_new_incident_happy_path()
    {
        // We'll need a user
        var user = new User(Guid.NewGuid());
        
        // Log a new incident first
        var initial = await Scenario(x =>
        {
            var contact = new Contact(ContactChannel.Email);
            x.Post.Json(new LogIncident(BaselineData.Customer1Id, contact, "It's broken")).ToUrl("/api/incidents");
            x.StatusCodeShouldBe(201);
            
            x.WithClaim(new Claim("user-id", user.Id.ToString()));
        });

        var incidentId = initial.ReadAsJson<NewIncidentResponse>().IncidentId;

        using var session = Store.LightweightSession();
        var events = await session.Events.FetchStreamAsync(incidentId);
        var logged = events.First().ShouldBeOfType<IncidentLogged>();

        // This deserves more assertions, but you get the point...
        logged.CustomerId.ShouldBe(BaselineData.Customer1Id);
    }

    [Fact]
    public async Task log_incident_with_invalid_customer()
    {
        // We'll need a user
        var user = new User(Guid.NewGuid());
        
        // Reject the new incident because the Customer for 
        // the command cannot be found
        var initial = await Scenario(x =>
        {
            var contact = new Contact(ContactChannel.Email);
            var nonExistentCustomerId = Guid.NewGuid();
            x.Post.Json(new LogIncident(nonExistentCustomerId, contact, "It's broken")).ToUrl("/api/incidents");
            x.StatusCodeShouldBe(400);
            
            x.WithClaim(new Claim("user-id", user.Id.ToString()));
        });
    }
}

Um, how does this all work?

So far I’ve shown you some “magic” code, and that tends to really upset some folks. I also made some big time claims about how Wolverine is able to be more efficient at runtime (alas, there is a significant “cold start” problem you can easily work around, so don’t get upset if your first ever Wolverine request isn’t snappy).

Wolverine works by using code generation to wrap its handling code around your code. That includes the middleware, and the usage of any IoC services as well. Moreover, do you know what the fastest IoC container is in all the .NET land? I certainly think that Lamar is at least in the game for that one, but nope, the answer is no IoC container at runtime.

One of the advantages of this approach is that we can preview the generated code to unravel the “magic” and explain what Wolverine is doing at runtime. Moreover, we’ve tried to add descriptive comments to the generated code to further explain what and why code is in place.

See more about this in my post Unraveling the Magic in Wolverine.
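If you have Oakton wired up as your command line runner (as we did earlier in this series), you can dump that generated code yourself. The exact command names can vary a bit by version, but it’s roughly:

```shell
# Preview the source code that Wolverine generates for every
# message handler and HTTP endpoint in the application
dotnet run -- codegen preview

# Or write the generated code out to files on disk
dotnet run -- codegen write
```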

Here’s the generated code for our LogIncident endpoint (warning, ugly generated code ahead):

// <auto-generated/>
#pragma warning disable
using FluentValidation;
using Microsoft.AspNetCore.Routing;
using System;
using System.Linq;
using Wolverine.Http;
using Wolverine.Http.FluentValidation;
using Wolverine.Marten.Publishing;
using Wolverine.Runtime;

namespace Internal.Generated.WolverineHandlers
{
    // START: POST_api_incidents
    public class POST_api_incidents : Wolverine.Http.HttpHandler
    {
        private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
        private readonly Wolverine.Runtime.IWolverineRuntime _wolverineRuntime;
        private readonly Wolverine.Marten.Publishing.OutboxedSessionFactory _outboxedSessionFactory;
        private readonly FluentValidation.IValidator<Helpdesk.Api.LogIncident> _validator;
        private readonly Wolverine.Http.FluentValidation.IProblemDetailSource<Helpdesk.Api.LogIncident> _problemDetailSource;

        public POST_api_incidents(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Runtime.IWolverineRuntime wolverineRuntime, Wolverine.Marten.Publishing.OutboxedSessionFactory outboxedSessionFactory, FluentValidation.IValidator<Helpdesk.Api.LogIncident> validator, Wolverine.Http.FluentValidation.IProblemDetailSource<Helpdesk.Api.LogIncident> problemDetailSource) : base(wolverineHttpOptions)
        {
            _wolverineHttpOptions = wolverineHttpOptions;
            _wolverineRuntime = wolverineRuntime;
            _outboxedSessionFactory = outboxedSessionFactory;
            _validator = validator;
            _problemDetailSource = problemDetailSource;
        }



        public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
        {
            var messageContext = new Wolverine.Runtime.MessageContext(_wolverineRuntime);
            // Building the Marten session
            await using var documentSession = _outboxedSessionFactory.OpenSession(messageContext);
            // Reading the request body via JSON deserialization
            var (command, jsonContinue) = await ReadJsonAsync<Helpdesk.Api.LogIncident>(httpContext);
            if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;
            
            // Execute FluentValidation validators
            var result1 = await Wolverine.Http.FluentValidation.Internals.FluentValidationHttpExecutor.ExecuteOne<Helpdesk.Api.LogIncident>(_validator, _problemDetailSource, command).ConfigureAwait(false);

            // Evaluate whether or not the execution should be stopped based on the IResult value
            if (!(result1 is Wolverine.Http.WolverineContinue))
            {
                await result1.ExecuteAsync(httpContext).ConfigureAwait(false);
                return;
            }


            (var user, var problemDetails2) = Helpdesk.Api.UserDetectionMiddleware.Load(httpContext.User);
            // Evaluate whether the processing should stop if there are any problems
            if (!(ReferenceEquals(problemDetails2, Wolverine.Http.WolverineContinue.NoProblems)))
            {
                await WriteProblems(problemDetails2, httpContext).ConfigureAwait(false);
                return;
            }


            var problemDetails3 = await Helpdesk.Api.LogIncidentEndpoint.ValidateCustomer(command, documentSession).ConfigureAwait(false);
            // Evaluate whether the processing should stop if there are any problems
            if (!(ReferenceEquals(problemDetails3, Wolverine.Http.WolverineContinue.NoProblems)))
            {
                await WriteProblems(problemDetails3, httpContext).ConfigureAwait(false);
                return;
            }


            
            // The actual HTTP request handler execution
            (var newIncidentResponse_response, var startStream) = Helpdesk.Api.LogIncidentEndpoint.Post(command, user);

            
            // Placed by Wolverine's ISideEffect policy
            startStream.Execute(documentSession);

            // This response type customizes the HTTP response
            ApplyHttpAware(newIncidentResponse_response, httpContext);
            
            // Commit any outstanding Marten changes
            await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);

            
            // Have to flush outgoing messages just in case Marten did nothing because of https://github.com/JasperFx/wolverine/issues/536
            await messageContext.FlushOutgoingMessagesAsync().ConfigureAwait(false);

            // Writing the response body to JSON because this was the first 'return variable' in the method signature
            await WriteJsonAsync(httpContext, newIncidentResponse_response);
        }

    }

    // END: POST_api_incidents
    
    
}


Summary and What’s Next

The Wolverine.HTTP library was originally built to be a supplement to MVC Core or Minimal API by allowing you to create endpoints that integrated well into Wolverine’s messaging, transactional outbox functionality, and existing transactional middleware. It has since grown into more of a full-fledged alternative for building web services, but with potential for substantially less ceremony and far more testability than MVC Core.

In later posts I’ll talk more about the runtime architecture and how Wolverine squeezes out more performance by eliminating conditional runtime switching, reducing object allocations, and sidestepping the dictionary lookups that are endemic to other “flexible” .NET frameworks like MVC Core.

Wolverine.HTTP has not yet been used with Razor at all, and I’m not sure that will ever happen. Not to worry though, you can happily use Wolverine.HTTP in the same application with MVC Core controllers or even Minimal API endpoints.

OpenAPI support has been a constant challenge with Wolverine.HTTP as the OpenAPI generation in ASP.Net Core is very MVC-centric, but I think we’re in much better shape now.

In the next post, I think we’ll introduce asynchronous messaging with Rabbit MQ. At some point in this series I’m going to talk more about how the “Critter Stack” is well suited for a lower ceremony vertical slice architecture that (hopefully) creates a maintainable and testable codebase without all the typical Clean/Onion Architecture baggage that I could personally do without.

And just for fun…

My “History” with ASP.Net MVC

There’s no useful content in this section, just some navel-gazing. Even though I really haven’t had to use ASP.Net MVC too terribly much, I do have a long history with it:

  1. In the beginning, there was what we now call ASP Classic, and it was good. For that day and time anyway, when we would happily code directly in production, before TDD and SOLID and namby-pamby “source control.” (I started my development career in “Shadow IT” if that’s not obvious here.) And when we did use source control, it was VSS on the sly, because the official source control in the office was something far, far worse and COBOL-centric that I don’t think even exists any longer.
  2. Next there was ASP.Net WebForms and it was dreadful. I hated it.
  3. We started collectively learning about Agile and wanted to practice Test Driven Development, and began to hate WebForms even more
  4. Ruby on Rails came out in the mid-00s and made what later became the ALT.Net community absolutely loathe WebForms even more than we already did
  5. At an MVP Summit on the Microsoft campus, the one and only Scott Guthrie, the Gu himself, showed a very early prototype of ASP.Net MVC to a handful of us and I was intrigued. That continued onward through the official unveiling of MVC at the very first ALT.Net open spaces event in Austin in ’07.
  6. A few collaborators and I decided that early ASP.Net MVC was too high ceremony and went all “Captain Ahab” trying to make an alternative, open source framework called FubuMVC succeed, all while NancyFx, yet another Sinatra clone, became far more successful years before Microsoft finally got around to their own inevitable Sinatra clone (Minimal API)
  7. After .NET Core came along and made .NET a helluva lot better ecosystem, I decided that whatever, MVC Core is fine, it’s not going to be the biggest problem on our project, and if the client wants to use it, there’s no need to be upset about it. It’s fine, no really.
  8. MVC Core has gotten some incremental improvements over time that made it lower ceremony than earlier ASP.Net MVC, and that’s worth calling out as a positive
  9. People working with MVC Core started running into the problem of bloated controllers, and started using early MediatR as a way to kind of, sort of manage controller bloat by offloading it into focused command handlers. I mocked that approach mercilessly, but that was partially because of how awful a time I had helping folks do absurdly complicated middleware schemes with MediatR using StructureMap or Lamar (MVC Core + MediatR is probably worthwhile as a forcing function to avoid the controller bloat problems with MVC Core by itself)
  10. I worked on several long-running codebases built with MVC Core based on Clean Architecture templates that were ginormous piles of technical debt, and I absolutely blame MVC Core as a contributing factor for that
  11. I’m back to mildly disliking MVC Core (and I’m outright hostile to Clean/Onion templates). Not that you can’t write maintainable systems with MVC Core, but I think that its idiomatic usage can easily lead to unmaintainable systems. Let’s just say that I don’t think that MVC Core — and especially combined with some kind of Clean/Onion Architecture template as it very commonly is out in the wild — leads folks to the “pit of success” in the long run

Building a Critter Stack Application: Durable Outbox Messaging and Why You Care!

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to help you succeed with your software development projects on a consulting basis whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care! (this post)
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

As we layer in new technical concepts from both Wolverine and Marten to build out our incident tracking help desk API, recall this message handler from the last post that both saved data and published a message to an asynchronous, local queue to act upon the newly saved data at some point:

public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();
     
    [AggregateHandler]
    // The object? as return value will be interpreted
    // by Wolverine as appending one or zero events
    public static async Task<object?> Handle(
        CategoriseIncident command, 
        IncidentDetails existing,
        IMessageBus bus)
    {
        if (existing.Category != command.Category)
        {
            // Send the message to any and all subscribers to this message
            await bus.PublishAsync(
                new TryAssignPriority { IncidentId = existing.Id });
            return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }
 
        // Wolverine will interpret this as "do no work"
        return null;
    }
}

To recap, that message handler is potentially appending an IncidentCategorised event to an Incident event stream and publishing a command message named TryAssignPriority that will trigger a downstream action to try to assign a new priority to our Incident.

This relatively simple message handler (and we’ll make it even simpler in a later post in this series) creates a host of potential problems for our system:

  • In a naive usage of messaging tools, there’s a race condition between the outbound `TryAssignPriority` message being picked up by its handler and the database changes getting committed to the database. I have seen this cause nasty, hard to reproduce bugs in real life production applications when, once in a while, the message is processed before the database changes are made, and the system behaves incorrectly because the original command has not yet committed the expected data.
  • Maybe the actual message sending fails, but the database changes succeed, so the system is in an inconsistent state.
  • Maybe the outgoing message is happily published successfully, but the database changes fail, so that when the TryAssignPriority message is handled, it’s working against old system state.
  • Even if everything succeeds perfectly, the outgoing message should never actually be published until the transaction is complete.

To be clear, even without the usage of the outbox feature we’re about to use, Wolverine will apply an “in memory outbox” in message handlers such that all the messages published through IMessageBus.PublishAsync()/SendAsync()/etc. will be held in memory until the successful completion of the message handler. That by itself is enough to prevent the race condition between the database changes and the outgoing messages.
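The in memory outbox idea is simple enough to sketch in a few lines of plain C#. This is purely illustrative (Wolverine’s real implementation is far more involved), but it shows the essential move: buffer outgoing messages while the handler runs, and only release them if the handler completes successfully:

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: buffer "published" messages during handler
// execution and release them only if the handler succeeds
public class InMemoryOutboxSketch
{
    public List<string> Sent { get; } = new();

    public void Execute(Action<List<string>> handler)
    {
        var buffered = new List<string>();
        try
        {
            // the handler "publishes" into the buffer, not onto the wire
            handler(buffered);

            // the handler succeeded, so the messages actually go out
            Sent.AddRange(buffered);
        }
        catch
        {
            // the handler failed: the buffered messages are discarded,
            // so nothing leaks out for a failed unit of work
        }
    }
}
```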

At this point, let’s introduce Wolverine’s transactional outbox support that was built specifically to solve or prevent the potential problems I listed up above. In this case, Wolverine has transactional outbox & inbox support built into its integrations with PostgreSQL and Marten.

To rewind a little bit, in an earlier post where we first introduced the Marten + Wolverine integration, I had added a call to IntegrateWithWolverine() to the Marten configuration in our Program file:

using Wolverine.Marten;
 
var builder = WebApplication.CreateBuilder(args);
 
builder.Services.AddMarten(opts =>
{
    // This would be from your configuration file in typical usage
    opts.Connection(Servers.PostgresConnectionString);
    opts.DatabaseSchemaName = "wolverine_middleware";
})
    // This is the wolverine integration for the outbox/inbox,
    // transactional middleware, saga persistence we don't care about
    // yet
    .IntegrateWithWolverine()
     
    // Just letting Marten build out known database schema elements upfront
    // Helps with Wolverine integration in development
    .ApplyAllDatabaseChangesOnStartup();

Among other things, the call to IntegrateWithWolverine() up above directs Wolverine to use the PostgreSQL database for Marten as the durable storage for incoming and outgoing messages as part of Wolverine’s transactional inbox and outbox. The basic goal of this subsystem is to create consistency (really “eventual consistency“) between database transactions and outgoing messages without having to resort to endlessly painful distributed transactions.

Now, we’ve got another step to take. As of right now, Wolverine makes a determination of whether or not to use the durable outbox storage based on the destination of the outgoing message — with the theory that teams might easily want to mix and match durable messaging and less resource intensive “fire and forget” messaging within the same application. In this help desk service, we’ll make that easy and just say that all message processing in local queues (we set up TryAssignPriority to be handled through a local queue in the previous post) should be durable. In the UseWolverine() configuration, I’ll add this line of code to do that:

builder.Host.UseWolverine(opts =>
{
    // More configuration...

    // Automatic transactional middleware
    opts.Policies.AutoApplyTransactions();
    
    // Opt into the transactional inbox for local 
    // queues
    opts.Policies.UseDurableLocalQueues();
    
    // Opt into the transactional inbox/outbox on all messaging
    // endpoints
    opts.Policies.UseDurableOutboxOnAllSendingEndpoints();

    // Set up from the previous post
    opts.LocalQueueFor<TryAssignPriority>()
        // By default, local queues allow for parallel processing with a maximum
        // parallel count equal to the number of processors on the executing
        // machine, but you can override the queue to be sequential and single file
        .Sequential()

        // Or add more to the maximum parallel count!
        .MaximumParallelMessages(10);
});

I (Jeremy) may very well declare this “endpoint by endpoint” declaration of durability to have been a big mistake because it has confused some users, and vote to change it in a later version of Wolverine.

With this outbox functionality in place, the messaging and transaction workflow behind the scenes of that handler shown above is to:

  1. When the outgoing TryAssignPriority message is published, Wolverine will “route” that message into its internal Envelope structure that includes the message itself and all the necessary metadata and information Wolverine would need to actually send the message later
  2. The outbox integration will append the outgoing message as a pending operation to the current Marten session
  3. The IncidentCategorised event will be appended to the current Marten session
  4. The Marten session is committed (IDocumentSession.SaveChangesAsync()), which will persist the new event and a copy of the outgoing Envelope into the outbox or inbox tables (scheduled messages or messages to local queues are persisted in the incoming table) in one single, batched database command within a native PostgreSQL transaction.
  5. Assuming the database transaction succeeds, the outgoing messages are “released” to Wolverine’s outgoing message publishing in memory (we’re coming back to that last point in a bit)
  6. Once Wolverine is able to successfully publish the message to the outgoing transport, it will delete the database table record for that outgoing message.
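To make the workflow above a bit more concrete, here’s a plain C# sketch of steps 2 through 6, with made-up types standing in for Wolverine’s internals (the real Envelope handling, batching, and transaction management are much more sophisticated):

```csharp
using System;
using System.Collections.Generic;

// All of these types are hypothetical stand-ins for illustration only
public record EnvelopeSketch(Guid Id, object Message);

public class DurableOutboxSketch
{
    // stands in for the outgoing envelope table in PostgreSQL
    public Dictionary<Guid, EnvelopeSketch> Table { get; } = new();
    public List<object> Published { get; } = new();

    // Steps 2-4: the envelopes are persisted alongside the
    // application's own changes in the same transaction
    public void Commit(IEnumerable<EnvelopeSketch> outgoing)
    {
        foreach (var e in outgoing) Table[e.Id] = e;
    }

    // Steps 5-6: after a successful commit, release each message
    // to the transport, then delete its outbox record
    public void Drain()
    {
        foreach (var e in new List<EnvelopeSketch>(Table.Values))
        {
            Published.Add(e.Message); // "send" to the outgoing transport
            Table.Remove(e.Id);       // delete the database record
        }
    }
}
```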

The 4th point is important, I think. The close integration between Marten & Wolverine allows for more efficient processing by combining the database operations to minimize database round trips. In cases where the outgoing message transport is also batched (Azure Service Bus or AWS SQS, for example), the database command to delete messages is also optimized down to one call using PostgreSQL array support. The main point of bringing this up is just to say that there’s been quite a bit of thought and outright micro-optimization put into this infrastructure.

But what about…?

  • the process is shut down cleanly? Wolverine tries to “drain” all in flight work first, and then “release” that process’s ownership of the persisted messages
  • the process crashes before messages floating around the local queues or outgoing message publishing finishes? Wolverine is able to detect a “dormant node” and reassign the persisted incoming and outgoing messages to be processed by another node. Or in the case of a single node, restart that work when the process is restarted.
  • the Wolverine tables don’t yet exist in the database? Wolverine has similar database management to Marten (it’s all the shared Weasel library doing that behind the scenes) and will happily build out missing tables in its default setting
  • an application using a database per tenant multi-tenancy strategy? Wolverine creates separate inbox or outbox storage in each tenant database. It’s complicated and took quite a while to build, but it works. If no tenant is specified, the inbox/outbox in a “default” database is used
  • I need to use the outbox approach for consistency outside of a message handler, like when handling an HTTP request that happens to make both database changes and publish messages? That’s a really good question, and arguably one of the best reasons to use Wolverine over other .NET messaging tools because as we’ll see in later posts, that’s perfectly possible and quite easy. There is a recipe for using the Wolverine outbox functionality with MVC Core or Minimal API shown here.

Summary and What’s Next

The outbox (and closely related inbox) support is hugely important inside of any system that uses asynchronous messaging as a way of creating consistency and resiliency. Wolverine’s implementation is significantly different (and honestly more complicated) than typical implementations that depend on just polling from an outbound database table. That’s a positive in some ways because we believe that Wolverine’s approach is more efficient and will lead to greater throughput.

There is also similar inbox/outbox functionality and optimizations for Wolverine with EF Core using either PostgreSQL or Sql Server as the backing storage. In the future, I hope to see the EF Core and Sql Server support improve, but for right now, the Marten integration is getting the most attention and usage. I’d also love to see Wolverine grow to include support for alternative databases, with Azure Cosmos DB and AWS DynamoDB being leading contenders. We’ll see.

As for what’s next, let me figure out what sounds easy for the next post in January. In the meantime, Happy New Year’s everybody!

Wolverine’s HTTP Gets a Lot Better at OpenAPI (Swagger)

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to help you succeed with your software development projects on a consulting basis whether you’re using any part of the Critter Stack or any other .NET server side tooling. Reach us anytime at sales@jasperfx.net or on Discord!

I just published Wolverine 1.13.0 this evening with some significant improvements (see the release notes here). Beyond the normal scattering of bug fixes (and some significant improvements to the MQTT support in Wolverine for a JasperFx Software client who we’re helping build an IoT system), the main headline is that Wolverine does a substantially better job generating OpenAPI documentation for its HTTP endpoint model.

When I’m building web services of any kind, I tend to lean very hard into doing integration testing with Alba, and because of that, I also tend not to use Swashbuckle or an equivalent tool very often during development. That has apparently been a blind spot for me in building Wolverine.HTTP so far. To play out a typical conversation I frequently have with other server side .NET developers about tooling for web services, I think:

  1. MVC Core by itself — but this is hugely exacerbated by unfortunately popular prescriptive architectural patterns that organize code around NounController / NounService / NounRepository classes — can easily lead to unmaintainable code in bloated controller classes and plenty of work for software consultants who get brought in later to clean up after the system wildly outgrew the original team’s “Clean Architecture” approach
  2. I’m not convinced that Minimal API is any better for larger applications
  3. The MVC Core controllers delegating to an inner “mediator” tool strategy may help divide the code into more maintainable pieces, but it adds what I think is an unacceptable level of extra code ceremony. It’s also exacerbated by prescriptive architectures
  4. You should use Wolverine.HTTP! It’s much lower ceremony code than the “controllers + mediator” strategy, but still sets you up for a vertical slice architecture! And it integrates well with Marten or Wolverine messaging!

Other developers: This all sounds great! Pause. Hey, the web services with this thing seem to work just fine, but man, the Swashbuckle/NSwag/Angular client generation is all kinds of not good! I’m going back to “Wolverine as MediatR”.

To which I reply: no more of that after today, because the Wolverine HTTP OpenAPI generation just took a huge leap forward with the 1.13 release!

Here’s a sample of what I mean. From the Wolverine.HTTP test suite, here’s an endpoint method that uses Marten to load an Invoice document, modify it, then save it:

    [WolverinePost("/invoices/{invoiceId}/pay")]
    public static IMartenOp Pay([Document] Invoice invoice)
    {
        invoice.Paid = true;
        return MartenOps.Store(invoice);
    }

The [Document] attribute tells Wolverine to load the Invoice from Marten, and by convention it will match on the invoiceId route argument from the route pattern. That failed before in a few ways:

  1. Swashbuckle couldn’t be convinced that the Invoice argument wasn’t the request body
  2. If you omitted a Guid invoiceId argument from the method signature, Swashbuckle didn’t see invoiceId as a route parameter and wouldn’t let you supply it in the Swashbuckle page
  3. Swashbuckle definitely didn’t understand that IMartenOp is a specialized Wolverine side effect that shouldn’t be treated as the response body

Now though, that endpoint looks like this in Swashbuckle:

Which is now correct and actually usable! (The 404 is valid because there’s a route argument and that status is returned if the Invoice referred to by the invoiceId route argument does not exist).

To call out some of the improvements for Wolverine.HTTP users, the Swashbuckle generation now handles:

  • Route arguments that are used by Wolverine, but not necessarily in the main method signature. So no stupid, unused [FromRoute] string id method parameters
  • Querystring arguments are reflected in the Swashbuckle page
  • [FromHeader] arguments are reflected in Swashbuckle
  • HTTP endpoints that return some kind of tuple correctly show the response body if there is one — and that’s a commonly used and powerful capability of Wolverine’s HTTP endpoints that previously fouled up the OpenAPI generation
  • The usage of [EmptyResponse] correctly sets up the 204 status code behavior with no extraneous 200 or 404 status codes coming in by default
  • Ignoring method injected service parameters in the main method
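For instance, the [EmptyResponse] behavior mentioned above applies to endpoints like this hedged sketch. The Invoice type comes from the earlier example, but the route itself is made up:

```csharp
// Hedged sketch: marking a Wolverine.HTTP endpoint with [EmptyResponse]
// tells both Wolverine and the OpenAPI generation that the endpoint returns
// a 204 with no body, while the IMartenOp return value is understood to be
// a Marten side effect rather than a response body.
[WolverineDelete("/invoices/{invoiceId}")]
[EmptyResponse]
public static IMartenOp Delete([Document] Invoice invoice)
{
    return MartenOps.Delete(invoice);
}
```

With the 1.13 improvements, the generated OpenAPI for an endpoint like this should show the invoiceId route parameter, a 204 success status, and no phantom request or response body.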

For a little background, after getting plenty of helpful feedback from Wolverine users, I finally took some more serious time to investigate the problems and root causes. After digging much deeper into the AspNetCore and Swashbuckle internals, I came to the conclusion that the OpenAPI internals in AspNetCore are far too hard coded around MVC Core, and that Wolverine absolutely had to have its own provider for generating OpenAPI documents off of its own semantic model. Fortunately, AspNetCore and Swashbuckle are both open source, so I could easily get to the source code to reverse engineer what they do under the covers (plus JetBrains Rider is a rock star at disassembling code on the fly). Wolverine.HTTP 1.13 now registers its own strategy for generating the OpenAPI documentation for Wolverine endpoints and keeps the built in MVC Core-centric strategy from applying to those same endpoints.

I’m sure there will be other issues over time, but so far, this has addressed every known issue with our OpenAPI generation. I’m hoping this goes a long way toward removing impediments to more users adopting Wolverine.HTTP because as I’ve said before, I think the Wolverine model leads to much lower ceremony code, better testability over all, and potentially to significantly better maintainability of larger systems that today turn into huge messes with MVC Core.

Building a Critter Stack Application: Asynchronous Processing with Wolverine

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to help you succeed with your software development projects on a consulting basis whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine (this post)
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

As we continue to add new functionality to our incident tracking help desk system, we have been using Marten for persistence and Wolverine for command execution within MVC Core controllers (with cameos from Alba for testing support and Oakton for command line utilities).

In the workflow we’ve built out so far for the little system shown below, we’ve created a command called CategoriseIncident that for the moment is only sent to the system through HTTP calls from a user interface.

Let’s say that in our system that we may have some domain logic rules based on customer data that we could use to try to prioritize an incident automatically once the incident is categorized. To that end, let’s create a new command named `TryAssignPriority` like this:

public class TryAssignPriority
{
    public Guid IncidentId { get; set; }
}

We’d like to kick off this work any time an incident is categorized, but we might not necessarily want to do that work within the scope of the web request that’s capturing the CategoriseIncident command. Partly that’s a scalability concern, offloading work from the web server; partly it’s about making the user interface as responsive as possible by not forcing it to wait on slower web service responses; but mostly it’s because I want an excuse to introduce Wolverine’s ability to asynchronously process work through local, in memory queues.

Most of the code in this post is an intermediate form that I’m using just to introduce concepts in the simplest way I can think of. In later posts I’ll show more idiomatic Wolverine ways to do things to arrive at the final version that is in GitHub.

Alright, now that we’ve got our new command class above, let’s publish that locally through Wolverine by breaking into our earlier CategoriseIncidentHandler that I’ll show here in a “before” state:

public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();
    
    [AggregateHandler]
    public static IEnumerable<object> Handle(CategoriseIncident command, IncidentDetails existing)
    {
        if (existing.Category != command.Category)
        {
            yield return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }
    }
}

In this next version, I’m going to add a single call to Wolverine’s main IMessageBus entry point to publish the new TryAssignPriority command message:

public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();
    
    [AggregateHandler]
    // The object? as return value will be interpreted
    // by Wolverine as appending one or zero events
    public static async Task<object?> Handle(
        CategoriseIncident command, 
        IncidentDetails existing,
        IMessageBus bus)
    {
        if (existing.Category != command.Category)
        {
            // Send the message to any and all subscribers to this message
            await bus.PublishAsync(new TryAssignPriority { IncidentId = existing.Id });
            return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }

        // Wolverine will interpret this as "do no work"
        return null;
    }
}

I didn’t do anything that is necessarily out of order here. We haven’t built a message handler for TryAssignPriority or done anything to register subscribers, but that can come later because the PublishAsync() call up above will quietly do nothing if there are no known subscribers for the message.

For asynchronous messaging veterans out there, I will discuss Wolverine’s support for a transactional outbox in a later post. For now, just know that there’s at the very least an in-memory outbox around any message handler that will not send out any pending published messages until after the original message is successfully handled. If you’re not familiar with the “transactional outbox” pattern, please come back to read the follow up post on that later, because you absolutely need to understand it to use asynchronous messaging infrastructure like Wolverine.

Next, let’s add a skeleton message handler for our TryAssignPriority command message in the root API project:

public static class TryAssignPriorityHandler
{
    public static void Handle(TryAssignPriority command)
    {
        Console.WriteLine("Hey, somebody wants me to prioritize incident " + command.IncidentId);
    }
}

Switching to the command line (you may need to have the PostgreSQL database running for this next thing to work #sadtrombone), I’m going to call dotnet run -- describe to preview my help desk API a little bit.

Under the section of the textual output with the header “Wolverine Message Routing”, you’ll see the message routing tree for Wolverine’s known message types:

┌─────────────────────────────────┬──────────────────────────────────────────┬──────────────────┐
│ Message Type                    │ Destination                              │ Content Type     │
├─────────────────────────────────┼──────────────────────────────────────────┼──────────────────┤
│ Helpdesk.Api.CategoriseIncident │ local://helpdesk.api.categoriseincident/ │ application/json │
│ Helpdesk.Api.TryAssignPriority  │ local://helpdesk.api.tryassignpriority/  │ application/json │
└─────────────────────────────────┴──────────────────────────────────────────┴──────────────────┘

As you can hopefully see in that table up above, just by the fact that Wolverine “knows” there is a handler in the local application for the TryAssignPriority message type, it’s going to route messages of that type to a local queue where it will be executed later in a separate thread.

Don’t worry, this conventional routing, the parallelization settings, and just about anything you can think of is configurable, but let’s mostly stay with defaults for right now.

Switching to the Wolverine configuration in the Program file, here’s a little taste of some of the ways we could control the exact parameters of the asynchronous processing for this local, in memory queue:

builder.Host.UseWolverine(opts =>
{
    // more configuration...

    // Adding a single Rabbit MQ messaging rule
    opts.PublishMessage<RingAllTheAlarms>()
        .ToRabbitExchange("notifications");

    opts.LocalQueueFor<TryAssignPriority>()
        // By default, local queues allow for parallel processing with a maximum
        // parallel count equal to the number of processors on the executing
        // machine, but you can override the queue to be sequential and single file
        .Sequential()

        // Or add more to the maximum parallel count!
        .MaximumParallelMessages(10);
    
    // Or if so desired, you can route specific messages to 
    // specific local queues when ordering is important
    opts.Policies.DisableConventionalLocalRouting();
    opts.Publish(x =>
    {
        x.Message<TryAssignPriority>();
        x.Message<CategoriseIncident>();

        x.ToLocalQueue("commands").Sequential();
    });
});

Summary and What’s Next

Through its local queue functionality, Wolverine has very strong support for managing asynchronous work within a local process. All of Wolverine’s message handling capabilities are usable within these local queues. You also have complete control over the parallelization of the messages being handled in these local queues.

This functionality does raise a lot of questions that I will try to answer in subsequent posts in this series:

  • For the sake of system consistency, we absolutely have to talk about Wolverine’s transactional outbox support
  • How we can use Wolverine’s integration testing support to test our system even when it is spawning additional messages that may be handled asynchronously
  • Wolverine’s ability to automatically forward captured events in Marten to message handlers for side effects
  • How to utilize Wolverine’s “special sauce” to craft message handlers as pure functions that are more easily unit tested than what we have so far
  • Wolverine’s built in Open Telemetry support to trace the asynchronous work end to end
  • Wolverine’s error handling policies to make our system as resilient as possible

Thanks for reading! I’ve been pleasantly surprised how well this series has been received so far. I think this will be the last entry until after Christmas, but I think I will write at least 7-8 more just to keep introducing bits of Critter Stack capabilities in small bites. In the meantime, Merry Christmas and Happy Holidays to you all!

Building a Critter Stack Application: Marten as Document Database

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to help you succeed with your software development projects on a consulting basis whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database (this post)
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

So far, we’ve been completely focused on using Marten as an Event Store. While the Marten team is very committed to the event sourcing feature set, it’s pretty likely that you’ll have other data persistence needs in your system that won’t fit the event sourcing paradigm. Not to worry though, because Marten also has a very robust “PostgreSQL as Document Database” feature set that’s perfect for low friction data persistence outside of the event storage. We’ve even used it in earlier posts as Marten projections utilize Marten’s document database features when projections are running Inline or Async (i.e., not Live).

Since we’ve already got Marten integrated into our help desk application at this point, let’s just start with a document to represent customers:

public class Customer
{
    public Guid Id { get; set; }

    // We'll use this later for some "logic" about how incidents
    // can be automatically prioritized
    public Dictionary<IncidentCategory, IncidentPriority> Priorities { get; set; }
        = new();
    
    public string? Region { get; set; }
    
    public ContractDuration Duration { get; set; } 
}

public record ContractDuration(DateOnly Start, DateOnly End);

To be honest, I’m guessing at what a Customer might involve in the end, but it’s okay that I don’t know that upfront, because as we’ll see soon, Marten makes it very easy to evolve your persisted documents.
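For example, if we later decide a Customer needs another property, with Marten's default settings that is typically just an additive change to the class. This is a hedged sketch, and the new property is purely hypothetical:

```csharp
// Hedged sketch of evolving the document later. Documents persisted before
// this property existed simply deserialize with a default (null) value, so
// additive changes like this generally require no database migration.
public class Customer
{
    public Guid Id { get; set; }

    public Dictionary<IncidentCategory, IncidentPriority> Priorities { get; set; }
        = new();

    public string? Region { get; set; }

    public ContractDuration Duration { get; set; }

    // Added in a later iteration (hypothetical); older stored JSON that
    // lacks this field just comes back as null when loaded
    public string? AccountManager { get; set; }
}
```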

Having built the integration test harness for our application in the last post, let’s drop right into an integration test that persists a new Customer document object, and reloads a copy from the persisted data:

public class using_customer_document : IntegrationContext
{
    public using_customer_document(AppFixture fixture) : base(fixture)
    {
    }

    [Fact]
    public async Task persist_and_load_customer_data()
    {
        var customer = new Customer
        {
            Duration = new ContractDuration(new DateOnly(2023, 12, 1), new DateOnly(2024, 12, 1)),
            Region = "West Coast",
            Priorities = new Dictionary<IncidentCategory, IncidentPriority>
            {
                { IncidentCategory.Database, IncidentPriority.High }
            }
        };
        
        // As a convenience just because you'll use it so often in tests,
        // I made a property named "Store" on the base class for quick access to
        // the DocumentStore for the application
        // ALWAYS remember to dispose any sessions you open in tests!
        await using var session = Store.LightweightSession();
        
        // Tell Marten to save the new document
        session.Store(customer);

        // commit any pending changes
        await session.SaveChangesAsync();

        // Marten is assigning an Id for you when one doesn't already
        // exist, so that's where that value comes from
        var copy = await session.LoadAsync<Customer>(customer.Id);
        
        // Just proving to you that it's not the same object
        copy.ShouldNotBeSameAs(customer);
        
        copy.Duration.ShouldBe(customer.Duration);
    }
}

As long as the configured database for our help desk API is available, the test above will happily pass. I’d like to draw your attention to a few things about that test:

  • Notice that I didn’t have to make any changes to our application’s AddMarten() configuration in the Program file first because Marten is able to create storage for the new Customer document type on the fly when it first encounters it with its default settings
  • Marten is able to infer that the Id property of the new Customer type is the identity (that can be overridden), and when you add a new Customer document to the session that has an empty Guid as its Id, Marten will quickly assign and set a sequential Guid value for its identity. If you’re wondering, Marten can do this even if the property is scoped as private.
  • The Store() method is effectively an “upsert” that takes advantage of PostgreSQL’s very efficient, built in upsert syntax. Marten does also support explicit Insert and Update operations, but Store is just an easy default
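Those explicit operations look like this sketch, using the same IDocumentSession API as the test above. The customer variables are placeholders for documents created or loaded elsewhere:

```csharp
// Marten's explicit alternatives to the Store() upsert. Insert() will fail
// at SaveChangesAsync() if the document already exists, and Update() will
// fail if it does not; Store() happily does either. The existingCustomer
// and anotherCustomer variables are hypothetical placeholders.
await using var session = Store.LightweightSession();

session.Insert(new Customer { Region = "Midwest" }); // insert only
session.Update(existingCustomer);                    // update only
session.Store(anotherCustomer);                      // upsert

await session.SaveChangesAsync();
```

Reaching for Insert or Update can be a useful safeguard when accidentally overwriting an existing document (or creating a missing one) would be a bug in your domain.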

Behind the scenes, Marten is just serializing our document to JSON and storing that data in a PostgreSQL JSONB column type that will allow for efficient querying within the JSON body later (if you’re immediately asking “why isn’t this thing supporting Sql Server?!?”, it’s because only PostgreSQL has the JSONB type). If your document type can be round-tripped by either the venerable Newtonsoft.Json library or the newer System.Text.Json library, that document type can be persisted by Marten with zero explicit mapping.

In many cases, Marten’s approach to object persistence can lead to far less friction and boilerplate code than the equivalent functionality using EF Core, the .NET developer tool of choice. Moreover, using Marten requires a lot fewer database migrations as you change and evolve your document structure, giving developers far more ability to iterate on the shape of their persisted types than an ORM + relational database combination allows.

And of course, this is .NET, so Marten comes with LINQ support, meaning we can write queries like this:

        var results = await session.Query<Customer>()
            .Where(x => x.Region == "West Coast")
            .OrderByDescending(x => x.Duration.End)
            .ToListAsync();

As you’ll already know if you happen to follow me on Mastodon, we’re hopefully nearing the end of some very substantial improvements to the LINQ support for the forthcoming Marten v7 release.

While the document database feature set in Marten is pretty deep, the last thing I want to show in this post is that yes, you can create indexes within the JSON body for faster querying as needed. This time, I am going to go into the AddMarten() configuration in the Program file and add a little bit of code to index the Customer document on its Region field:

builder.Services.AddMarten(opts =>
{
    // other configuration...

    // This will create a btree index within the JSONB data
    opts.Schema.For<Customer>().Index(x => x.Region);
});

Summary and What’s Next

Once upon a time, Marten started with a pressing need to have a reliable, ACID-compliant document database feature set, and we originally chose PostgreSQL because of its unique JSON feature set. Almost on a lark, I added a nascent event sourcing capability before the original Marten 1.0 release. To my surprise, the event sourcing feature set is the main driver of Marten adoption by far, but Marten still has its original feature set to make the rock solid PostgreSQL database engine function as a document database for .NET developers.

Even in a system using event sourcing, there’s almost always some kind of relatively static reference data that’s better suited for Marten’s document database feature set or even going back to using PostgreSQL as the outstanding relational database engine that it is.

In the next post, now that we also know how to store and retrieve customer documents with Marten, we’re going to introduce Wolverine’s “compound handler” capability and see how that can help us factor our code into being very testable.

Building a Critter Stack Application: Wolverine’s Aggregate Handler Workflow FTW!

TL;DR: The full critter stack combo can make CQRS command handler code much simpler and easier to test than any other framework on the planet. Fight me.

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to help you succeed with your software development projects on a consulting basis whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW! (this post)
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

This series has been written partially in response to some constructive criticism that my writings on the “Critter Stack” suffered from introducing too many libraries or concepts all at once. As a reaction to that, this series is trying to only introduce one new capability or library at a time — which brought on some constructive criticism from someone else that the series isn’t making it obvious why anyone should care about the “Critter Stack” in the first place. So especially for Rob Conery, I give you:

Last time out we talked about using Marten’s facilities for optimistic concurrency or exclusive locking to protect our system from inconsistencies due to concurrent commands being processed against the same incident event stream. In the course of that post, I showed the code for a command handler for the CategoriseIncident command shown below that I purposely wrote in a long hand form as explicitly as possible to avoid introducing too many new concepts at once:

public static class LongHandCategoriseIncidentHandler
{
    // Stand-in for the id of the user making the change
    public static readonly Guid SystemId = Guid.NewGuid();

    public static async Task Handle(
        CategoriseIncident command, 
        IDocumentSession session, 
        CancellationToken cancellationToken)
    {
        var stream = await session
            .Events
            .FetchForWriting<IncidentDetails>(command.Id, cancellationToken);

        // Don't worry, we're going to clean this up later
        if (stream.Aggregate == null)
        {
            throw new ArgumentOutOfRangeException(nameof(command), "Unknown incident id " + command.Id);
        }
        
        // We need to validate whether this command actually 
        // should do anything
        if (stream.Aggregate.Category != command.Category)
        {
            var categorised = new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };

            stream.AppendOne(categorised);
            
            await session.SaveChangesAsync(cancellationToken);
        }
    }
}

Hopefully that code is relatively easy to follow, but it’s still pretty busy and there’s a mixture of business logic and fiddling with infrastructure code that’s not particularly helpful when the code inevitably gets more complicated than that as the requirements grow. As we’ll learn about later in this series, both Marten and Wolverine have some built in tooling to enable effective automated integration testing and do so much more effectively than just about any other tool out there. All the same though, you just don’t want to be testing the business logic by trudging through integration tests if you don’t have to (see my only rule of testing).

So let’s definitely look at how Wolverine plays nicely with Marten using its aggregate handler workflow recipe to simplify our handler for easier unit testing and just flat out cleaner code.

First off, I’m going to add the WolverineFx.Marten Nuget to our application:

dotnet add package WolverineFx.Marten

Next, break into our application’s Program file and add one call to the Marten configuration to incorporate some Wolverine goodness into Marten in our application:

builder.Services.AddMarten(opts =>
{
    // Existing Marten configuration...
})
    // This is a mild optimization
    .UseLightweightSessions()

    // Use this directive to add Wolverine transactional middleware for Marten
    // and the Wolverine transactional outbox support as well
    .IntegrateWithWolverine();

And now, let’s rewrite our CategoriseIncident command handler with a completely equivalent implementation using the “aggregate handler workflow” recipe:

public static class CategoriseIncidentHandler
{
    // Kinda faked, don't pay any attention to this please!
    public static readonly Guid SystemId = Guid.Parse("4773f679-dcf2-4f99-bc2d-ce196815dd29");

    // This Wolverine handler appends an IncidentCategorised event to an event stream
    // for the related IncidentDetails aggregate referred to by the CategoriseIncident.IncidentId
    // value from the command
    [AggregateHandler]
    public static IEnumerable<object> Handle(CategoriseIncident command, IncidentDetails existing)
    {
        if (existing.Category != command.Category)
        {
            // This event will be appended to the incident
            // stream after this method is called
            yield return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }
    }
}

In the handler method above, the presence of the [AggregateHandler] attribute directs Wolverine to wrap some middleware around the execution of our Handle() method that:

  • “Knows” the aggregate type in question is the second argument to the handler method, so in this case, IncidentDetails
  • Scans the CategoriseIncident type looking for a property that identifies the IncidentDetails (which makes it utilize the Id property in this case, but the docs spell out this convention in detail)
  • Does all the work to delegate and coordinate work in the logical command flow between the Marten infrastructure and our little bitty Handle() method

To visualize this, Wolverine is generating its own internal message handler for CategoriseIncident that has this simplified workflow:

And as a preview to a topic I’ll dive into in much more detail in a later post, here’s part of the (admittedly ugly in the way that only auto-generated code can be) C# code that Wolverine generates around our handler method:

public override async System.Threading.Tasks.Task HandleAsync(Wolverine.Runtime.MessageContext context, System.Threading.CancellationToken cancellation)
{
    // The actual message body
    var categoriseIncident = (Helpdesk.Api.CategoriseIncident)context.Envelope.Message;

    await using var documentSession = _outboxedSessionFactory.OpenSession(context);
    var eventStore = documentSession.Events;
    
    // Loading Marten aggregate
    var eventStream = await eventStore.FetchForWriting<Helpdesk.Api.IncidentDetails>(categoriseIncident.Id, categoriseIncident.Version, cancellation).ConfigureAwait(false);

    
    // The actual message execution
    var outgoing1 = Helpdesk.Api.CategoriseIncidentHandler.Handle(categoriseIncident, eventStream.Aggregate);

    if (outgoing1 != null)
    {
        
        // Capturing any possible events returned from the command handlers
        eventStream.AppendMany(outgoing1);

    }

    await documentSession.SaveChangesAsync(cancellation).ConfigureAwait(false);
}

And lastly, we’ve now reduced our CategoriseIncident command handler to the point where the code that we actually have to write is a pure function, meaning that it’s a simple matter of inputs and outputs with no dependency on any kind of stateful infrastructure. You should absolutely care about isolating business logic into pure functions, because that code becomes much easier to unit test.

And to prove that last statement, here’s what the unit tests for our Handle(CategoriseIncident, IncidentDetails) could look like using xUnit.Net and Shouldly:

public class CategoriseIncidentTests
{
    [Fact]
    public void raise_categorized_event_if_changed()
    {
        // Arrange
        var command = new CategoriseIncident
        {
            Category = IncidentCategory.Database
        };

        var details = new IncidentDetails(
            Guid.NewGuid(), 
            Guid.NewGuid(), 
            IncidentStatus.Closed, 
            new IncidentNote[0],
            IncidentCategory.Hardware);

        // Act
        var events = CategoriseIncidentHandler.Handle(command, details);

        // Assert
        var categorised = events.Single().ShouldBeOfType<IncidentCategorised>();
        categorised
            .Category.ShouldBe(IncidentCategory.Database);
    }

    [Fact]
    public void do_not_raise_event_if_the_category_would_not_change()
    {
        // Arrange
        var command = new CategoriseIncident
        {
            Category = IncidentCategory.Database
        };

        var details = new IncidentDetails(Guid.NewGuid(), Guid.NewGuid(), IncidentStatus.Closed, new IncidentNote[0],
            IncidentCategory.Database);

        // Act
        var events = CategoriseIncidentHandler.Handle(command, details);
        
        // Assert no events were appended
        events.ShouldBeEmpty();
    }
}
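Because the handler is a pure function of inputs to outputs, covering more category combinations is also trivial with a parameterized test. Here’s a quick sketch using xUnit’s [Theory] — the test class name is mine, but it reuses exactly the same types as the tests above:

```csharp
public class CategoriseIncidentTheoryTests
{
    [Theory]
    [InlineData(IncidentCategory.Hardware, IncidentCategory.Database, 1)] // category changes -> one event
    [InlineData(IncidentCategory.Database, IncidentCategory.Database, 0)] // no change -> no events
    public void event_is_raised_only_when_category_changes(
        IncidentCategory existingCategory,
        IncidentCategory commandCategory,
        int expectedEventCount)
    {
        // Arrange: same command + aggregate state as the tests above
        var command = new CategoriseIncident { Category = commandCategory };
        var details = new IncidentDetails(
            Guid.NewGuid(),
            Guid.NewGuid(),
            IncidentStatus.Closed,
            new IncidentNote[0],
            existingCategory);

        // Act: call the pure function directly
        var events = CategoriseIncidentHandler.Handle(command, details);

        // Assert on the number of events "decided" by the handler
        events.Count().ShouldBe(expectedEventCount);
    }
}
```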

In the unit test code above, we were able to exercise the decision about what events (if any) should be appended to the incident event stream without any dependency whatsoever on infrastructure. The easiest kind of unit test to write, and to read later, is a test with a clear relationship between the test inputs and outputs and minimal noise code for setting up state — and that’s exactly what we have up above. No mock object setup, no database state to establish, nothing. Just, “here’s the existing state and this command, now tell me what events should be appended.”

Summary and What’s Next

The full Critter Stack “aggregate handler workflow” recipe leads to very low ceremony code to implement command handlers within a CQRS style architecture. This recipe also leads to a code structure where your business logic is relatively easy to test with fast running unit testing. And we arrived at that point without having to watch umpteen hours of “Clean Architecture” YouTube snake oil videos, introducing a ton of “Ports and Adapter” style abstractions to clutter up our code, or scattering our code for the single CategoriseIncident message handler across 3-4 “Onion Architecture” projects within a massive .NET solution.

This approach was heavily inspired by the Decider pattern that originated for Event Sourcing within the F# community. But whereas the F# approach uses language tricks (and I don’t mean that pejoratively), Wolverine gets to a lower ceremony approach through that runtime code generation around our code.

If you look back at the sequence diagram up above that explains the control flow, you’ll see that Wolverine is purposely using Jim Shore’s idea of the “A-Frame Architecture” (which isn’t really an architectural style despite the name, so don’t try to make an apples to apples comparison between it and something more prescriptive like the Clean Architecture). In this approach, Wolverine deliberately decouples the Marten infrastructure from the CategoriseIncident handler that implements the business logic “deciding” what to do next, by mediating between Marten and the handler. The “A-Frame” name comes from visualizing that mediation like this (Wolverine calls into infrastructure services like Marten and into the business logic, so the domain logic doesn’t have to):

Now, there’s a lot more stuff that our command handlers may very well need to implement, including:

  • Message input validation
  • Instrumentation and observability
  • Error handling and resiliency protections ’cause it’s an imperfect world!
  • Publishing the new events to some other internal message handler that will take additional actions after our first command has “decided” what to do next
  • Publishing the new events as some kind of external message to another process
  • Enrolling in a transactional outbox of some sort or another to keep the system in a consistent state — and you really need to care about this capability!!!

And oh, yeah, do all that with minimal code ceremony, be testable with unit tests as much as possible, and be feasible to do automated integration testing when we have to.

We’ll get to all the items in that list above in this series, but I think in the next post I’d like to introduce Wolverine’s HTTP handler recipe and build out more aggregate command handlers, but this time with an HTTP endpoint. Until next time…

Building a Critter Stack Application: Wolverine as Mediator

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to help you succeed with your software development projects on a consulting basis whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator (this post)
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

In the previous posts I’ve been focused on Marten as a persistence tool. Today I want to introduce Wolverine into the mix, but strictly as a “Mediator” tool within the commonly used MVC Core or Minimal API tools for web service development.

While Wolverine does much, much more than what we’re going to use today, let’s stay with the theme of keeping these posts short and just dip our toes into the Wolverine water with a simple usage.

Using our web service project from previous posts, I’m going to add a reference to the main Wolverine nuget through:

dotnet add package WolverineFx

Next, let’s add Wolverine to our application with this one line of code within our Program file:

builder.Host.UseWolverine(opts =>
{
    // We'll add more here later, but the defaults are all
    // good enough for now
});

As a quick aside, Wolverine is added directly to the IHostBuilder instead of IServiceCollection through the UseWolverine() extension method because it’s also quietly sliding in Lamar as the underlying IoC container. Some folks have been upset at that, so let’s be upfront about it right now. While I may talk about Lamar diagnostics as part of this series, it’s unlikely that this will ever be an issue for most users in any way. Lamar has some functionality that was built specifically for Wolverine and is utilized quite heavily.

This time out, let’s move into the “C(ommand)” part of our CQRS architecture and build some handling for the CategoriseIncident command we’d initially discovered in our Event Storming session:

public class CategoriseIncident
{
    public Guid Id { get; set; }
    public IncidentCategory Category { get; set; }
    public int Version { get; set; }
}

And next, let’s build our very first ever Wolverine message handler for this command. It will load the existing IncidentDetails for the designated incident, decide if the category is being changed, and append a new event to the event stream using Marten’s IDocumentSession service. Written purposely in an explicit, “long hand” style, that handler could look like this (in later posts we will use other Wolverine capabilities to make this code much simpler while introducing a more robust set of validations):

public static class CategoriseIncidentHandler
{
    public static async Task Handle(
        CategoriseIncident command, 
        IDocumentSession session, 
        CancellationToken cancellationToken)
    {
        // Find the existing state of the referenced Incident
        var existing = await session
            .Events
            .AggregateStreamAsync<IncidentDetails>(command.Id, token: cancellationToken);

        // Don't worry, we're going to clean this up later
        if (existing == null)
        {
            throw new ArgumentOutOfRangeException(nameof(command), "Unknown incident id " + command.Id);
        }
        
        // We need to validate whether this command actually 
        // should do anything
        if (existing.Category != command.Category)
        {
            var categorised = new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };

            session.Events.Append(command.Id, categorised);
            await session.SaveChangesAsync(cancellationToken);
        }
    }
    
    // This is kinda faked out, nothing to see here!
    public static readonly Guid SystemId = Guid.NewGuid();
}

There are a couple of things I want you to note about the handler class above:

  • We don’t need any explicit configuration to help Wolverine discover and use this handler class. Instead, Wolverine will discover it within our main service assembly because it’s a public, concrete class suffixed with the name “Handler” (there are other alternatives for this discovery if you don’t like that approach).
  • Wolverine “knows” that the Handle() method is a handler for the CategoriseIncident command because the method is named “Handle” and its first argument is that command type.
  • Note that this handler is a static class. It doesn’t have to be, but making it static helps Wolverine shave off some object allocations at runtime.
  • Also note that Wolverine message handlers happily support “method injection” and allow you to inject IoC service dependencies like the Marten IDocumentSession through method arguments. You can also take the more traditional .NET approach of pulling everything in through a constructor and setting instance fields, but hey, why not write simpler code?
  • While it’s perfectly legal to handle multiple message types in the same handler class, I typically recommend keeping that a one to one relationship in most cases.
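To make that first point about discovery concrete: if the naming convention doesn’t suit you, handler discovery can be configured explicitly on the Wolverine options. Treat the exact Discovery method names below as my assumption about the configuration API surface — a sketch only, so check the Wolverine documentation for the version you’re on:

```csharp
builder.Host.UseWolverine(opts =>
{
    // Assumed API: turn off the "public class suffixed with 'Handler'" convention
    opts.Discovery.DisableConventionalDiscovery();

    // Assumed API: explicitly register just the handler types we want
    opts.Discovery.IncludeType<CategoriseIncidentHandler>();
});
```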

And next, let’s put this into context by having an MVC Core controller expose an HTTP route for this command type, then pass the command on to Wolverine where it will mediate between the HTTP outer world and the inner world of the application services like Marten:

// I'm doing it this way for now because this is 
// a common usage, but we'll move away from 
// this later into more of a "vertical slice"
// approach of organizing code
public class IncidentController : ControllerBase
{
    [HttpPost("/api/incidents/categorize")]
    public Task Categorize(
        [FromBody] CategoriseIncident command,
        [FromServices] IMessageBus bus)

        // IMessageBus is the main entry point into
        // using Wolverine
        => bus.InvokeAsync(command);
}
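For comparison, the same delegation to Wolverine could be done with a Minimal API endpoint instead of a controller. This sketch just uses the standard ASP.NET Core MapPost with the command model bound from the request body and IMessageBus resolved from the container:

```csharp
// Equivalent Minimal API endpoint: bind CategoriseIncident from the
// request body, then let Wolverine mediate exactly like the controller does
app.MapPost("/api/incidents/categorize",
    (CategoriseIncident command, IMessageBus bus) => bus.InvokeAsync(command));
```

Either way, the HTTP layer stays a thin shim over bus.InvokeAsync().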

Summary and Next Time

In this post we looked at the very simplest usage of Wolverine, how to integrate it into your codebase, and how to get started writing command handlers with Wolverine. What I’d like you to take away is that Wolverine is a very different animal from “IHandler of T” frameworks like MediatR, NServiceBus, MassTransit, or Brighter that require mandatory interface signatures and/or base classes. Even writing long hand code as I did here, I hope you can already see how much less code ceremony Wolverine requires compared to more typical .NET frameworks that solve similar problems.

I very purposely wrote the message handlers in a very explicit way, and left out some significant use cases like concurrency protection, user input validation, and cross cutting concerns. I’m not 100% sure where I want to go next, but over the next week we’ll look at concurrency protections with Marten, highly efficient GET HTTP endpoints with Marten and ASP.Net Core, and start getting into Wolverine’s HTTP endpoint model.

Why, you might ask, are all the Wolverine nugets suffixed with “Fx”? The Marten core team and some of our closest collaborators really liked the name “Wolverine” for this project and instantly came up with the project graphics, but when we tried to start publishing Nuget packages, we found out that someone is squatting on the name “Wolverine” in Nuget and we weren’t able to get the rights to that name. Rather than change course, we stubbornly went full speed ahead with the “WolverineFx” naming scheme just for the published Nugets.

Let’s Get Controversial for (only) a Minute!

When my wife and I watched the Silicon Valley show, I think she was bemused when I told her there was a pretty heated debate in development circles over “tabs vs spaces.”

I don’t want this to detract too much from the actual content of this series, but I have very mixed feelings about ASP.Net MVC Core as a framework and the whole idea of using a “mediator” as popularized by the MediatR library within an MVC Core application.

I’ve gone back and forth on both ASP.Net MVC in its various incarnations and also on MediatR, both alone and as a complement to MVC Core. Where I’ve landed right now is the opinion that MVC Core used by itself is a very flawed framework that can easily lead to unmaintainable code as an enterprise system grows over time, because typical interpretations of the “Clean Architecture” style in concert with MVC Core’s routing rules lead unwary developers to create bloated MVC controller classes.

While I was admittedly unimpressed with MediatR on its own merits when I first encountered it, I will happily admit that MediatR is helpful within MVC Core controllers as a way to offload operation specific code into more manageable pieces, as opposed to the bloated controllers that frequently result from using MVC Core by itself. I have since occasionally recommended the usage of MediatR within MVC Core codebases to my consulting clients as a way to help make their code easier to maintain over time.

If you’re interested, I touched on this theme somewhat in my talk A Contrarian View of Software Architecture from NDC Oslo 2023. And yes, I absolutely think you can build maintainable systems with MVC Core over time even without the MediatR crutch, but I think you have to veer away from the typical usage of MVC Core to do so and be very mindful of how you’re using the framework. In other words, MVC Core does not by itself lead teams to a “pit of success” for maintainable code in the long run. I think that MediatR or Wolverine with MVC Core can help, but I think we can do better in the long run by moving away from MVC Core.

By the time this series is over, I will be leaning very hard into organizing code in a vertical slice architecture style and seeing how to use the Critter Stack to create maintainability and testability without the typically complex “Ports and Adapter” style architecture that well meaning server side development teams have been trying to use in the past decade or two.

While I introduced Wolverine today as a “mediator” tool within MVC Core, by the time this series is done we’ll move away from MVC Core (with or without MediatR or “Wolverine as MediatR”) and use Wolverine’s HTTP endpoint model by itself as a simpler alternative with less code ceremony, and I’m going to try hard to make the case that this simpler model is a superior way to build systems.