Building a Critter Stack Application: The “Stateful Resource” Model

Hey, did you know that JasperFx Software is ready to offer formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to helping you succeed with your software development projects on a consulting basis, whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model (this post)
  19. Resiliency

I’ve personally spent quite a bit of time helping teams and organizations deal with older, legacy codebases where it might easily take a couple of days of working painstakingly through the instructions in some large Wiki page just to get the codebase running in a local development environment. That’s indicative of a high friction environment, and definitely not what we’d ideally like to have for our own teams.

Thinking about the external dependencies of our incident tracking, help desk API, we’ve utilized:

  1. Marten for persistence, which requires PostgreSQL database schema objects for our system
  2. Wolverine’s PostgreSQL-backed transactional outbox support, which also requires its own set of PostgreSQL database schema objects
  3. Rabbit MQ for asynchronous messaging, which requires queues, exchanges, and bindings to be set up in our message broker for the application to work

That’s a bit of stuff that needs to be configured within the Rabbit MQ or PostgreSQL infrastructure around our service in order to run our integration tests or the application itself for local testing.

Instead of the error prone, painstaking manual setup laboriously laid out in some Wiki page you can never remember how to find, let’s leverage the Critter Stack’s “Stateful Resource” model to quickly get our system ready to run in development.

Building on our existing application configuration, I’m going to add a couple more lines of code to our system’s Program file:

// Depending on your DevOps setup and policies,
// you may or may not actually want this enabled
// in production installations, but some folks do
if (builder.Environment.IsDevelopment())
{
    // This will direct our application to set up
    // all known "stateful resources" at application bootstrapping
    // time
    builder.Services.AddResourceSetupOnStartup();
}

And that’s that. If you’re using the integration test harness like we did in an earlier post, or just starting up the application normally, the application will check for the existence of the following resources and try to build out anything that’s missing:

  • The known Marten document tables and all the database objects to support Marten’s event sourcing
  • The necessary tables and functions for Wolverine’s transactional inbox, outbox, and scheduled messages (I’ll add a post later on those)
  • The known Rabbit MQ exchanges, queues, and bindings

Your application will need administrative privileges over all of these resources for any of this to work, of course, but you would at least have that at development time.
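As an aside, the stateful resource model is also extensible. If your system depends on some other bit of infrastructure, you can plug into the same startup and command line behavior by implementing Oakton’s IStatefulResource interface and registering it in your IoC container. Here’s a minimal sketch of a custom resource; the member signatures below are my recollection of the Oakton contract, so double check them against the Oakton documentation:

using Oakton.Resources;
using Spectre.Console;
using Spectre.Console.Rendering;

// A hypothetical custom stateful resource. The member signatures below
// reflect my recollection of Oakton's IStatefulResource contract
public class AttachmentStorageResource : IStatefulResource
{
    public string Type => "Storage";
    public string Name => "help-desk-attachments";

    // Assert that the resource exists and is reachable; throw if not
    public Task Check(CancellationToken token) => Task.CompletedTask;

    // Build out anything that's missing
    public Task Setup(CancellationToken token) => Task.CompletedTask;

    // Wipe any existing state, but leave the resource itself in place
    public Task ClearState(CancellationToken token) => Task.CompletedTask;

    // Remove the resource altogether
    public Task Teardown(CancellationToken token) => Task.CompletedTask;

    // Used by "resources check" to render status at the command line
    public Task<IRenderable> DetermineStatus(CancellationToken token)
        => Task.FromResult<IRenderable>(new Markup("[green]Ready[/]"));
}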

With this capability in place, the procedure for a new developer getting started with our codebase is to:

  1. Do a clean git clone of our codebase onto their local box
  2. Run docker compose up to start all the necessary infrastructure to run the system or the system’s integration tests locally
  3. Run the integration tests or start the system and go!

Easy-peasy.

But wait, there’s more! Assuming you have Oakton set up as your command line runner like we did in an earlier post, you’ve got some command line tooling that can help as well.

If you omit the call to builder.Services.AddResourceSetupOnStartup();, you could still go to the command line and use this command just once to set everything up:

dotnet run -- resources setup

To check on the status of any or all of the resources, you can use:

dotnet run -- resources check

which, for the HelpDesk.API, renders a status report for each known resource.

If you want to tear down all the existing data — and at least attempt to purge any Rabbit MQ queues of all messages — you can use:

dotnet run -- resources clear

There are a few other options you can read about in the Oakton documentation for the Stateful Resource model, but for right now, type dotnet run -- help resources to see Oakton’s built in help for the resources command, which runs down the supported usage.

Summary and What’s Next

The Critter Stack is trying really hard to create a productive, low friction development ecosystem for your projects. One of the ways it tries to make that happen is by being able to set up infrastructural dependencies automatically at runtime so a developer can just “clone n’ go” without the excruciating pain of the multi-page Wiki getting started instructions so painfully common in legacy codebases.

This stateful resource model is also supported for the Kafka transport (which is also local development friendly) and the cloud native Azure Service Bus and AWS SQS transports (Wolverine + AWS SQS does work with LocalStack just fine). In the cloud native cases, the credentials used by the Wolverine application will need the necessary rights to create queues, topics, and subscriptions. For the cloud native transports, there is also an option to prefix the names of all the queues, topics, and subscriptions so as to create an isolated environment per developer for a better local development story even when relying on cloud native technologies.
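To make that last point concrete, here’s a rough sketch of what that configuration might look like with the Azure Service Bus transport. I’m sketching this from memory, so treat the PrefixIdentifiers() call in particular as an assumption to verify against the Wolverine transport documentation:

using Wolverine;
using Wolverine.AzureServiceBus;

builder.Host.UseWolverine(opts =>
{
    opts.UseAzureServiceBus(builder.Configuration.GetConnectionString("asb")!)
        // Build out any missing queues, topics, and subscriptions
        // at bootstrapping time
        .AutoProvision()

        // Hypothetical usage: prefix every queue, topic, and subscription
        // name so that each developer gets an isolated environment
        .PrefixIdentifiers(Environment.UserName);
});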

I think I’ll add another post to this series where I switch the messaging to one of the cloud native approaches.

As for what’s next in this increasingly long series, I think we still have logging, open telemetry and metrics, resiliency, and maybe a post on Wolverine’s middleware support. That list is somewhat driven by recency bias around questions I’ve been asked here or there about Wolverine.

Building a Critter Stack Application: Messaging with Rabbit MQ

Hey, did you know that JasperFx Software is ready to offer formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to helping you succeed with your software development projects on a consulting basis, whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ (this post)
  18. The “Stateful Resource” Model
  19. Resiliency

To this point in the series, everything has happened within the context of our single HelpDesk.API project. We’ve utilized HTTP endpoints, Wolverine as a mediator, and sent messages through Wolverine’s local queueing features. Today, let’s add Rabbit MQ to the mix as a super, local development-friendly option for distributed processing and just barely dip our toes into Wolverine’s asynchronous messaging support.

As a reminder, earlier posts in this series include a diagram of our incident tracking, help desk system.

In our case, we’re going to create a separate service to handle outgoing emails and SMS messaging that I’ve inevitably named the “NotificationService.” For the communication between the Help Desk API and the Notification Service, we’re going to use a Rabbit MQ queue to send RingAllTheAlarms messages from our Help Desk API to the downstream Notification Service, which will formulate an email body or SMS message or who knows what according to our agent’s personal preferences.

I’ve heard a couple of variations of Zawinski’s Law over the years, stating that every system will eventually grow until it can read mail (or contain a half-assed implementation of LISP). My corollary to that is that every enterprise system will inevitably grow to include a separate service for sending notifications to users.

Earlier, we had built a message handler that potentially sent a RingAllTheAlarms message if an incident was assigned a critical priority:

    [AggregateHandler]
    public static (Events, OutgoingMessages) Handle(
        TryAssignPriority command, 
        IncidentDetails details,
        Customer customer)
    {
        var events = new Events();
        var messages = new OutgoingMessages();

        if (details.Category.HasValue && customer.Priorities.TryGetValue(details.Category.Value, out var priority))
        {
            if (details.Priority != priority)
            {
                events.Add(new IncidentPrioritised(priority, command.UserId));

                if (priority == IncidentPriority.Critical)
                {
                    messages.Add(new RingAllTheAlarms(command.IncidentId));
                }
            }
        }

        return (events, messages);
    }

When our system tries to publish that RingAllTheAlarms message, Wolverine tries to route that message to a subscribing endpoint (local queues are also considered to be endpoints by Wolverine), and publishes the message to each subscriber — or does nothing if there are no known subscribers for that message type.
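In code, all of that routing happens behind a single PublishAsync() call. Here’s a minimal sketch, where SoundTheAlarms() is just a hypothetical helper to show the shape:

// IMessageBus is injected by Wolverine; SoundTheAlarms() is just a
// hypothetical helper method to show the shape of explicit publishing
public static async Task SoundTheAlarms(IMessageBus bus, Guid incidentId)
{
    // Wolverine evaluates its routing rules for RingAllTheAlarms here.
    // If there are subscribers (a local queue, a Rabbit MQ exchange, etc.),
    // the message goes to each of them; if not, this call is a no-op
    await bus.PublishAsync(new RingAllTheAlarms(incidentId));
}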

Let’s first create our new Notification Service from scratch, with a quick call to:

dotnet new console

After that, I admittedly took a shortcut and just added a project reference to our Help Desk API project, because it’s late at night as I write this and I’m lazy by nature. In real usage, you’d probably at least start with a shared library just to define the message types that are exchanged between two or more processes.

To be clear, Wolverine does not require you to use shared types for the message bodies between Wolverine applications, but that frequently turns out to be the easiest mechanism to get started and it can easily be sufficient in many situations.
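In practice, such a shared library might contain nothing but simple record types. A sketch, with the HelpDesk.Contracts project name being hypothetical:

// In a shared "HelpDesk.Contracts" class library that both the
// Help Desk API and the Notification Service would reference
public record RingAllTheAlarms(Guid IncidentId);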

Back to our new Notification Service. I’m going to add a reference to Wolverine’s Rabbit MQ transport library (Wolverine.RabbitMQ) with:

dotnet add package WolverineFx.RabbitMQ

With that in place, the entire (faked up) Notification Service code is this:

using Helpdesk.Api;
using Microsoft.Extensions.Hosting;
using Oakton;
using Wolverine;
using Wolverine.RabbitMQ;

return await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Connect to Rabbit MQ
        // The default usage like this expects to connect to a Rabbit MQ
        // broker running on localhost at the default Rabbit MQ
        // port
        opts.UseRabbitMq();

        // Tell Wolverine to listen for incoming messages
        // from a Rabbit MQ queue 
        opts.ListenToRabbitQueue("notifications");
    }).RunOaktonCommands(args);


// Just to see that there is a message handler for the RingAllTheAlarms
// message
public static class RingAllTheAlarmsHandler
{
    public static void Handle(RingAllTheAlarms message)
    {
        Console.WriteLine("I'm going to scream out an alert about incident " + message.IncidentId);
    }
}

Moving back to our Help Desk API project, I’m going to add a reference to the WolverineFx.RabbitMQ Nuget, and add this code to define the outgoing subscription for the RingAllTheAlarms message:

builder.Host.UseWolverine(opts =>
{
    // Other configuration...
    
    // Opt into the transactional inbox/outbox on all messaging
    // endpoints
    opts.Policies.UseDurableOutboxOnAllSendingEndpoints();
    
    // Connecting to a local Rabbit MQ broker
    // at the default port
    opts.UseRabbitMq();

    // Adding a single Rabbit MQ messaging rule
    opts.PublishMessage<RingAllTheAlarms>()
        .ToRabbitExchange("notifications");

    // Other configuration...
});

I’m going to very highly recommend that you read up a little bit on Rabbit MQ’s model of exchanges, queues, and bindings before you try to use it in anger, because every message broker seems to have subtly different behavior. Just for this post though, you’ll see that the Help Desk API is publishing to a Rabbit MQ exchange named “notifications” and the Notification Service is listening to a queue named “notifications”. To fully connect the two services through Rabbit MQ, you’d need to add a binding from the “notifications” exchange to the “notifications” queue. You can certainly do that through any Rabbit MQ management mechanism, but you could also define that binding in Wolverine itself and let Wolverine put that all together for you at runtime, much like Wolverine and Marten can for their database schema dependencies.

Let’s revisit the Notification Service code and have the Wolverine setup do a little bit more for us by automatically building the right Rabbit MQ exchange, queue, and binding between our applications like so:

return await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseRabbitMq()
            // Make it build out any missing exchanges, queues, or bindings that
            // the system knows about as necessary
            .AutoProvision()
            
            // This is just to have Wolverine help us configure Rabbit MQ end to end
            // This isn't mandatory, but it might help you be more productive at development 
            // time
            .BindExchange("notifications").ToQueue("notifications", "notification_binding");

        // Tell Wolverine to listen for incoming messages
        // from a Rabbit MQ queue 
        opts.ListenToRabbitQueue("notifications");
    }).RunOaktonCommands(args);

And that’s actually that. We’re completely ready to go, assuming there’s a Rabbit MQ broker running on our local development box, which I usually run just through docker compose (here’s the docker-compose.yaml file from this sample application).

One thing to note for folks coming from a MassTransit or NServiceBus background: Wolverine does not need you to specify any kind of connectivity between message handlers and listening endpoints. That might become an “opt in” feature some day, but there’s nothing like that in Wolverine today.

Summary and What’s Next

I just barely exposed a little bit of what Wolverine can do while using Rabbit MQ as a messaging transport. There are a ton of levers and knobs to adjust for increased throughput or for more strict message ordering. There’s also a conventional routing capability that might be a good default for getting started.

As far as when you should use asynchronous messaging, my thinking is that you should pretty much always use asynchronous messaging between two processes unless you genuinely need the inline response from the downstream system. Otherwise, I think that using asynchronous messaging techniques helps to decouple systems from each other temporally, and gives you more tools for creating robust and resilient systems through error handling policies.
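In Wolverine terms, that choice largely comes down to which IMessageBus method you reach for. Here’s a quick sketch of the distinction, with the GetIncidentDetails query message being hypothetical:

// Hypothetical query message just for this sketch
public record GetIncidentDetails(Guid IncidentId);

public static class MessagingStyles
{
    // Fire and forget: the message is handed off (durably, with the
    // outbox from the earlier post) and the caller moves on. Failures
    // are dealt with by error handling policies, not by the caller
    public static ValueTask Asynchronous(IMessageBus bus, Guid incidentId)
        => bus.PublishAsync(new RingAllTheAlarms(incidentId));

    // Request/reply: the caller waits inline for a downstream handler
    // (assumed here to return IncidentDetails) to produce a response.
    // Reach for this only when you truly need the inline answer
    public static Task<IncidentDetails> Inline(IMessageBus bus, Guid incidentId)
        => bus.InvokeAsync<IncidentDetails>(new GetIncidentDetails(incidentId));
}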

And speaking of “resiliency”, I think that will be the subject of one of the remaining posts in this series.

Quick Update on Marten 7.0 (and Wolverine 2.0)

There’s a new Marten 7.0 beta 4 release out today with a new round of bug fixes and some performance enhancements. We’re getting closer to getting a 7.0 release out, so I thought I’d update the world a bit on what’s remaining. I’d also love to give folks a chance to weigh in on some of the outstanding work that may or may not make the cut for 7.0 or slide to later. Due to some commitments to clients, I’m hoping to have the release out by early February at the latest, but we’ll see.

A Wolverine 2.0 release will follow shortly, but that’s going to be almost completely about upgrading Wolverine to use the latest Marten and Weasel dependencies and shouldn’t result in any breaking changes.

What’s In Flight or Outstanding

There are several medium sized efforts either in flight or yet to come. User feedback is certainly welcome:

  • Low level database execution improvements. We’re doing a lot of work to integrate relatively newer ADO.Net features from Npgsql that will help us wring out a little better performance. As part of that work, we’re going to replace our homegrown resiliency feature (IRetryPolicy) with a more efficient and likely more effective approach using Polly baked into Marten. I was hesitant to take on Polly before because of its tendency to be a diamond dependency issue, but I think we’ve changed our minds about the risk/reward equation here. I think we’ll also get a little performance and scalability boost by using Polly’s static Lambda approach in place of our current approach. The reality is that while you probably shouldn’t be too consumed with micro-optimizations in application development, it’s much more valuable in infrastructure code like Marten to be as performant as possible.
  • Open Telemetry support baked in. I think this is a low hanging fruit issue that might be a great place for anyone to jump in. Please feel free to weigh in on the possible approaches we’ve outlined.
  • Better scalability for asynchronous projections and the ability to deploy projection and event changes with less or even zero downtime compared to the current Marten. I’ll refer you to a longer discussion for feedback on possible directions. That discussion also touches on topics around event data migrations and archival strategies.
  • Enabling built in support for strong typed identifiers. This is far more work than I personally think it’s worth, but plenty of folks tell us that it’s a must have feature even to the point where they tell us they won’t use Marten until this exists. This kind of thing is what drives me personally to make disparaging remarks about the DDD community’s seeming love of code ceremony. Grr.
  • “Partial” document updates with native PostgreSQL features. We’ve had this functionality for years, but it depends on the PLv8 extension to PostgreSQL, which is continuously harder to use, especially on the cloud. I think this could be a big win, especially for users coming from MongoDB.
  • Dynamic Tenant Database Discovery — customer request, and that means it goes to the top of the priority list. Weird how it works that way.
  • What else, folks? I don’t want the release to drag on forever, but there’s plenty of other things to do.

LINQ Improvements

From my perspective, the effective rewrite of the LINQ provider support for V7 is the single biggest change and improvement for Marten 7. As always, I’m hopeful that this shores up Marten’s technical foundation for years to come. I’d sum that work up as:

  • Glass Half Full: the new LINQ support covers a lot more scenarios that were missing previously, and especially improves both the number of supported use cases and the efficiency of the generated SQL for querying within child collections in many cases. Moreover, the new LINQ support should be better about telling you when it can’t support something instead of doing erroneous searches, and should be in much better shape for when we need to add new permutations to the support from user requests later.
  • Glass Half Empty: It took a long, long time to get this done and it was quite an opportunity cost for me personally. We also got a large GitHub sponsorship for this work, and while I was and am very grateful for that, I’m also feeling guilty about how long it took to finish that work.

And that, folks, is the life of a semi-successful OSS author in a nutshell.

If you’re curious, here’s a write up on GitHub about the new LINQ internals that was meant for Marten contributors.

Building a Critter Stack Application: Vertical Slice Architecture

Hey, did you know that JasperFx Software is ready to offer formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to helping you succeed with your software development projects on a consulting basis, whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture (this post)
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

I’m taking a short detour in this series as I prepare to give my “Contrarian Architecture” talk at the CodeMash 2024 conference today. In that talk (here’s a version from NDC Oslo 2023), I’m going to spend some time more or less bashing stereotypical usages of the Clean or Onion Architecture prescriptive approach.

While there’s nothing to prevent you from using either Wolverine or Marten within a typical Clean Architecture style code organization, the “Critter Stack” plays well within a lower code ceremony vertical slice architecture that I personally prefer.

First though, let’s talk about what I don’t like about the stereotypical Clean/Onion Architecture approach you commonly find in enterprise .NET systems. With this common mode of code organization, the incident tracking help desk service we have been building in this series might be organized something like:

  Class Name          Project
  IncidentController  HelpDesk.API
  IncidentService     HelpDesk.ServiceLayer
  Incident            HelpDesk.Domain
  IncidentRepository  HelpDesk.Data

Don’t laugh, because a lot of people do this.

This kind of code structure is primarily organized around the “nouns” of the system and reliant on the formal layering prescriptions to try to create a healthy separation of concerns. It’s probably perfectly fine for pure CRUD applications, but breaks down very badly over time for more workflow centric applications.

I despise this form of code organization in very large systems because:

  1. It scatters closely related code throughout the codebase
  2. You typically don’t spend a lot of time trying to reason about an entire layer at a time. Instead, you’re largely worried about the behavior of one single use case and the logical flow through the entire stack for that one use case
  3. The code layout tells you very little about what the application does as it’s primarily focused around technical concerns (hat tip to David Whitney for that insight)
  4. It’s high ceremony. Lots of layers, interfaces, and just a lot of stuff
  5. Abstractions around the low level persistence infrastructure can very easily lead you to poorly performing code and can make it much harder later to understand why code is performing poorly in production

Shifting to the Idiomatic Wolverine Approach

Let’s say that we’re sitting around a fire boasting of our victories in software development (that’s a lie, I’m telling horror stories about the worst systems I’ve ever seen) and you ask me “Jeremy, what is best in code?”

And I’d respond:

  • Low ceremony code that’s easy to read and write
  • Closely related code is close together
  • Unrelated code is separated
  • Code is organized around the “verbs” of the system, which in the case of Wolverine probably means the commands
  • The code structure by itself gives some insight into what the system actually does

Taking our LogIncident command, I’m going to put every drop of code related to that command in a single file called “LogIncident.cs”:

public record LogIncident(
    Guid CustomerId,
    Contact Contact,
    string Description
)
{
    public class LogIncidentValidator : AbstractValidator<LogIncident>
    {
        // I stole this idea of using inner classes to keep them
        // close to the actual model from *someone* online,
        // but don't remember who
        public LogIncidentValidator()
        {
            RuleFor(x => x.Description).NotEmpty().NotNull();
            RuleFor(x => x.Contact).NotNull();
        }
    }
};

public record NewIncidentResponse(Guid IncidentId) 
    : CreationResponse("/api/incidents/" + IncidentId);

public static class LogIncidentEndpoint
{
    [WolverineBefore]
    public static async Task<ProblemDetails> ValidateCustomer(
        LogIncident command, 
        
        // Method injection works just fine within middleware too
        IDocumentSession session)
    {
        var exists = await session.Query<Customer>().AnyAsync(x => x.Id == command.CustomerId);
        return exists
            ? WolverineContinue.NoProblems
            : new ProblemDetails { Detail = $"Unknown customer id {command.CustomerId}", Status = 400};
    }
    
    [WolverinePost("/api/incidents")]
    public static (NewIncidentResponse, IStartStream) Post(LogIncident command, User user)
    {
        var logged = new IncidentLogged(
            command.CustomerId, 
            command.Contact, 
            command.Description, 
            user.Id);

        var op = MartenOps.StartStream<Incident>(logged);
        
        return (new NewIncidentResponse(op.StreamId), op);
    }

}

Every single bit of code related to handling this operation in our system is in one file that we can read top to bottom. A few significant points about this code:

  • I think it’s working out well in other Wolverine systems to largely name the files based on command names or the request body models for HTTP endpoints. At least with systems being built with a CQRS approach. Using the command name allows the system to be more self descriptive when you’re just browsing the codebase for the first time
  • The behavioral logic is still isolated to the Post() method, and even though there is some direct data access in the same class in its ValidateCustomer() method, the Post() method is a pure function that can be unit tested without any mocks
  • There’s also no code unrelated to LogIncident anywhere in this file, so you bypass the problem you get in noun-centric code organizations where you have to train your brain to ignore a lot of unrelated code in an IncidentService that has nothing to do with the particular operation you’re working on at any one time
  • I’m not bothering to wrap any kind of repository abstraction around Marten’s IDocumentSession in this code sample. That’s not to say that I wouldn’t do so in the case of something more complicated, and especially if there’s some kind of complex set of data queries that would need to be reused in other commands
  • You can clearly see the cause and effect between the command input and any outcomes of that command. I think this is an important discussion all by itself because it can easily be hard to reason about that same kind of cause and effect in systems that split responsibilities within a single use case across different areas of the code and even across different projects or components. Codebases that are hard to reason about are very prone to regression errors down the line — and that’s the voice of painful experience talking.

I certainly wouldn’t use this “single file” approach on larger, more complex use cases, but it’s working out well for early Wolverine adopters so far. Since much of my criticism of Clean/Onion Architecture approaches is really about using prescriptive rules too literally, I would also say that I would deviate from this “single file” approach any time it was valuable to reuse code across commands or queries or just when the message handling for a single message gets complex enough to need or want other files to separate responsibilities just within that one use case.

Summary and What’s Next

Wolverine is optimized for a “Vertical Slice Architecture” code organization approach. Both Marten and Wolverine are meant to require as little code ceremony as they can, and that also makes the vertical slice architecture, and even the single file approach I showed here, feasible.


I’m not 100% sure what I’ll tackle next in this series, but roughly I’m still planning:

  • The “stateful resource” model in the Critter Stack for infrastructure resource setup and teardown we use to provide that “it just works” experience
  • External messaging with Rabbit MQ
  • Wolverine’s resiliency and error handling capabilities
  • Logging, observability, Open Telemetry, and metrics from Wolverine
  • Subscribing to Marten events

Building a Critter Stack Application: Easy Unit Testing with Pure Functions

Hey, did you know that JasperFx Software is ready to offer formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to helping you succeed with your software development projects on a consulting basis, whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions (this post)
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

Let’s start this post by making a bold statement that I’ll probably regret, but still spend the rest of this post trying to back up:

Remember the basic flow of our incident tracking, help desk service in this series.

Starting in the middle with the “Categorise Incident” command, our system’s workflow is something like:

  1. A technician will send a request to change the category of the incident
  2. If the system determines that the request will be changing the category, the system will append a new event to mark that state, and also publish a new command message to try to assign a priority to the incident automatically based on the customer data
  3. When the system handles that new “Try Assign Priority” command, it will look at the customer’s settings, and likewise append another event to record the change of priority for the incident. If the priority becomes critical, it will also publish a message to an external “Notification Service” — but for this post, let’s just worry about whether we’re correctly publishing the right message

In an earlier post, I showed this version of a message handler for the CategoriseIncident command:

public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();
      
    [AggregateHandler]
    // The object? as return value will be interpreted
    // by Wolverine as appending one or zero events
    public static async Task<object?> Handle(
        CategoriseIncident command, 
        IncidentDetails existing,
        IMessageBus bus)
    {
        if (existing.Category != command.Category)
        {
            // Send the message to any and all subscribers to this message
            await bus.PublishAsync(
                new TryAssignPriority { IncidentId = existing.Id });
            return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }
  
        // Wolverine will interpret this as "do no work"
        return null;
    }
}

Notice that this handler is injecting the Wolverine IMessageBus service into the handler method. We could test this code as is with a “fake” for IMessageBus just to verify whether the expected outgoing message for TryAssignPriority goes out or not. Helpfully, Wolverine even supplies a “spy” version of IMessageBus called TestMessageContext that can be used in unit tests as a stand in just to record what the outgoing messages were.
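For completeness, here’s roughly what that spy-based test could look like. I’m recalling the TestMessageContext assertion surface from memory, so verify the member names against Wolverine’s testing documentation:

    [Fact]
    public async Task publishes_try_assign_priority_when_the_category_changes()
    {
        var details = new IncidentDetails(
            Guid.NewGuid(),
            Guid.NewGuid(),
            IncidentStatus.Closed,
            Array.Empty<IncidentNote>(),
            IncidentCategory.Hardware);

        var command = new CategoriseIncident { Category = IncidentCategory.Database };

        // TestMessageContext records outgoing messages instead of sending them
        var spy = new TestMessageContext();

        var @event = await CategoriseIncidentHandler.Handle(command, details, spy);
        @event.ShouldBeOfType<IncidentCategorised>();

        // Assumption: the recorded envelopes are exposed something like this
        spy.Sent.Select(x => x.Message)
            .OfType<TryAssignPriority>()
            .ShouldHaveSingleItem();
    }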

My strong preference though is to use Wolverine’s concept of cascading messages to write a pure function such that the behavioral logic can be tested without any mocks, stubs, or other fakes. In the sample code above, we had been using Wolverine as “just” a “Mediator” within an MVC Core controller. This time around, let’s ditch the unnecessary “Mediator” ceremony and use a Wolverine HTTP endpoint for the same functionality. In this case we can write the same functionality as a pure function like so:

public static class CategoriseIncidentEndpoint
{
    [WolverinePost("/api/incidents/categorise"), AggregateHandler]
    public static (Events, OutgoingMessages) Post(
        CategoriseIncident command, 
        IncidentDetails existing, 
        User user)
    {
        var events = new Events();
        var messages = new OutgoingMessages();
        
        if (existing.Category != command.Category)
        {
            // Append a new event to the incident
            // stream
            events += new IncidentCategorised
            {
                Category = command.Category,
                UserId = user.Id
            };

            // Send a command message to try to assign the priority
            messages.Add(new TryAssignPriority
            {
                IncidentId = existing.Id,
                UserId = user.Id
            });
        }

        return (events, messages);
    }
}

In the endpoint above, we’re “pushing” all of the required inputs for our business logic into the Post() method, which decides what state changes should be captured and what additional actions should be taken through outgoing, cascaded messages.

A couple of notes about this code:

  • It’s using the aggregate handler workflow we introduced in an earlier post to “push” the IncidentDetails aggregate for the incident stream into the method. We’ll need this information to “decide” what to do next
  • The Events type is a Wolverine construct that tells Wolverine “hey, the objects in this collection are meant to be appended as events to the event stream for this aggregate.”
  • Likewise, the OutgoingMessages type is a Wolverine construct that — wait for it — tells Wolverine that the objects contained in that collection should be published as cascading messages after the database transaction succeeds
  • The Marten + Wolverine transactional middleware is calling Marten’s IDocumentSession.SaveChangesAsync() to commit the logical transaction, and also dealing with the transaction outbox mechanics for the cascading messages from the OutgoingMessages collection.

Alright, with all that said, let’s look at a unit test for a CategoriseIncident command message that results in the category being changed:

    [Fact]
    public void raise_categorized_event_if_changed()
    {
        var command = new CategoriseIncident
        {
            Category = IncidentCategory.Database
        };

        var details = new IncidentDetails(
            Guid.NewGuid(), 
            Guid.NewGuid(), 
            IncidentStatus.Closed, 
            Array.Empty<IncidentNote>(),
            IncidentCategory.Hardware);

        var user = new User(Guid.NewGuid());
        var (events, messages) = CategoriseIncidentEndpoint.Post(command, details, user);

        // There should be one appended event
        var categorised = events.Single()
            .ShouldBeOfType<IncidentCategorised>();
        
        categorised
            .Category.ShouldBe(IncidentCategory.Database);
        
        categorised.UserId.ShouldBe(user.Id);

        // And there should be a single outgoing message
        var message = messages.Single()
            .ShouldBeOfType<TryAssignPriority>();
        
        message.IncidentId.ShouldBe(details.Id);
        message.UserId.ShouldBe(user.Id);

    }

In real life, I’d probably opt to break that unit test into a BDD-like context and individual tests to assert the expected event(s) being appended and the expected outgoing messages, but this is conceptually easier and I didn’t sleep well last night, so this is what you get!
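For contrast, here’s a quick sketch of the negative case, where the category is unchanged and the endpoint should append no events and publish no messages:

    [Fact]
    public void do_nothing_if_category_is_unchanged()
    {
        var command = new CategoriseIncident
        {
            Category = IncidentCategory.Hardware
        };

        // The existing incident already has the same category
        var details = new IncidentDetails(
            Guid.NewGuid(),
            Guid.NewGuid(),
            IncidentStatus.Closed,
            Array.Empty<IncidentNote>(),
            IncidentCategory.Hardware);

        var user = new User(Guid.NewGuid());
        var (events, messages) = CategoriseIncidentEndpoint.Post(command, details, user);

        // No events should be appended, and no messages published
        events.ShouldBeEmpty();
        messages.ShouldBeEmpty();
    }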

Let’s move on to the message handler for the TryAssignPriority message, and also make this a pure function so we can easily test the behavior:

public static class TryAssignPriorityHandler
{
    // Wolverine will call this method before the "real" Handler method,
    // and it can "magically" connect that the Customer object should be delivered
    // to the Handle() method at runtime
    public static Task<Customer?> LoadAsync(IncidentDetails details, IDocumentSession session)
    {
        return session.LoadAsync<Customer>(details.CustomerId);
    }

    // There's some database lookup at runtime, but I've isolated that above, so the
    // behavioral logic that "decides" what to do is a pure function below. 
    [AggregateHandler]
    public static (Events, OutgoingMessages) Handle(
        TryAssignPriority command, 
        IncidentDetails details,
        Customer customer)
    {
        var events = new Events();
        var messages = new OutgoingMessages();

        if (details.Category.HasValue && customer.Priorities.TryGetValue(details.Category.Value, out var priority))
        {
            if (details.Priority != priority)
            {
                events.Add(new IncidentPrioritised(priority, command.UserId));

                if (priority == IncidentPriority.Critical)
                {
                    messages.Add(new RingAllTheAlarms(command.IncidentId));
                }
            }
        }

        return (events, messages);
    }
}

I’d ask you to notice the LoadAsync() method above. It’s part of the logical handler workflow, but Wolverine is letting us keep that separate from the main “decider” message Handle() method. We’d have to test the entire handler with an integration test eventually, but we can happily write fast running, fine grained unit tests on the expected behavior by just “pushing” inputs into the Handle() method and measuring the events and outgoing messages just by checking the return values.
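A unit test for the critical path might look like the sketch below. The construction of the Customer document here is hypothetical (its real shape lives in the sample repository), but the point stands that every input is a plain object:

    [Fact]
    public void ring_the_alarms_when_priority_becomes_critical()
    {
        var details = new IncidentDetails(
            Guid.NewGuid(),
            Guid.NewGuid(),
            IncidentStatus.Closed,
            Array.Empty<IncidentNote>(),
            IncidentCategory.Database);

        // Hypothetical construction of the Customer document with a
        // priority mapping that makes database incidents critical
        var customer = new Customer
        {
            Priorities =
            {
                [IncidentCategory.Database] = IncidentPriority.Critical
            }
        };

        var command = new TryAssignPriority { IncidentId = details.Id };

        var (events, messages) = TryAssignPriorityHandler.Handle(command, details, customer);

        // The priority change should be recorded as an event...
        events.Single().ShouldBeOfType<IncidentPrioritised>();

        // ...and the critical priority should trigger the alarm message
        messages.Single().ShouldBeOfType<RingAllTheAlarms>()
            .IncidentId.ShouldBe(details.Id);
    }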

Summary and What’s Next

Wolverine’s approach has always been driven by the desire to make your application code as testable as possible. Originally that meant to just keep the framework (Wolverine itself) out of your application code as much as possible. Later on, the Wolverine community was influenced by more Functional Programming techniques and Jim Shore’s paper on Testing without Mocks.

Specifically, Wolverine embraced the idea of the “A-Frame Architecture”, with Wolverine itself in the role of the mediator/controller/conductor, coordinating between infrastructural concerns like Marten and your own business logic code in message handlers or HTTP endpoint methods without creating a direct coupling between your behavioral logic code and your infrastructure.

If you take advantage of Wolverine features like cascading messages, side effects, and compound handlers to decompose your system in a more FP-esque way while letting Wolverine handle the coordination, you can arrive at much more testable code.

I said earlier that I’d get to Rabbit MQ messaging, and I’ll get around to that soon. To fit in with one of my CodeMash 2024 talks this Friday, I might take a little side trip into how the “Critter Stack” plays well inside of a low ceremony vertical slice architecture as I get ready to absolutely blast away at the “Clean/Onion Architecture” this week.

Building a Critter Stack Application: Wolverine HTTP Endpoints

Hey, did you know that JasperFx Software is ready to offer formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to helping you succeed with your software development projects on a consulting basis, whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints (this post)
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

Heretofore in this series, I’ve been using ASP.Net MVC Core controllers anytime we’ve had to build HTTP endpoints for our incident tracking, help desk system in order to introduce new concepts a little more slowly.

If you would, let’s refer back to an earlier incarnation of an HTTP endpoint to handle our LogIncident command from an earlier post in this series:

public class IncidentController : ControllerBase
{
    private readonly IDocumentSession _session;
 
    public IncidentController(IDocumentSession session)
    {
        _session = session;
    }
 
    [HttpPost("/api/incidents")]
    public async Task<IResult> Log(
        [FromBody] LogIncident command
        )
    {
        var userId = currentUserId();
        var logged = new IncidentLogged(command.CustomerId, command.Contact, command.Description, userId);
 
        var incidentId = _session.Events.StartStream(logged).Id;
        await _session.SaveChangesAsync(HttpContext.RequestAborted);
 
        return Results.Created("/incidents/" + incidentId, incidentId);
    }
 
    private Guid currentUserId()
    {
        // let's say that we do something here that "finds" the
        // user id as a Guid from the ClaimsPrincipal
        var userIdClaim = User.FindFirst("user-id");
        if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var id))
        {
            return id;
        }
 
        throw new UnauthorizedAccessException("No user");
    }
}

Just to be as clear as possible here, the Wolverine HTTP endpoints feature introduced in this post can be mixed and matched with MVC Core and/or Minimal API or even FastEndpoints within the same application and routing tree. I think the ASP.Net team deserves some serious credit for making that last sentence a fact.
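To make that concrete, the Program file can map all of these styles side by side on the same WebApplication. A minimal sketch:

// Wolverine HTTP endpoints discovered from attributes like [WolverinePost]
app.MapWolverineEndpoints();

// ...living right next to a garden variety Minimal API route
app.MapGet("/api/ping", () => "pong");

// ...and any existing MVC Core controllers
app.MapControllers();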

Today though, let’s use Wolverine HTTP endpoints and rewrite that controller method above the “Wolverine way.” To get started, add a Nuget reference to the help desk service like so:

dotnet add package WolverineFx.Http

Next, let’s break into our Program file and add Wolverine endpoints to our routing tree near the bottom of the file like so:

app.MapWolverineEndpoints(opts =>
{
    // We'll add a little more in a bit...
});

// Just to show where the above code is within the context
// of the Program file...
return await app.RunOaktonCommands(args);

Now, let’s make our first cut at a Wolverine HTTP endpoint for the LogIncident command, but I’m purposely going to do it without introducing a lot of new concepts, so please bear with me a bit:

public record NewIncidentResponse(Guid IncidentId) 
    : CreationResponse("/api/incidents/" + IncidentId);

public static class LogIncidentEndpoint
{
    [WolverinePost("/api/incidents")]
    public static NewIncidentResponse Post(
        // No [FromBody] stuff necessary
        LogIncident command,
        
        // Service injection is automatic,
        // just like message handlers
        IDocumentSession session,
        
        // You can take in an argument for HttpContext
        // or immediate members of HttpContext
        // as method arguments
        ClaimsPrincipal principal)
    {
        // Some ugly code to find the user id
        // within a claim for the currently authenticated
        // user
        Guid userId = Guid.Empty;
        var userIdClaim = principal.FindFirst("user-id");
        if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var claimValue))
        {
            userId = claimValue;
        }
        
        var logged = new IncidentLogged(command.CustomerId, command.Contact, command.Description, userId);

        var id = session.Events.StartStream<Incident>(logged).Id;

        return new NewIncidentResponse(id);
    }
}

Here are a few salient facts about the code above to explain what it’s doing:

  • The [WolverinePost] attribute tells Wolverine that hey, this method is an HTTP handler, and Wolverine will discover this method and add it to the application’s endpoint routing tree at bootstrapping time.
  • Just like Wolverine message handlers, the endpoint methods are flexible and Wolverine generates code around your code to mediate between the raw HttpContext for the request and your code
  • We have already enabled Marten transactional middleware for our message handlers in an earlier post, and that happily applies to Wolverine HTTP endpoints as well. That helps make our endpoint method be just a synchronous method with the transactional middleware dealing with the ugly asynchronous stuff for us.
  • You can “inject” HttpContext and its immediate children into the method signatures as I did with the ClaimsPrincipal up above
  • Method injection is automatic without any silly [FromServices] attributes, and that’s what’s happening with the IDocumentSession argument
  • The LogIncident parameter is assumed to be the HTTP request body due to being the first argument, and it will be deserialized from the incoming JSON in the request body just like you’d probably expect
  • The NewIncidentResponse type is roughly the equivalent to using Results.Created() in Minimal API to create a response body with the url of the newly created Incident stream and an HTTP status code of 201 for “Created.” What’s different about Wolverine.HTTP is that it can infer OpenAPI documentation from the signature of that type without requiring you to pollute your code by manually adding [ProducesResponseType] attributes on the method to get a “proper” OpenAPI document for the endpoint.

Moving on, that user id detection from the ClaimsPrincipal looks a little bit ugly to me, and likely to be repetitive. Let’s ameliorate that by introducing Wolverine’s flavor of HTTP middleware and move that code to this class:

// Using the custom type makes it easier
// for the Wolverine code generation to route
// things around. I'm not ashamed.
public record User(Guid Id);

public static class UserDetectionMiddleware
{
    public static (User, ProblemDetails) Load(ClaimsPrincipal principal)
    {
        var userIdClaim = principal.FindFirst("user-id");
        if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var id))
        {
            // Everything is good, keep on trucking with this request!
            return (new User(id), WolverineContinue.NoProblems);
        }
        
        // Nope, nope, nope. We got problems, so stop the presses and emit a ProblemDetails response
        // with a 400 status code telling the caller that there's no valid user for this request
        return (new User(Guid.Empty), new ProblemDetails { Detail = "No valid user", Status = 400});
    }
}

Do note the usage of ProblemDetails in that middleware. If there is no user-id claim on the ClaimsPrincipal, we’ll abort the request by writing out the ProblemDetails stating there’s no valid user. This pattern is baked into Wolverine.HTTP to help create one off request validations. We’ll utilize this quite a bit more later.

Next, I need to add that new bit of middleware to our application. As a shortcut, I’m going to just add it to every single Wolverine HTTP endpoint by breaking back into our Program file and adding this line of code:

app.MapWolverineEndpoints(opts =>
{
    // We'll add a little more in a bit...
    
    // Creates a User object in HTTP requests based on
    // the "user-id" claim
    opts.AddMiddleware(typeof(UserDetectionMiddleware));
});

Now, back to our endpoint code and I’ll take advantage of that middleware by changing the method to this:

    [WolverinePost("/api/incidents")]
    public static NewIncidentResponse Post(
        // No [FromBody] stuff necessary
        LogIncident command,
        
        // Service injection is automatic,
        // just like message handlers
        IDocumentSession session,
        
        // This will be created for us through the new user detection
        // middleware
        User user)
    {
        var logged = new IncidentLogged(
            command.CustomerId, 
            command.Contact, 
            command.Description, 
            user.Id);

        var id = session.Events.StartStream<Incident>(logged).Id;

        return new NewIncidentResponse(id);
    }

This is a little bit of a bonus, but let’s also get rid of the need to inject the Marten IDocumentSession service by using a Wolverine “side effect” with this equivalent code:

    [WolverinePost("/api/incidents")]
    public static (NewIncidentResponse, IStartStream) Post(LogIncident command, User user)
    {
        var logged = new IncidentLogged(
            command.CustomerId, 
            command.Contact, 
            command.Description, 
            user.Id);

        var op = MartenOps.StartStream<Incident>(logged);
        
        return (new NewIncidentResponse(op.StreamId), op);
    }

In the code above I’m using the MartenOps.StartStream() method to return a “side effect” that will create a new Marten stream as part of the request instead of directly interacting with the IDocumentSession from Marten. That’s a small thing you might not care for, but it can lead to the elimination of mock objects within your unit tests as you can now write a state-based test directly against the method above like so:

public class LogIncident_handling
{
    [Fact]
    public void handle_the_log_incident_command()
    {
        // This is trivial, but the point is that 
        // we now have a pure function that can be
        // unit tested by pushing inputs in and measuring
        // outputs without any pesky mock object setup
        var contact = new Contact(ContactChannel.Email);
        var theCommand = new LogIncident(BaselineData.Customer1Id, contact, "It's broken");

        var theUser = new User(Guid.NewGuid());

        var (_, stream) = LogIncidentEndpoint.Post(theCommand, theUser);

        // Test the *decision* to emit the correct
        // events and make sure all that pesky left/right
        // hand mapping is correct
        var logged = stream.Events.Single()
            .ShouldBeOfType<IncidentLogged>();
        
        logged.CustomerId.ShouldBe(theCommand.CustomerId);
        logged.Contact.ShouldBe(theCommand.Contact);
        logged.LoggedBy.ShouldBe(theUser.Id);
    }
}

Hey, let’s add some validation too!

We’ve already introduced middleware, so let’s just incorporate the popular Fluent Validation library into our project and let it do some basic validation on the incoming LogIncident command body, and if any validation fails, pull the ripcord and parachute out of the request with a ProblemDetails body and 400 status code that describes the validation errors.

Let’s add that in by first adding some pre-packaged middleware for Wolverine.HTTP with:

dotnet add package WolverineFx.Http.FluentValidation

Next, I have to add the usage of that middleware through this new line of code:

app.MapWolverineEndpoints(opts =>
{
    // Direct Wolverine.HTTP to use Fluent Validation
    // middleware to validate any request bodies where
    // there's a known validator (or many validators)
    opts.UseFluentValidationProblemDetailMiddleware();
    
    // Creates a User object in HTTP requests based on
    // the "user-id" claim
    opts.AddMiddleware(typeof(UserDetectionMiddleware));
});

Next, let’s add an actual validator for our LogIncident command. In this case the model is just an internal concern of our service, so I’ll embed the new validator as an inner type of the command type like so:

public record LogIncident(
    Guid CustomerId,
    Contact Contact,
    string Description
)
{
    public class LogIncidentValidator : AbstractValidator<LogIncident>
    {
        // I stole this idea of using inner classes to keep them
        // close to the actual model from *someone* online,
        // but don't remember who
        public LogIncidentValidator()
        {
            RuleFor(x => x.Description).NotEmpty().NotNull();
            RuleFor(x => x.Contact).NotNull();
        }
    }
};

Now, Wolverine does have to “know” about these validators to use them within the endpoint handling, so these types need to be registered in the application’s IoC container against the right IValidator<T> interface. You could do that registration yourself, but Wolverine has a (Lamar) helper to find and register these validators within your project in a way that’s most efficient at runtime (i.e., there’s a micro optimization that gives these validators a singleton lifetime in the container if Wolverine can see that the types are stateless). I’ll use that little helper in our Program file within the UseWolverine() configuration like so:

builder.Host.UseWolverine(opts =>
{
    // lots more stuff unfortunately, but focus on the line below
    // just for now:-)
    
    // Apply the validation middleware *and* discover and register
    // Fluent Validation validators
    opts.UseFluentValidation();

});

And that’s that. We’ve now got Fluent Validation wired into the request handling for the LogIncident command. In a later section, I’ll explain how Wolverine does this, and try to sell you all on the idea that Wolverine is able to do this more efficiently than other commonly used frameworks *cough* MediatR *cough* that depend on conditional runtime code.
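With the Alba harness from the earlier integration testing post, we can also prove out the new validation behavior end to end. Here’s a sketch that sends an empty Description, which should trip the NotEmpty() rule:

    [Fact]
    public async Task reject_log_incident_with_an_empty_description()
    {
        var user = new User(Guid.NewGuid());

        await Scenario(x =>
        {
            var contact = new Contact(ContactChannel.Email);

            // The empty Description should fail the Fluent Validation rules
            x.Post.Json(new LogIncident(BaselineData.Customer1Id, contact, ""))
                .ToUrl("/api/incidents");

            x.WithClaim(new Claim("user-id", user.Id.ToString()));

            // The validation middleware should stop the request with a
            // ProblemDetails response before the endpoint method is called
            x.StatusCodeShouldBe(400);
        });
    }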

One off validation with “Compound Handlers”

As you might have noticed, the LogIncident command has a CustomerId property that we’re using as is within our HTTP handler. We should never just trust the inputs of a random client, so let’s at least validate that the command refers to a real customer.

Now, typically I like to make Wolverine message handler or HTTP endpoint methods be the “happy path” and handle exception cases and one off validations with a Wolverine feature we inelegantly call “compound handlers.”

I’m going to add a new method to our LogIncidentEndpoint class like so:

    // Wolverine has some naming conventions for Before/Load
    // or After/AfterAsync, but you can use a more descriptive
    // method name and help Wolverine out with an attribute
    [WolverineBefore]
    public static async Task<ProblemDetails> ValidateCustomer(
        LogIncident command, 
        
        // Method injection works just fine within middleware too
        IDocumentSession session)
    {
        var exists = await session.Query<Customer>().AnyAsync(x => x.Id == command.CustomerId);
        return exists
            ? WolverineContinue.NoProblems
            : new ProblemDetails { Detail = $"Unknown customer id {command.CustomerId}", Status = 400};
    }
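As the comment in that method hints, the [WolverineBefore] attribute is only necessary because I wanted a descriptive method name. If you stick to Wolverine’s naming conventions, a method named Before (or BeforeAsync) on the same class should work without the attribute. A minimal sketch of the equivalent conventional version:

    // Same middleware as above, but discovered through Wolverine's
    // Before/BeforeAsync naming convention instead of the attribute
    public static async Task<ProblemDetails> Before(
        LogIncident command,
        IDocumentSession session)
    {
        var exists = await session.Query<Customer>().AnyAsync(x => x.Id == command.CustomerId);
        return exists
            ? WolverineContinue.NoProblems
            : new ProblemDetails { Detail = $"Unknown customer id {command.CustomerId}", Status = 400 };
    }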

Integration Testing

While the individual methods and middleware can all be tested separately, you do want to put everything together with an integration test to prove whether all this magic really works. As I described in an earlier post where we learned how to use Alba to create an integration testing harness for a “critter stack” application, we can write an end-to-end integration test against the HTTP endpoint like so (this sample doesn’t cover every permutation, but hopefully you get the point):

    [Fact]
    public async Task create_a_new_incident_happy_path()
    {
        // We'll need a user
        var user = new User(Guid.NewGuid());
        
        // Log a new incident first
        var initial = await Scenario(x =>
        {
            var contact = new Contact(ContactChannel.Email);
            x.Post.Json(new LogIncident(BaselineData.Customer1Id, contact, "It's broken")).ToUrl("/api/incidents");
            x.StatusCodeShouldBe(201);
            
            x.WithClaim(new Claim("user-id", user.Id.ToString()));
        });

        var incidentId = initial.ReadAsJson<NewIncidentResponse>().IncidentId;

        using var session = Store.LightweightSession();
        var events = await session.Events.FetchStreamAsync(incidentId);
        var logged = events.First().ShouldBeOfType<IncidentLogged>();

        // This deserves more assertions, but you get the point...
        logged.CustomerId.ShouldBe(BaselineData.Customer1Id);
    }

    [Fact]
    public async Task log_incident_with_invalid_customer()
    {
        // We'll need a user
        var user = new User(Guid.NewGuid());
        
        // Reject the new incident because the Customer for 
        // the command cannot be found
        var initial = await Scenario(x =>
        {
            var contact = new Contact(ContactChannel.Email);
            var nonExistentCustomerId = Guid.NewGuid();
            x.Post.Json(new LogIncident(nonExistentCustomerId, contact, "It's broken")).ToUrl("/api/incidents");
            x.StatusCodeShouldBe(400);
            
            x.WithClaim(new Claim("user-id", user.Id.ToString()));
        });
    }
}

Um, how does this all work?

So far I’ve shown you some “magic” code, and that tends to really upset some folks. I also made some big time claims about how Wolverine is able to be more efficient at runtime (alas, there is a significant “cold start” problem you can easily work around, so don’t get upset if your first ever Wolverine request isn’t snappy).

Wolverine works by using code generation to wrap its handling code around your code. That includes the middleware, and the usage of any IoC services as well. Moreover, do you know what the fastest IoC container is in all the .NET land? I certainly think that Lamar is at least in the game for that one, but nope, the answer is no IoC container at runtime.

One of the advantages of this approach is that we can preview the generated code to unravel the “magic” and explain what Wolverine is doing at runtime. Moreover, we’ve tried to add descriptive comments to the generated code to further explain what and why code is in place.
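If you’re running the application through the Oakton command line integration we set up earlier in this series, you should be able to dump that generated code yourself with something like:

dotnet run -- codegen preview

There’s also a codegen write command that should write the generated classes out to files (under Internal/Generated by default) so you can read them in your IDE.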

See more about this in my post Unraveling the Magic in Wolverine.
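Just to give the generated code below some context, the endpoint method being wrapped looks roughly like this. This is a reconstruction from the earlier post (the exact event and response shapes are assumptions here), but the important part is the tuple return value: the NewIncidentResponse becomes the HTTP response body, while the IStartStream value is a Wolverine “side effect” that starts the new Marten event stream:

public static class LogIncidentEndpoint
{
    [WolverinePost("/api/incidents")]
    public static (NewIncidentResponse, IStartStream) Post(LogIncident command, User user)
    {
        // Assumed event shape; the real IncidentLogged event was
        // defined in an earlier post in this series
        var logged = new IncidentLogged(command.CustomerId, command.Contact, command.Description, user.Id);

        // MartenOps.StartStream() is a Wolverine.Marten side effect
        // that starts a brand new event stream when the Marten
        // session is committed
        var start = MartenOps.StartStream<Incident>(logged);

        return (new NewIncidentResponse(start.StreamId), start);
    }
}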

Here’s the generated code for our LogIncident endpoint (warning, ugly generated code ahead):

// <auto-generated/>
#pragma warning disable
using FluentValidation;
using Microsoft.AspNetCore.Routing;
using System;
using System.Linq;
using Wolverine.Http;
using Wolverine.Http.FluentValidation;
using Wolverine.Marten.Publishing;
using Wolverine.Runtime;

namespace Internal.Generated.WolverineHandlers
{
    // START: POST_api_incidents
    public class POST_api_incidents : Wolverine.Http.HttpHandler
    {
        private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
        private readonly Wolverine.Runtime.IWolverineRuntime _wolverineRuntime;
        private readonly Wolverine.Marten.Publishing.OutboxedSessionFactory _outboxedSessionFactory;
        private readonly FluentValidation.IValidator<Helpdesk.Api.LogIncident> _validator;
        private readonly Wolverine.Http.FluentValidation.IProblemDetailSource<Helpdesk.Api.LogIncident> _problemDetailSource;

        public POST_api_incidents(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Runtime.IWolverineRuntime wolverineRuntime, Wolverine.Marten.Publishing.OutboxedSessionFactory outboxedSessionFactory, FluentValidation.IValidator<Helpdesk.Api.LogIncident> validator, Wolverine.Http.FluentValidation.IProblemDetailSource<Helpdesk.Api.LogIncident> problemDetailSource) : base(wolverineHttpOptions)
        {
            _wolverineHttpOptions = wolverineHttpOptions;
            _wolverineRuntime = wolverineRuntime;
            _outboxedSessionFactory = outboxedSessionFactory;
            _validator = validator;
            _problemDetailSource = problemDetailSource;
        }



        public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
        {
            var messageContext = new Wolverine.Runtime.MessageContext(_wolverineRuntime);
            // Building the Marten session
            await using var documentSession = _outboxedSessionFactory.OpenSession(messageContext);
            // Reading the request body via JSON deserialization
            var (command, jsonContinue) = await ReadJsonAsync<Helpdesk.Api.LogIncident>(httpContext);
            if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;
            
            // Execute FluentValidation validators
            var result1 = await Wolverine.Http.FluentValidation.Internals.FluentValidationHttpExecutor.ExecuteOne<Helpdesk.Api.LogIncident>(_validator, _problemDetailSource, command).ConfigureAwait(false);

            // Evaluate whether or not the execution should be stopped based on the IResult value
            if (!(result1 is Wolverine.Http.WolverineContinue))
            {
                await result1.ExecuteAsync(httpContext).ConfigureAwait(false);
                return;
            }


            (var user, var problemDetails2) = Helpdesk.Api.UserDetectionMiddleware.Load(httpContext.User);
            // Evaluate whether the processing should stop if there are any problems
            if (!(ReferenceEquals(problemDetails2, Wolverine.Http.WolverineContinue.NoProblems)))
            {
                await WriteProblems(problemDetails2, httpContext).ConfigureAwait(false);
                return;
            }


            var problemDetails3 = await Helpdesk.Api.LogIncidentEndpoint.ValidateCustomer(command, documentSession).ConfigureAwait(false);
            // Evaluate whether the processing should stop if there are any problems
            if (!(ReferenceEquals(problemDetails3, Wolverine.Http.WolverineContinue.NoProblems)))
            {
                await WriteProblems(problemDetails3, httpContext).ConfigureAwait(false);
                return;
            }


            
            // The actual HTTP request handler execution
            (var newIncidentResponse_response, var startStream) = Helpdesk.Api.LogIncidentEndpoint.Post(command, user);

            
            // Placed by Wolverine's ISideEffect policy
            startStream.Execute(documentSession);

            // This response type customizes the HTTP response
            ApplyHttpAware(newIncidentResponse_response, httpContext);
            
            // Commit any outstanding Marten changes
            await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);

            
            // Have to flush outgoing messages just in case Marten did nothing because of https://github.com/JasperFx/wolverine/issues/536
            await messageContext.FlushOutgoingMessagesAsync().ConfigureAwait(false);

            // Writing the response body to JSON because this was the first 'return variable' in the method signature
            await WriteJsonAsync(httpContext, newIncidentResponse_response);
        }

    }

    // END: POST_api_incidents
    
    
}


Summary and What’s Next

The Wolverine.HTTP library was originally built to be a supplement to MVC Core or Minimal API by allowing you to create endpoints that integrated well into Wolverine’s messaging, transactional outbox functionality, and existing transactional middleware. It has since grown into more of a full-fledged alternative for building web services, but with potential for substantially less ceremony and far more testability than MVC Core.

In later posts I’ll talk more about the runtime architecture and how Wolverine squeezes out more performance by eliminating conditional runtime switching, reducing object allocations, and sidestepping the dictionary lookups that are endemic to other “flexible” .NET frameworks like MVC Core.

Wolverine.HTTP has not yet been used with Razor at all, and I’m not sure that will ever happen. Not to worry though, you can happily use Wolverine.HTTP in the same application with MVC Core controllers or even Minimal API endpoints.

OpenAPI support has been a constant challenge with Wolverine.HTTP as the OpenAPI generation in ASP.Net Core is very MVC-centric, but I think we’re in much better shape now.

In the next post, I think we’ll introduce asynchronous messaging with Rabbit MQ. At some point in this series I’m going to talk more about how the “Critter Stack” is well suited for a lower ceremony vertical slice architecture that (hopefully) creates a maintainable and testable codebase without all the typical Clean/Onion Architecture baggage that I could personally do without.

And just for fun…

My “History” with ASP.Net MVC

There’s no useful content in this section, just some navel-gazing. Even though I really haven’t had to use ASP.Net MVC too terribly much, I do have a long history with it:

  1. In the beginning, there was what we now call ASP Classic, and it was good. For that day and time anyway, when we would happily code directly in production, before TDD and SOLID and namby-pamby “source control.” (I started my development career in “Shadow IT” if that’s not obvious here). And when we did use source control, it was VSS on the sly, because the official source control in the office was something far, far worse and COBOL-centric that I don’t think even exists any longer.
  2. Next there was ASP.Net WebForms and it was dreadful. I hated it.
  3. We started collectively learning about Agile and wanted to practice Test Driven Development, and began to hate WebForms even more
  4. Ruby on Rails came out in the middle 00’s and made what later became the ALT.Net community absolutely loathe WebForms even more than we already did
  5. At an MVP Summit on the Microsoft campus, the one and only Scott Guthrie, the Gu himself, showed a very early prototype of ASP.Net MVC to a handful of us and I was intrigued. That continued onward through the official unveiling of MVC at the very first ALT.Net open spaces event in Austin in ’07.
  6. A few collaborators and I decided that early ASP.Net MVC was too high ceremony and went all “Captain Ahab” trying to make an alternative, open source framework called FubuMVC succeed — all while NancyFx, a “yet another Sinatra clone,” became far more successful years before Microsoft finally got around to their own inevitable Sinatra clone (Minimal API)
  7. After .NET Core came along and made .NET a helluva lot better ecosystem, I decided that whatever, MVC Core is fine, it’s not going to be the biggest problem on our project, and if the client wants to use it, there’s no need to be upset about it. It’s fine, no really.
  8. MVC Core has gotten some incremental improvements over time that made it lower ceremony than earlier ASP.Net MVC, and that’s worth calling out as a positive
  9. People working with MVC Core started running into the problem of bloated controllers, and started using early MediatR as a way to kind of, sort of manage controller bloat by offloading it into focused command handlers. I mocked that approach mercilessly, but that was partially because of how awful a time I had helping folks do absurdly complicated middleware schemes with MediatR using StructureMap or Lamar (MVC Core + MediatR is probably worthwhile as a forcing function to avoid the controller bloat problems with MVC Core by itself)
  10. I worked on several long-running codebases built with MVC Core based on Clean Architecture templates that were ginormous piles of technical debt, and I absolutely blame MVC Core as a contributing factor for that
  11. I’m back to mildly disliking MVC Core (and I’m outright hostile to Clean/Onion templates). Not that you can’t write maintainable systems with MVC Core, but I think that its idiomatic usage can easily lead to unmaintainable systems. Let’s just say that I don’t think that MVC Core — and especially combined with some kind of Clean/Onion Architecture template as it very commonly is out in the wild — leads folks to the “pit of success” in the long run

See you at CodeMash 2024!

Hey folks, I’ll be making my first CodeMash appearance since before the pandemic. I’m happy to talk your ears off about the Critter Stack tools, but also just to connect with the technical community and learn more about what other folks are doing these days. See you there this week!

I’m giving a pair of talks this time out:

A Contrarian View of Software Architecture

In this talk, Jeremy will cast some aspersions on some of the industry best practices that teams adopt in order to create maintainable software, but can ironically be the very cause of debilitating technical debt. Jeremy will also attempt to explain a vision of how to sidestep these problems and other alternatives for codebase organization. And all with many pop culture references that are too old for his college age son to recognize.

CQRS with Event Sourcing using the “Critter Stack”

The “Critter Stack” tools (Marten and Wolverine) combine to form a very low ceremony approach to building software using a CQRS architectural approach combined with event sourcing for the persistence. In this talk, Jeremy will show how to use the Critter Stack to build a small web service. In particular, this talk tries to prove that the Critter Stack leads to simple code that is well suited for both fine grained unit testing of the business rules and efficient automated integration testing of the whole application. He’ll also show you how the Critter Stack fits very well into a “Vertical Slice Architecture” that sidesteps the technical complexity of the “Clean Architecture” approaches that Jeremy is going to ruthlessly mock in his first talk.

My Technical Plans and Aspirations for 2024

I’ve written posts like this in early January over the past several years laying out my grand hopes for my OSS work in the new year, and if you’re curious, you can check out my theoretical plans from 2021, 2022, and 2023. I’m always wrong of course, and there’s going to be a few things on my list this year that are repeats from the past couple years. I’m still going to claim my superpower as an OSS developer is having a much longer attention span than the average developer, but that cuts both ways.

But first…

My 2023 in Review

I had a huge year in 2023 by any possible measure. After 15 years of constant effort and a couple hurtful false starts along the way, I started a new company named JasperFx Software LLC as both a software development consultancy and a way to build a sustainable business model around the “Critter Stack” tools of Marten and Wolverine. Let me stop here and say how much I appreciate our early customers, and I’m looking forward to expanding on those relationships in the new year!

Technically speaking, I was most excited — and disappointed a little bit about how long it took — for the Wolverine 1.0 release this summer! That was especially gratifying for me because Wolverine took 5-6 years and a pretty substantial reboot and rename in 2022 to fully gestate into what it is now. Wolverine might not be exploding in download numbers (yet), but it’s attracted a great community of early users and we’ve collectively pushed Wolverine to 1.13 now with a ton of new features and usability improvements that weren’t on my radar a year ago at all.

Personally, my highlights were finally meeting my collaborator and friend Oskar Dudycz in real life at NDC Oslo — which was supposed to have happened years earlier, but a certain worldwide pandemic delayed that for a few years. I also enjoyed my trip to the KCDC conference last year, and turned that into a road trip with my older son to visit family along the way.

On to…

The Grand Plans for 2024!

My most important goal for 2024 is to reduce my personal stress level that’s been a fallout from spinning up the new company. Wish me luck on that one.

First, let’s start with what’s already heavily in flight and the work JasperFx is doing for clients in January/February this year:

  • Marten 7.0 is moving along pretty well right now. The biggest chunk of work so far has been the completely revamped LINQ support, which both broadens the range of supported LINQ use cases and generates much more efficient SQL for nested child collection searching. Besides adding a lot more polish overall, we’re making improvements to Marten’s performance by utilizing newer Npgsql features like data sources, finally building out a native “partial” update model that doesn’t depend on JavaScript running in PostgreSQL, and revamping Marten’s retry functionality. And that doesn’t even address improvements to the event store functionality.
  • There’ll also be a Wolverine 2.0 early this year, but I think that will mostly be about integrating Wolverine with Marten 7.0 and probably dropping .NET 6 support.
  • A JasperFx customer has engaged us to build out functionality to be able to utilize and manage new tenant databases inside a “database per tenant” multi-tenancy strategy using both Marten and Wolverine without requiring any downtime.
  • For a different JasperFx customer, we’re finally building in the long planned ability to scale Marten’s event store features to “really big” workloads by being able to adaptively distribute projection work across the running nodes within a cluster instead of today’s “hot/cold” failover approach. That’s been on my list of goals for the New Year for several years running, but it finally happens early in 2024.
  • As part of the previous bullet, we’re building in the ability to do zero downtime deployments of changes to event projections. As part of those plans, we’re also aiming for true blue/green deployment capabilities for Marten’s event sourcing feature set.
  • “First class subscriptions” from Marten’s event store through Wolverine’s messaging features

Those last two bullet points bring me to JasperFx’s plans for world domination (or at least enough revenue to keep growing).

I know some folks are annoyed at our potential push toward an open core model that puts some advanced features behind a paid license. I understand that, but I think that option will create a more sustainable environment for the open core to continue. My personal dividing line is that any feature that is almost automatically going to require us to help users utilize or configure it, or that exists to support very large transaction throughput, absolutely deserves to be paid for.

The details aren’t firmed up by any means, but the “Critter Stack” is moving to an open core model where the existing libraries continue under the MIT license while we also offer a new set of functionality for complex usages, advanced monitoring and management, and improved scalability. Tentatively, we’re shamelessly calling this “CritterStackPro.” The first couple features are all related to the event sourcing scalability and deployment capabilities our largest customer has commissioned, as I described up above. I’m very excited to see this all come to fruition after years of planning and discussions.

Beyond that, we’ve got some ideas and plenty of user feedback about what would be valuable for a potential management console for the “Critter Stack” tools.

Other Vaguely Thought Up Aspirations

  • Continue to push Marten & Wolverine to be the best possible technical platform for building event driven architectures
  • I can’t speak to any specifics yet (’cause I don’t know them anyway), but there will be some improved integration recipes for Marten/Wolverine with Hot Chocolate both via user request and through a JasperFx Software customer
  • Add more robust sample applications and tutorials for both Marten and Wolverine to our various websites
  • Oskar already has a new code name for our next “Critter Stack” tool. I’m not saying that will be Marten-like event sourcing support and first class Wolverine support using Sql Server, but I’m not “not saying” that’s what it would be either.
  • I’m still somewhat interested in an optimized serverless mode for both Marten and Wolverine to really leverage AOT compilation, but man, that’s going to take some effort
  • Somehow, some way, get or build out better infrastructure for the kind of automated integration testing we do with Marten and Wolverine

And that’s enough dreaming for now. I’m looking forward to seeing how the Critter Stack tools and our community continues to grow and progress in 2024. Happy New Year’s everyone!

Building a Critter Stack Application: Durable Outbox Messaging and Why You Care!

As we layer in new technical concepts from both Wolverine and Marten to build out our incident tracking help desk API, let’s revisit this message handler from the last post, which both saved data and published a message to an asynchronous, local queue that would act upon the newly saved data at some point:

public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();
     
    [AggregateHandler]
    // The object? as return value will be interpreted
    // by Wolverine as appending one or zero events
    public static async Task<object?> Handle(
        CategoriseIncident command, 
        IncidentDetails existing,
        IMessageBus bus)
    {
        if (existing.Category != command.Category)
        {
            // Send the message to any and all subscribers to this message
            await bus.PublishAsync(
                new TryAssignPriority { IncidentId = existing.Id });
            return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }
 
        // Wolverine will interpret this as "do no work"
        return null;
    }
}

To recap, that message handler is potentially appending an IncidentCategorised event to an Incident event stream and publishing a command message named TryAssignPriority that will trigger a downstream action to try to assign a new priority to our Incident.

This relatively simple message handler (and we’ll make it even simpler in a later post in this series) creates a host of potential problems for our system:

  • In a naive usage of messaging tools, there’s a race condition between the outbound `TryAssignPriority` message being picked up by its handler and the database changes getting committed to the database. I have seen this cause nasty, hard to reproduce bugs in real life production applications when, once in a while, the message is processed before the database changes are made and the system behaves incorrectly because the expected data has not yet been committed by the original command.
  • Maybe the actual message sending fails, but the database changes succeed, so the system is in an inconsistent state.
  • Maybe the outgoing message is happily published successfully, but the database changes fail, so that when the TryAssignPriority message is handled, it’s working against old system state.
  • Even if everything succeeds perfectly, the outgoing message should never actually be published until the transaction is complete.

To be clear, even without the usage of the outbox feature we’re about to use, Wolverine will apply an “in memory outbox” in message handlers such that all the messages published through IMessageBus.PublishAsync()/SendAsync()/etc. will be held in memory until the successful completion of the message handler. That by itself is enough to prevent the race condition between the database changes and the outgoing messages.

At this point, let’s introduce Wolverine’s transactional outbox support, which was built specifically to solve or prevent the potential problems I listed up above. In this case, Wolverine has transactional outbox & inbox support built into its integrations with PostgreSQL and Marten.

To rewind a little bit, in an earlier post where we first introduced the Marten + Wolverine integration, I had added a call to IntegrateWithWolverine() to the Marten configuration in our Program file:

using Wolverine.Marten;
 
var builder = WebApplication.CreateBuilder(args);
 
builder.Services.AddMarten(opts =>
{
    // This would be from your configuration file in typical usage
    opts.Connection(Servers.PostgresConnectionString);
    opts.DatabaseSchemaName = "wolverine_middleware";
})
    // This is the wolverine integration for the outbox/inbox,
    // transactional middleware, saga persistence we don't care about
    // yet
    .IntegrateWithWolverine()
     
    // Just letting Marten build out known database schema elements upfront
    // Helps with Wolverine integration in development
    .ApplyAllDatabaseChangesOnStartup();

Among other things, the call to IntegrateWithWolverine() up above directs Wolverine to use the PostgreSQL database for Marten as the durable storage for incoming and outgoing messages as part of Wolverine’s transactional inbox and outbox. The basic goal of this subsystem is to create consistency (really “eventual consistency“) between database transactions and outgoing messages without having to resort to endlessly painful distributed transactions.

Now, we’ve got another step to take. As of right now, Wolverine makes a determination of whether or not to use durable outbox storage based on the destination of the outgoing message — with the theory that teams might easily want to mix and match durable messaging and less resource intensive “fire and forget” messaging within the same application. In this help desk service, we’ll make that easy and just say that all message processing in local queues (we set up TryAssignPriority to be handled through a local queue in the previous post) should be durable. In the UseWolverine() configuration, I’ll add this code to do that:

builder.Host.UseWolverine(opts =>
{
    // More configuration...

    // Automatic transactional middleware
    opts.Policies.AutoApplyTransactions();
    
    // Opt into the transactional inbox for local 
    // queues
    opts.Policies.UseDurableLocalQueues();
    
    // Opt into the transactional inbox/outbox on all messaging
    // endpoints
    opts.Policies.UseDurableOutboxOnAllSendingEndpoints();

    // Set up from the previous post
    opts.LocalQueueFor<TryAssignPriority>()
        // By default, local queues allow for parallel processing with a maximum
        // parallel count equal to the number of processors on the executing
        // machine, but you can override the queue to be sequential and single file
        .Sequential()

        // Or add more to the maximum parallel count!
        .MaximumParallelMessages(10);
});

I (Jeremy) may very well declare this “endpoint by endpoint” declaration of durability to have been a big mistake because it confused some users, and vote to change this in a later version of Wolverine.

With this outbox functionality in place, the messaging and transaction workflow behind the scenes of that handler shown above is to:

  1. When the outgoing TryAssignPriority message is published, Wolverine will “route” that message into its internal Envelope structure that includes the message itself and all the necessary metadata and information Wolverine would need to actually send the message later
  2. The outbox integration will append the outgoing message as a pending operation to the current Marten session
  3. The IncidentCategorised event will be appended to the current Marten session
  4. The Marten session is committed (IDocumentSession.SaveChangesAsync()), which persists the new event and a copy of the outgoing Envelope into the outbox or inbox tables (scheduled messages or messages to local queues are persisted in the incoming table) in one single, batched database command within a native PostgreSQL transaction.
  5. Assuming the database transaction succeeds, the outgoing messages are “released” to Wolverine’s outgoing message publishing in memory (we’re coming back to that last point in a bit)
  6. Once Wolverine is able to successfully publish the message to the outgoing transport, it will delete the database table record for that outgoing message.

The 4th point is important I think. The close integration between Marten & Wolverine allows for more efficient processing by combining the database operations to minimize database round trips. In cases where the outgoing message transport is also batched (Azure Service Bus or AWS SQS for example), the database command to delete messages is also optimized for one call using PostgreSQL array support. I guess the main point of bringing this up is just to say there’s been quite a bit of thought and outright micro-optimizations done to this infrastructure.

But what about…?

  • the process is shut down cleanly? Wolverine tries to “drain” all in flight work first, and then “release” that process’s ownership of the persisted messages
  • the process crashes before messages floating around the local queues or outgoing message publishing finishes? Wolverine is able to detect a “dormant node” and reassign the persisted incoming and outgoing messages to be processed by another node. Or in the case of a single node, restart that work when the process is restarted.
  • the Wolverine tables don’t yet exist in the database? Wolverine has similar database management to Marten (it’s all the shared Weasel library doing that behind the scenes) and will happily build out missing tables in its default setting
  • an application using a database per tenant multi-tenancy strategy? Wolverine creates separate inbox or outbox storage in each tenant database. It’s complicated and took quite a while to build, but it works. If no tenant is specified, the inbox/outbox in a “default” database is used
  • I need to use the outbox approach for consistency outside of a message handler, like when handling an HTTP request that happens to make both database changes and publish messages? That’s a really good question, and arguably one of the best reasons to use Wolverine over other .NET messaging tools, because as we’ll see in later posts, that’s perfectly possible and quite easy. There is a recipe for using the Wolverine outbox functionality with MVC Core or Minimal API shown here, and sketched just below.
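Here’s a rough sketch of that Minimal API recipe, going from memory of the Wolverine documentation (the route and the IncidentId property on the command are hypothetical here), with IMartenOutbox and its Enroll() method doing the heavy lifting:

app.MapPost("/api/incidents/categorise", async (
    CategoriseIncident command,
    IDocumentSession session,
    IMartenOutbox outbox) =>
{
    // Enroll the current Marten session in the Wolverine outbox
    outbox.Enroll(session);

    // This message is only persisted (and later sent) if the session
    // below is successfully committed
    await outbox.PublishAsync(new TryAssignPriority { IncidentId = command.IncidentId });

    // Committing the session persists the outgoing envelope in the
    // same transaction as any other pending Marten work
    await session.SaveChangesAsync();
});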

Summary and What’s Next

The outbox (and closely related inbox) support is hugely important inside of any system that uses asynchronous messaging as a way of creating consistency and resiliency. Wolverine’s implementation is significantly different (and honestly more complicated) than typical implementations that depend on just polling from an outbound database table. That’s a positive in some ways because we believe that Wolverine’s approach is more efficient and will lead to greater throughput.

There is also similar inbox/outbox functionality and optimizations for Wolverine with EF Core using either PostgreSQL or Sql Server as the backing storage. In the future, I hope to see the EF Core and Sql Server support improve, but for right now, the Marten integration is getting the most attention and usage. I’d also love to see Wolverine grow to include support for alternative databases, with Azure CosmosDb and AWS Dynamo Db being leading contenders. We’ll see.

As for what’s next, let me figure out what sounds easy for the next post in January. In the meantime, Happy New Year’s everybody!

Wolverine’s HTTP Gets a Lot Better at OpenAPI (Swagger)

Hey, did you know that JasperFx Software is ready with formal support plans and consulting for Marten and Wolverine? Reach us anytime at sales@jasperfx.net or on Discord!

I just published Wolverine 1.13.0 this evening with some significant improvements (see the release notes here). Beyond the normal scattering of bug fixes (and some significant improvements to the MQTT support in Wolverine for a JasperFx Software client who we’re helping build an IoT system), the main headline is that Wolverine does a substantially better job generating OpenAPI documentation for its HTTP endpoint model.

When I’m building web services of any kind I tend to lean very hard into doing integration testing with Alba, and because of that, I also tend not to use Swashbuckle or an equivalent tool very often during development. That has apparently been a blind spot for me in building Wolverine.HTTP so far. To play out a typical conversation I frequently have with other server side .NET developers about tooling for web services, my side goes something like:

  1. MVC Core by itself — and this is hugely exacerbated by unfortunately popular prescriptive architectural patterns that push teams toward NounController / NounService / NounRepository code organization — can easily lead to unmaintainable code in bloated controller classes and plenty of work for software consultants who get brought in later to clean up after the system wildly outgrew the original team’s “Clean Architecture” approach
  2. I’m not convinced that Minimal API is any better for larger applications
  3. The strategy of MVC Core controllers delegating to an inner “mediator” tool may help divide the code into more maintainable units, but it adds what I think is an unacceptable level of extra code ceremony. Also exacerbated by prescriptive architectures
  4. You should use Wolverine.HTTP! It’s much lower ceremony code than the “controllers + mediator” strategy, but still sets you up for a vertical slice architecture! And it integrates well with Marten or Wolverine messaging!

Other developers: This all sounds great! Pause. Hey, the web services with this thing seem to work just fine, but man, the Swashbuckle/NSwag/Angular client generation is all kinds of not good! I’m going back to “Wolverine as MediatR”.

To which I reply: no more of that after today, because the Wolverine HTTP OpenAPI generation just took a huge leap forward with the 1.13 release!

Here’s a sample of what I mean. From the Wolverine.HTTP test suite, here’s an endpoint method that uses Marten to load an Invoice document, modify it, then save it:

    [WolverinePost("/invoices/{invoiceId}/pay")]
    public static IMartenOp Pay([Document] Invoice invoice)
    {
        invoice.Paid = true;
        return MartenOps.Store(invoice);
    }

The [Document] attribute tells Wolverine to load the Invoice from Marten, and part of its convention will match on the invoiceId route argument from the route pattern. That failed before in a couple ways:

  1. Swashbuckle couldn’t be convinced that the Invoice argument wasn’t the request body
  2. If you omitted a Guid invoiceId argument from the method signature (leaning on the [Document] convention instead), Swashbuckle didn’t see invoiceId as a route parameter and wouldn’t let you specify it in the Swashbuckle page.
  3. Swashbuckle definitely didn’t get that IMartenOp is a specialized Wolverine side effect that shouldn’t be used as the response body.

Now though, that endpoint looks like this in Swashbuckle:

Which is now correct and actually usable! (The 404 is valid because there’s a route argument and that status is returned if the Invoice referred to by the invoiceId route argument does not exist).

To call out some improvements for Wolverine.HTTP users, the Swashbuckle generation now handles at least:

  • Route arguments that are used by Wolverine, but not necessarily in the main method signature. So no stupid, unused [FromRoute] string id method parameters
  • Querystring arguments are reflected in the Swashbuckle page
  • [FromHeader] arguments are reflected in Swashbuckle
  • HTTP endpoints that return some kind of tuple correctly show the response body if there is one — and that’s a commonly used and powerful capability of Wolverine’s HTTP endpoints that previously fouled up the OpenAPI generation
  • The usage of [EmptyResponse] correctly sets up the 204 status code behavior with no extraneous 200 or 404 status codes coming in by default (see the sketch after this list)
  • Ignoring method injected service parameters in the main method
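To illustrate just one of those, here’s a hypothetical endpoint sketch with [EmptyResponse] (the Invoice document and the MartenOps.Delete() usage are assumptions on my part):

    // [EmptyResponse] tells Wolverine (and now the OpenAPI generation)
    // that this endpoint returns a 204 with no response body, while the
    // IMartenOp return value is strictly a side effect
    [WolverineDelete("/invoices/{invoiceId}")]
    [EmptyResponse]
    public static IMartenOp Delete([Document] Invoice invoice)
    {
        return MartenOps.Delete(invoice);
    }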

For a little background, after getting plenty of helpful feedback from Wolverine users, I finally took some more serious time to investigate the problems and root causes. After digging much deeper into the AspNetCore and Swashbuckle internals, I came to the conclusion that the OpenAPI internals in AspNetCore are batshit crazy, or at least far too hard coded to MVC Core, and that Wolverine absolutely had to have its own provider for generating OpenAPI documents off of its own semantic model. Fortunately, AspNetCore and Swashbuckle are both open source, so I could easily get to the source code to reverse engineer what they do under the covers (plus JetBrains Rider is a rock star at disassembling code on the fly). Wolverine.HTTP 1.13 now registers its own strategy for generating the OpenAPI documentation for Wolverine endpoints and keeps the built in MVC Core-centric strategy from applying to those same endpoints.

I’m sure there will be other issues over time, but so far, this has addressed every known issue with our OpenAPI generation. I’m hoping this goes a long way toward removing impediments to more users adopting Wolverine.HTTP because, as I’ve said before, I think the Wolverine model leads to much lower ceremony code, better testability overall, and potentially to significantly better maintainability of larger systems that today turn into huge messes with MVC Core.