Building a Critter Stack Application: Easy Unit Testing with Pure Functions

Hey, did you know that JasperFx Software is ready to offer formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to helping you succeed with your software development projects on a consulting basis, whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions (this post)
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

Let’s start this post by making a bold statement that I’ll probably regret, but will still spend the rest of this post trying to back up: with the Critter Stack, you can write the behavioral logic of your system as pure functions that can be unit tested without any mocks, stubs, or other fakes.

Remembering the basic flow of our incident tracking help desk service in this series, and starting in the middle with the “Categorise Incident” command, our system’s workflow is something like this:

  1. A technician will send a request to change the category of the incident
  2. If the request would actually change the category, the system will append a new event to record that change, and also publish a new command message to try to automatically assign a priority to the incident based on the customer data
  3. When the system handles that new “Try Assign Priority” command, it will look at the customer’s settings and likewise append another event to record the change of priority for the incident. If the incident’s priority changes, it will also publish a message to an external “Notification Service” — but for this post, let’s just worry about whether we’re correctly publishing the right message

In an earlier post, I showed this version of a message handler for the CategoriseIncident command:

public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();
      
    [AggregateHandler]
    // The object? as return value will be interpreted
    // by Wolverine as appending one or zero events
    public static async Task<object?> Handle(
        CategoriseIncident command, 
        IncidentDetails existing,
        IMessageBus bus)
    {
        if (existing.Category != command.Category)
        {
            // Send the message to any and all subscribers to this message
            await bus.PublishAsync(
                new TryAssignPriority { IncidentId = existing.Id });
            return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }
  
        // Wolverine will interpret this as "do no work"
        return null;
    }
}

Notice that this handler is injecting the Wolverine IMessageBus service into the handler method. We could test this code as is with a “fake” for IMessageBus just to verify whether the expected outgoing TryAssignPriority message goes out or not. Helpfully, Wolverine even supplies a “spy” version of IMessageBus called TestMessageContext that can be used in unit tests as a stand-in just to record what the outgoing messages were.
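Just to make that concrete, here’s a hedged sketch of what such a spy-based test might look like. I’m writing this from memory, so treat the exact member names on TestMessageContext (the Sent collection of envelopes below) as assumptions to verify against the Wolverine testing documentation:

[Fact]
public async Task publishes_try_assign_priority_when_the_category_changes()
{
    // TestMessageContext stands in for IMessageBus and records
    // outgoing messages instead of actually sending them
    var spy = new TestMessageContext();

    var details = new IncidentDetails(
        Guid.NewGuid(), 
        Guid.NewGuid(), 
        IncidentStatus.Closed, 
        Array.Empty<IncidentNote>(),
        IncidentCategory.Hardware);

    var command = new CategoriseIncident { Category = IncidentCategory.Database };

    await CategoriseIncidentHandler.Handle(command, details, spy);

    // Interrogate the spy for the expected outgoing message
    spy.Sent.Select(x => x.Message)
        .OfType<TryAssignPriority>()
        .ShouldHaveSingleItem();
}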

My strong preference though is to use Wolverine’s concept of cascading messages to write a pure function such that the behavioral logic can be tested without any mocks, stubs, or other fakes. In the sample code above, we had been using Wolverine as “just” a “Mediator” within an MVC Core controller. This time around, let’s ditch the unnecessary “Mediator” ceremony and use a Wolverine HTTP endpoint for the same functionality. In this case we can write the same functionality as a pure function like so:

public static class CategoriseIncidentEndpoint
{
    [WolverinePost("/api/incidents/categorise"), AggregateHandler]
    public static (Events, OutgoingMessages) Post(
        CategoriseIncident command, 
        IncidentDetails existing, 
        User user)
    {
        var events = new Events();
        var messages = new OutgoingMessages();
        
        if (existing.Category != command.Category)
        {
            // Append a new event to the incident
            // stream
            events += new IncidentCategorised
            {
                Category = command.Category,
                UserId = user.Id
            };

            // Send a command message to try to assign the priority
            messages.Add(new TryAssignPriority
            {
                IncidentId = existing.Id,
                UserId = user.Id
            });
        }

        return (events, messages);
    }
}

In the endpoint above, we’re “pushing” all of the required inputs for our business logic into the Post() method, which makes a decision about what state changes should be captured and what additional actions should be taken through outgoing, cascaded messages.

A couple notes about this code:

  • It’s using the aggregate handler workflow we introduced in an earlier post to “push” the IncidentDetails aggregate for the incident stream into the method. We’ll need this information to “decide” what to do next
  • The Events type is a Wolverine construct that tells Wolverine “hey, the objects in this collection are meant to be appended as events to the event stream for this aggregate.”
  • Likewise, the OutgoingMessages type is a Wolverine construct that — wait for it — tells Wolverine that the objects contained in that collection should be published as cascading messages after the database transaction succeeds
  • The Marten + Wolverine transactional middleware is calling Marten’s IDocumentSession.SaveChangesAsync() to commit the logical transaction, and also dealing with the transaction outbox mechanics for the cascading messages from the OutgoingMessages collection.

Alright, with all that said, let’s look at a unit test for a CategoriseIncident command that results in the category being changed:

    [Fact]
    public void raise_categorized_event_if_changed()
    {
        var command = new CategoriseIncident
        {
            Category = IncidentCategory.Database
        };

        var details = new IncidentDetails(
            Guid.NewGuid(), 
            Guid.NewGuid(), 
            IncidentStatus.Closed, 
            Array.Empty<IncidentNote>(),
            IncidentCategory.Hardware);

        var user = new User(Guid.NewGuid());
        var (events, messages) = CategoriseIncidentEndpoint.Post(command, details, user);

        // There should be one appended event
        var categorised = events.Single()
            .ShouldBeOfType<IncidentCategorised>();
        
        categorised
            .Category.ShouldBe(IncidentCategory.Database);
        
        categorised.UserId.ShouldBe(user.Id);

        // And there should be a single outgoing message
        var message = messages.Single()
            .ShouldBeOfType<TryAssignPriority>();
        
        message.IncidentId.ShouldBe(details.Id);
        message.UserId.ShouldBe(user.Id);
    }

In real life, I’d probably opt to break that unit test into a BDD-like context and individual tests to assert the expected event(s) being appended and the expected outgoing messages, but this is conceptually easier and I didn’t sleep well last night, so this is what you get!

Let’s move on to the message handler for the TryAssignPriority message, and also make this a pure function so we can easily test the behavior:

public static class TryAssignPriorityHandler
{
    // Wolverine will call this method before the "real" Handler method,
    // and it can "magically" connect that the Customer object should be delivered
    // to the Handle() method at runtime
    public static Task<Customer?> LoadAsync(IncidentDetails details, IDocumentSession session)
    {
        return session.LoadAsync<Customer>(details.CustomerId);
    }

    // There's some database lookup at runtime, but I've isolated that above, so the
    // behavioral logic that "decides" what to do is a pure function below. 
    [AggregateHandler]
    public static (Events, OutgoingMessages) Handle(
        TryAssignPriority command, 
        IncidentDetails details,
        Customer customer)
    {
        var events = new Events();
        var messages = new OutgoingMessages();

        if (details.Category.HasValue && customer.Priorities.TryGetValue(details.Category.Value, out var priority))
        {
            if (details.Priority != priority)
            {
                events.Add(new IncidentPrioritised(priority, command.UserId));

                if (priority == IncidentPriority.Critical)
                {
                    messages.Add(new RingAllTheAlarms(command.IncidentId));
                }
            }
        }

        return (events, messages);
    }
}

I’d ask you to notice the LoadAsync() method above. It’s part of the logical handler workflow, but Wolverine is letting us keep that separate from the main “decider” Handle() method. We’d have to test the entire handler with an integration test eventually, but we can happily write fast-running, fine-grained unit tests on the expected behavior by just “pushing” inputs into the Handle() method and measuring the events and outgoing messages by checking the return values.
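For example, here’s a sketch of one such test. I’m guessing at the exact constructor and property shapes of IncidentDetails and Customer (the real types in the sample application have more members than I’m showing), but the mechanics of the test are the point:

[Fact]
public void assign_critical_priority_and_ring_all_the_alarms()
{
    var command = new TryAssignPriority
    {
        IncidentId = Guid.NewGuid(),
        UserId = Guid.NewGuid()
    };

    // Assuming the same constructor shape as the earlier test sample
    var details = new IncidentDetails(
        command.IncidentId, 
        Guid.NewGuid(), 
        IncidentStatus.Closed, 
        Array.Empty<IncidentNote>(),
        IncidentCategory.Database);

    // The customer's settings say that database incidents are critical
    var customer = new Customer
    {
        Priorities = new Dictionary<IncidentCategory, IncidentPriority>
        {
            [IncidentCategory.Database] = IncidentPriority.Critical
        }
    };

    // Pure function, so no mocks in sight
    var (events, messages) = TryAssignPriorityHandler.Handle(command, details, customer);

    // The priority change should be recorded as a new event
    events.Single().ShouldBeOfType<IncidentPrioritised>();

    // And because the new priority is critical, the alarm message
    // should be cascaded as an outgoing message
    messages.Single().ShouldBeOfType<RingAllTheAlarms>()
        .IncidentId.ShouldBe(command.IncidentId);
}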

Summary and What’s Next

Wolverine’s approach has always been driven by the desire to make your application code as testable as possible. Originally that meant to just keep the framework (Wolverine itself) out of your application code as much as possible. Later on, the Wolverine community was influenced by more Functional Programming techniques and Jim Shore’s paper on Testing without Mocks.

Specifically, Wolverine embraced the idea of the “A-Frame Architecture,” with Wolverine itself in the role of the mediator/controller/conductor that coordinates between infrastructural concerns like Marten and your own business logic code in message handlers or HTTP endpoint methods, without creating a direct coupling between your behavioral logic code and your infrastructure.

If you take advantage of Wolverine features like cascading messages, side effects, and compound handlers to decompose your system in a more FP-esque way while letting Wolverine handle the coordination, you can arrive at much more testable code.

I said earlier that I’d get to Rabbit MQ messaging, and I’ll get around to that soon. To fit in with one of my CodeMash 2024 talks this Friday, I might first take a little side trip into how the “Critter Stack” plays well inside of a low ceremony vertical slice architecture as I get ready to absolutely blast away at the “Clean/Onion Architecture” this week.

Building a Critter Stack Application: Wolverine HTTP Endpoints

Hey, did you know that JasperFx Software is ready to offer formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to helping you succeed with your software development projects on a consulting basis, whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints (this post)
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

Heretofore in this series, I’ve been using ASP.Net MVC Core controllers anytime we’ve had to build HTTP endpoints for our incident tracking help desk system, in order to introduce new concepts a little more slowly.

If you would, let’s refer back to an earlier incarnation of an HTTP endpoint to handle our LogIncident command from an earlier post in this series:

public class IncidentController : ControllerBase
{
    private readonly IDocumentSession _session;
 
    public IncidentController(IDocumentSession session)
    {
        _session = session;
    }
 
    [HttpPost("/api/incidents")]
    public async Task<IResult> Log(
        [FromBody] LogIncident command
        )
    {
        var userId = currentUserId();
        var logged = new IncidentLogged(command.CustomerId, command.Contact, command.Description, userId);
 
        var incidentId = _session.Events.StartStream(logged).Id;
        await _session.SaveChangesAsync(HttpContext.RequestAborted);
 
        return Results.Created("/incidents/" + incidentId, incidentId);
    }
 
    private Guid currentUserId()
    {
        // let's say that we do something here that "finds" the
        // user id as a Guid from the ClaimsPrincipal
        var userIdClaim = User.FindFirst("user-id");
        if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var id))
        {
            return id;
        }
 
        throw new UnauthorizedAccessException("No user");
    }
}

Just to be as clear as possible here, the Wolverine HTTP endpoints feature introduced in this post can be mixed and matched with MVC Core and/or Minimal API or even FastEndpoints within the same application and routing tree. I think the ASP.Net team deserves some serious credit for making that last sentence a fact.

Today though, let’s use Wolverine HTTP endpoints and rewrite that controller method above the “Wolverine way.” To get started, add a NuGet reference to the help desk service like so:

dotnet add package WolverineFx.Http

Next, let’s break into our Program file and add Wolverine endpoints to our routing tree near the bottom of the file like so:

app.MapWolverineEndpoints(opts =>
{
    // We'll add a little more in a bit...
});

// Just to show where the above code is within the context
// of the Program file...
return await app.RunOaktonCommands(args);

Now, let’s make our first cut at a Wolverine HTTP endpoint for the LogIncident command, but I’m purposely going to do it without introducing a lot of new concepts, so please bear with me a bit:

public record NewIncidentResponse(Guid IncidentId) 
    : CreationResponse("/api/incidents/" + IncidentId);

public static class LogIncidentEndpoint
{
    [WolverinePost("/api/incidents")]
    public static NewIncidentResponse Post(
        // No [FromBody] stuff necessary
        LogIncident command,
        
        // Service injection is automatic,
        // just like message handlers
        IDocumentSession session,
        
        // You can take in an argument for HttpContext
        // or immediate members of HttpContext
        // as method arguments
        ClaimsPrincipal principal)
    {
        // Some ugly code to find the user id
        // within a claim for the currently authenticated
        // user
        Guid userId = Guid.Empty;
        var userIdClaim = principal.FindFirst("user-id");
        if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var claimValue))
        {
            userId = claimValue;
        }
        
        var logged = new IncidentLogged(command.CustomerId, command.Contact, command.Description, userId);

        var id = session.Events.StartStream<Incident>(logged).Id;

        return new NewIncidentResponse(id);
    }
}

Here are a few salient facts about the code above to explain what it’s doing:

  • The [WolverinePost] attribute tells Wolverine that hey, this method is an HTTP handler, and Wolverine will discover this method and add it to the application’s endpoint routing tree at bootstrapping time.
  • Just like Wolverine message handlers, the endpoint methods are flexible and Wolverine generates code around your code to mediate between the raw HttpContext for the request and your code
  • We have already enabled Marten transactional middleware for our message handlers in an earlier post, and that happily applies to Wolverine HTTP endpoints as well. That helps make our endpoint method be just a synchronous method with the transactional middleware dealing with the ugly asynchronous stuff for us.
  • You can “inject” HttpContext and its immediate children into the method signatures as I did with the ClaimsPrincipal up above
  • Method injection is automatic without any silly [FromServices] attributes, and that’s what’s happening with the IDocumentSession argument
  • The LogIncident parameter is assumed to be the HTTP request body due to being the first argument, and it will be deserialized from the incoming JSON in the request body just like you’d probably expect
  • The NewIncidentResponse type is roughly the equivalent of using Results.Created() in Minimal API to create a response body with the URL of the newly created Incident stream and an HTTP status code of 201 for “Created.” What’s different about Wolverine.HTTP is that it can infer OpenAPI documentation from the signature of that type without requiring you to pollute your code with [ProducesResponseType] attributes on the method to get a “proper” OpenAPI document for the endpoint.

Moving on, that user id detection from the ClaimsPrincipal looks a little bit ugly to me, and likely to be repetitive. Let’s ameliorate that by introducing Wolverine’s flavor of HTTP middleware and move that code to this class:

// Using the custom type makes it easier
// for the Wolverine code generation to route
// things around. I'm not ashamed.
public record User(Guid Id);

public static class UserDetectionMiddleware
{
    public static (User, ProblemDetails) Load(ClaimsPrincipal principal)
    {
        var userIdClaim = principal.FindFirst("user-id");
        if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var id))
        {
            // Everything is good, keep on trucking with this request!
            return (new User(id), WolverineContinue.NoProblems);
        }
        
        // Nope, nope, nope. We got problems, so stop the presses and emit a ProblemDetails response
        // with a 400 status code telling the caller that there's no valid user for this request
        return (new User(Guid.Empty), new ProblemDetails { Detail = "No valid user", Status = 400});
    }
}

Do note the usage of ProblemDetails in that middleware. If there is no user-id claim on the ClaimsPrincipal, we’ll abort the request by writing out the ProblemDetails stating there’s no valid user. This pattern is baked into Wolverine.HTTP to help create one-off request validations. We’ll utilize this quite a bit more later.
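And because Load() is itself a pure function, the middleware gets the same cheap, state-based unit testing treatment as the endpoints. A quick sketch:

public class UserDetectionMiddleware_behavior
{
    [Fact]
    public void load_the_user_from_a_valid_claim()
    {
        var id = Guid.NewGuid();
        var principal = new ClaimsPrincipal(new ClaimsIdentity(new[]
        {
            new Claim("user-id", id.ToString())
        }));

        var (user, problems) = UserDetectionMiddleware.Load(principal);

        user.Id.ShouldBe(id);

        // "No problems" tells Wolverine to continue with the request
        problems.ShouldBeSameAs(WolverineContinue.NoProblems);
    }

    [Fact]
    public void reject_a_request_with_no_user_id_claim()
    {
        var principal = new ClaimsPrincipal(new ClaimsIdentity());

        var (_, problems) = UserDetectionMiddleware.Load(principal);

        // The middleware should stop the request with a 400
        problems.Status.ShouldBe(400);
    }
}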

Next, I need to add that new bit of middleware to our application. As a shortcut, I’m going to just add it to every single Wolverine HTTP endpoint by breaking back into our Program file and adding this line of code:

app.MapWolverineEndpoints(opts =>
{
    // We'll add a little more in a bit...
    
    // Creates a User object in HTTP requests based on
    // the "user-id" claim
    opts.AddMiddleware(typeof(UserDetectionMiddleware));
});

Now, back to our endpoint code and I’ll take advantage of that middleware by changing the method to this:

    [WolverinePost("/api/incidents")]
    public static NewIncidentResponse Post(
        // No [FromBody] stuff necessary
        LogIncident command,
        
        // Service injection is automatic,
        // just like message handlers
        IDocumentSession session,
        
        // This will be created for us through the new user detection
        // middleware
        User user)
    {
        var logged = new IncidentLogged(
            command.CustomerId, 
            command.Contact, 
            command.Description, 
            user.Id);

        var id = session.Events.StartStream<Incident>(logged).Id;

        return new NewIncidentResponse(id);
    }

This is a little bit of a bonus, but let’s also get rid of the need to inject the Marten IDocumentSession service by using a Wolverine “side effect” with this equivalent code:

    [WolverinePost("/api/incidents")]
    public static (NewIncidentResponse, IStartStream) Post(LogIncident command, User user)
    {
        var logged = new IncidentLogged(
            command.CustomerId, 
            command.Contact, 
            command.Description, 
            user.Id);

        var op = MartenOps.StartStream<Incident>(logged);
        
        return (new NewIncidentResponse(op.StreamId), op);
    }

In the code above I’m using the MartenOps.StartStream() method to return a “side effect” that will create a new Marten stream as part of the request instead of directly interacting with the IDocumentSession from Marten. That’s a small thing you might not care for, but it can lead to the elimination of mock objects within your unit tests as you can now write a state-based test directly against the method above like so:

public class LogIncident_handling
{
    [Fact]
    public void handle_the_log_incident_command()
    {
        // This is trivial, but the point is that 
        // we now have a pure function that can be
        // unit tested by pushing inputs in and measuring
        // outputs without any pesky mock object setup
        var contact = new Contact(ContactChannel.Email);
        var theCommand = new LogIncident(BaselineData.Customer1Id, contact, "It's broken");

        var theUser = new User(Guid.NewGuid());

        var (_, stream) = LogIncidentEndpoint.Post(theCommand, theUser);

        // Test the *decision* to emit the correct
        // events and make sure all that pesky left/right
        // hand mapping is correct
        var logged = stream.Events.Single()
            .ShouldBeOfType<IncidentLogged>();
        
        logged.CustomerId.ShouldBe(theCommand.CustomerId);
        logged.Contact.ShouldBe(theCommand.Contact);
        logged.LoggedBy.ShouldBe(theUser.Id);
    }
}

Hey, let’s add some validation too!

We’ve already introduced middleware, so let’s just incorporate the popular Fluent Validation library into our project and let it do some basic validation on the incoming LogIncident command body, and if any validation fails, pull the ripcord and parachute out of the request with a ProblemDetails body and 400 status code that describes the validation errors.

Let’s add that in by first adding some pre-packaged middleware for Wolverine.HTTP with:

dotnet add package WolverineFx.Http.FluentValidation

Next, I have to add the usage of that middleware through this new line of code:

app.MapWolverineEndpoints(opts =>
{
    // Direct Wolverine.HTTP to use Fluent Validation
    // middleware to validate any request bodies where
    // there's a known validator (or many validators)
    opts.UseFluentValidationProblemDetailMiddleware();
    
    // Creates a User object in HTTP requests based on
    // the "user-id" claim
    opts.AddMiddleware(typeof(UserDetectionMiddleware));
});

Next, let’s add an actual validator for our LogIncident command. In this case that model is just an internal concern of our service, so I’ll just embed the new validator as an inner type of the command type like so:

public record LogIncident(
    Guid CustomerId,
    Contact Contact,
    string Description
)
{
    public class LogIncidentValidator : AbstractValidator<LogIncident>
    {
        // I stole this idea of using inner classes to keep them
        // close to the actual model from *someone* online,
        // but don't remember who
        public LogIncidentValidator()
        {
            RuleFor(x => x.Description).NotEmpty().NotNull();
            RuleFor(x => x.Contact).NotNull();
        }
    }
};

Now, Wolverine does have to “know” about these validators to use them within the endpoint handling, so these types need to be registered in the application’s IoC container against the right IValidator<T> interface. You could do that registration yourself, but Wolverine has a (Lamar) helper to find and register these validators within your project in the way that’s most efficient at runtime (i.e., there’s a micro optimization that gives these validators a singleton lifetime in the container if Wolverine can see that the types are stateless). I’ll use that little helper in our Program file within the UseWolverine() configuration like so:

builder.Host.UseWolverine(opts =>
{
    // lots more stuff unfortunately, but focus on the line below
    // just for now:-)
    
    // Apply the validation middleware *and* discover and register
    // Fluent Validation validators
    opts.UseFluentValidation();

});

And that’s that. We’ve now got Fluent Validation in the request handling for the LogIncident command. In a later section, I’ll explain how Wolverine does this, and try to sell you all on the idea that Wolverine is able to do this more efficiently than other commonly used frameworks *cough* MediatR *cough* that depend on conditional runtime code.

One-off validation with “Compound Handlers”

As you might have noticed, the LogIncident command has a CustomerId property that we’re using as is within our HTTP handler. We should never just trust the inputs of a random client, so let’s at least validate that the command refers to a real customer.

Now, typically I like to make Wolverine message handler or HTTP endpoint methods the “happy path” and handle exception cases and one-off validations with a Wolverine feature we inelegantly call “compound handlers.”

I’m going to add a new method to our LogIncidentEndpoint class like so:

    // Wolverine has some naming conventions for middleware methods like
    // Before/BeforeAsync or After/AfterAsync, but you can use a more
    // descriptive method name and help Wolverine out with an attribute
    [WolverineBefore]
    public static async Task<ProblemDetails> ValidateCustomer(
        LogIncident command, 
        
        // Method injection works just fine within middleware too
        IDocumentSession session)
    {
        var exists = await session.Query<Customer>().AnyAsync(x => x.Id == command.CustomerId);
        return exists
            ? WolverineContinue.NoProblems
            : new ProblemDetails { Detail = $"Unknown customer id {command.CustomerId}", Status = 400};
    }

Integration Testing

While the individual methods and middleware can all be tested separately, you do want to put everything together with an integration test to prove out whether or not all this magic really works. As I described in an earlier post where we learned how to use Alba to create an integration testing harness for a “critter stack” application, we can write an end to end integration test against the HTTP endpoint like so (this sample doesn’t cover every permutation, but hopefully you get the point):

    [Fact]
    public async Task create_a_new_incident_happy_path()
    {
        // We'll need a user
        var user = new User(Guid.NewGuid());
        
        // Log a new incident first
        var initial = await Scenario(x =>
        {
            var contact = new Contact(ContactChannel.Email);
            x.Post.Json(new LogIncident(BaselineData.Customer1Id, contact, "It's broken")).ToUrl("/api/incidents");
            x.StatusCodeShouldBe(201);
            
            x.WithClaim(new Claim("user-id", user.Id.ToString()));
        });

        var incidentId = initial.ReadAsJson<NewIncidentResponse>().IncidentId;

        using var session = Store.LightweightSession();
        var events = await session.Events.FetchStreamAsync(incidentId);
        var logged = events.First().ShouldBeOfType<IncidentLogged>();

        // This deserves more assertions, but you get the point...
        logged.CustomerId.ShouldBe(BaselineData.Customer1Id);
    }

    [Fact]
    public async Task log_incident_with_invalid_customer()
    {
        // We'll need a user
        var user = new User(Guid.NewGuid());
        
        // Reject the new incident because the Customer for 
        // the command cannot be found
        var initial = await Scenario(x =>
        {
            var contact = new Contact(ContactChannel.Email);
            var nonExistentCustomerId = Guid.NewGuid();
            x.Post.Json(new LogIncident(nonExistentCustomerId, contact, "It's broken")).ToUrl("/api/incidents");
            x.StatusCodeShouldBe(400);
            
            x.WithClaim(new Claim("user-id", user.Id.ToString()));
        });
    }

Um, how does this all work?

So far I’ve shown you some “magic” code, and that tends to really upset some folks. I also made some big time claims about how Wolverine is able to be more efficient at runtime (alas, there is a significant “cold start” problem you can easily work around, so don’t get upset if your first ever Wolverine request isn’t snappy).

Wolverine works by using code generation to wrap its handling code around your code. That includes the middleware and the usage of any IoC services as well. Moreover, do you know what the fastest IoC container in all the .NET land is? I certainly think that Lamar is at least in the game for that one, but nope, the real answer is to use no IoC container at all at runtime.

One of the advantages of this approach is that we can preview the generated code to unravel the “magic” and explain what Wolverine is doing at runtime. Moreover, we’ve tried to add descriptive comments to the generated code to further explain what code is in place and why.

See more about this in my post Unraveling the Magic in Wolverine.
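And about that cold start problem I mentioned: the common mitigation is to pre-generate the code ahead of time so Wolverine isn’t generating and compiling code on the first request. As a hedged sketch (check the Wolverine code generation documentation for the exact settings), that looks something like this in the Program file:

builder.Host.UseWolverine(opts =>
{
    // In production, load the types that were pre-generated into the
    // project with the `codegen write` command line support instead of
    // generating and compiling code at runtime
    if (builder.Environment.IsProduction())
    {
        opts.CodeGeneration.TypeLoadMode = TypeLoadMode.Static;
    }
});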

Here’s the generated code for our LogIncident endpoint (warning, ugly generated code ahead):

// <auto-generated/>
#pragma warning disable
using FluentValidation;
using Microsoft.AspNetCore.Routing;
using System;
using System.Linq;
using Wolverine.Http;
using Wolverine.Http.FluentValidation;
using Wolverine.Marten.Publishing;
using Wolverine.Runtime;

namespace Internal.Generated.WolverineHandlers
{
    // START: POST_api_incidents
    public class POST_api_incidents : Wolverine.Http.HttpHandler
    {
        private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
        private readonly Wolverine.Runtime.IWolverineRuntime _wolverineRuntime;
        private readonly Wolverine.Marten.Publishing.OutboxedSessionFactory _outboxedSessionFactory;
        private readonly FluentValidation.IValidator<Helpdesk.Api.LogIncident> _validator;
        private readonly Wolverine.Http.FluentValidation.IProblemDetailSource<Helpdesk.Api.LogIncident> _problemDetailSource;

        public POST_api_incidents(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Runtime.IWolverineRuntime wolverineRuntime, Wolverine.Marten.Publishing.OutboxedSessionFactory outboxedSessionFactory, FluentValidation.IValidator<Helpdesk.Api.LogIncident> validator, Wolverine.Http.FluentValidation.IProblemDetailSource<Helpdesk.Api.LogIncident> problemDetailSource) : base(wolverineHttpOptions)
        {
            _wolverineHttpOptions = wolverineHttpOptions;
            _wolverineRuntime = wolverineRuntime;
            _outboxedSessionFactory = outboxedSessionFactory;
            _validator = validator;
            _problemDetailSource = problemDetailSource;
        }



        public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
        {
            var messageContext = new Wolverine.Runtime.MessageContext(_wolverineRuntime);
            // Building the Marten session
            await using var documentSession = _outboxedSessionFactory.OpenSession(messageContext);
            // Reading the request body via JSON deserialization
            var (command, jsonContinue) = await ReadJsonAsync<Helpdesk.Api.LogIncident>(httpContext);
            if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;
            
            // Execute FluentValidation validators
            var result1 = await Wolverine.Http.FluentValidation.Internals.FluentValidationHttpExecutor.ExecuteOne<Helpdesk.Api.LogIncident>(_validator, _problemDetailSource, command).ConfigureAwait(false);

            // Evaluate whether or not the execution should be stopped based on the IResult value
            if (!(result1 is Wolverine.Http.WolverineContinue))
            {
                await result1.ExecuteAsync(httpContext).ConfigureAwait(false);
                return;
            }


            (var user, var problemDetails2) = Helpdesk.Api.UserDetectionMiddleware.Load(httpContext.User);
            // Evaluate whether the processing should stop if there are any problems
            if (!(ReferenceEquals(problemDetails2, Wolverine.Http.WolverineContinue.NoProblems)))
            {
                await WriteProblems(problemDetails2, httpContext).ConfigureAwait(false);
                return;
            }


            var problemDetails3 = await Helpdesk.Api.LogIncidentEndpoint.ValidateCustomer(command, documentSession).ConfigureAwait(false);
            // Evaluate whether the processing should stop if there are any problems
            if (!(ReferenceEquals(problemDetails3, Wolverine.Http.WolverineContinue.NoProblems)))
            {
                await WriteProblems(problemDetails3, httpContext).ConfigureAwait(false);
                return;
            }


            
            // The actual HTTP request handler execution
            (var newIncidentResponse_response, var startStream) = Helpdesk.Api.LogIncidentEndpoint.Post(command, user);

            
            // Placed by Wolverine's ISideEffect policy
            startStream.Execute(documentSession);

            // This response type customizes the HTTP response
            ApplyHttpAware(newIncidentResponse_response, httpContext);
            
            // Commit any outstanding Marten changes
            await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);

            
            // Have to flush outgoing messages just in case Marten did nothing because of https://github.com/JasperFx/wolverine/issues/536
            await messageContext.FlushOutgoingMessagesAsync().ConfigureAwait(false);

            // Writing the response body to JSON because this was the first 'return variable' in the method signature
            await WriteJsonAsync(httpContext, newIncidentResponse_response);
        }

    }

    // END: POST_api_incidents
    
    
}


Summary and What’s Next

The Wolverine.HTTP library was originally built to be a supplement to MVC Core or Minimal API by allowing you to create endpoints that integrated well into Wolverine’s messaging, transactional outbox functionality, and existing transactional middleware. It has since grown into being more of a full fledged alternative for building web services, but with potential for substantially less ceremony and far more testability than MVC Core.

In later posts I’ll talk more about the runtime architecture and how Wolverine squeezes out more performance by eliminating conditional runtime switching, reducing object allocations, and sidestepping the dictionary lookups that are endemic to other “flexible” .NET frameworks like MVC Core.

Wolverine.HTTP has not yet been used with Razor at all, and I’m not sure that will ever happen. Not to worry though, you can happily use Wolverine.HTTP in the same application with MVC Core controllers or even Minimal API endpoints.

OpenAPI support has been a constant challenge with Wolverine.HTTP as the OpenAPI generation in ASP.Net Core is very MVC-centric, but I think we’re in much better shape now.

In the next post, I think we’ll introduce asynchronous messaging with Rabbit MQ. At some point in this series I’m going to talk more about how the “Critter Stack” is well suited for a lower ceremony vertical slice architecture that (hopefully) creates a maintainable and testable codebase without all the typical Clean/Onion Architecture baggage that I could personally do without.

And just for fun…

My “History” with ASP.Net MVC

There’s no useful content in this section, just some navel-gazing. Even though I really haven’t had to use ASP.Net MVC too terribly much, I do have a long history with it:

  1. In the beginning, there was what we now call ASP Classic, and it was good. For that day and time anyway, when we would happily code directly in production, before TDD and SOLID and namby-pamby “source control.” (I started my development career in “Shadow IT” if that’s not obvious here). And when we did use source control, it was VSS on the sly, because the official source control in the office was something far, far worse and COBOL-centric that I don’t think even exists any longer.
  2. Next there was ASP.Net WebForms and it was dreadful. I hated it.
  3. We started collectively learning about Agile and wanted to practice Test Driven Development, and began to hate WebForms even more
  4. Ruby on Rails came out in the middle 00’s and made what later became the ALT.Net community absolutely loathe WebForms even more than we already did
  5. At an MVP Summit on the Microsoft campus, the one and only Scott Guthrie, the Gu himself, showed a very early prototype of ASP.Net MVC to a handful of us and I was intrigued. That continued onward through the official unveiling of MVC at the very first ALT.Net open spaces event in Austin in ’07.
  6. A few collaborators and I decided that early ASP.Net MVC was too high ceremony and went all “Captain Ahab” trying to make an alternative, open source framework called FubuMVC succeed — all while NancyFx, “yet another Sinatra clone,” became far more successful years before Microsoft finally got around to their own inevitable Sinatra clone (Minimal API)
  7. After .NET Core came along and made .NET a helluva lot better ecosystem, I decided that whatever, MVC Core is fine, it’s not going to be the biggest problem on our project, and if the client wants to use it, there’s no need to be upset about it. It’s fine, no really.
  8. MVC Core has gotten some incremental improvements over time that made it lower ceremony than earlier ASP.Net MVC, and that’s worth calling out as a positive
  9. People working with MVC Core started running into the problem of bloated controllers, and started using early MediatR as a way to kind of, sort of manage controller bloat by offloading it into focused command handlers. I mocked that approach mercilessly, but that was partially because of how awful a time I had helping folks do absurdly complicated middleware schemes with MediatR using StructureMap or Lamar (MVC Core + MediatR is probably worthwhile as a forcing function to avoid the controller bloat problems with MVC Core by itself)
  10. I worked on several long-running codebases built with MVC Core based on Clean Architecture templates that were ginormous piles of technical debt, and I absolutely blame MVC Core as a contributing factor for that
  11. I’m back to mildly disliking MVC Core (and I’m outright hostile to Clean/Onion templates). Not that you can’t write maintainable systems with MVC Core, but I think that its idiomatic usage can easily lead to unmaintainable systems. Let’s just say that I don’t think that MVC Core — and especially combined with some kind of Clean/Onion Architecture template as it very commonly is out in the wild — leads folks to the “pit of success” in the long run

My Technical Plans and Aspirations for 2024

Hey, did you know that JasperFx Software is now able to offer formal support plans and consulting for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to helping you succeed with your software development projects on a consulting basis, whether you’re using any part of the Critter Stack or any other .NET server side tooling.

I’ve written posts like this in early January over the past several years laying out my grand hopes for my OSS work in the new year, and if you’re curious, you can check out my theoretical plans from 2021, 2022, and 2023. I’m always wrong of course, and there’s going to be a few things on my list this year that are repeats from the past couple years. I’m still going to claim my superpower as an OSS developer is having a much longer attention span than the average developer, but that cuts both ways.

But first…

My 2023 in Review

I had a huge year in 2023 by any possible measure. After 15 years of constant effort and a couple hurtful false starts along the way, I started a new company named JasperFx Software LLC, both as a software development consultancy and to build a sustainable business model around the “Critter Stack” tools of Marten and Wolverine. Let me stop here and say how much I appreciate our early customers, and I’m looking forward to expanding on those relationships in the new year!

Technically speaking, I was most excited — and a little disappointed by how long it took — about the Wolverine 1.0 release this summer! That was especially gratifying for me because Wolverine took 5-6 years and a pretty substantial reboot and rename in 2022 to fully gestate into what it is now. Wolverine might not be exploding in download numbers (yet), but it’s attracted a great community of early users, and we’ve collectively pushed Wolverine to 1.13 now with a ton of new features and usability improvements that weren’t on my radar a year ago at all.

Personally, my highlight was finally meeting my collaborator and friend Oskar Dudycz in real life at NDC Oslo — which was supposed to have happened years earlier, but a certain worldwide pandemic delayed that for a few years. I also enjoyed my trip to the KCDC conference last year, and turned that into a road trip with my older son to visit family along the way.

On to…

The Grand Plans for 2024!

My most important goal for 2024 is to reduce my personal stress level, which has been fallout from spinning up the new company. Wish me luck on that one.

First, let’s start with what’s either already heavily in flight or is the work JasperFx is doing for clients in January/February of this year:

  • Marten 7.0 is moving along pretty well right now. The biggest chunk of work so far has been the completely revamped LINQ support that improves both the span of supported LINQ use cases and is able to generate much more efficient SQL for nested child collection searching. Besides adding a lot more polish overall, we’re making improvements to Marten’s performance by utilizing newer Npgsql features like data sources, finally building out a native “partial” update model that doesn’t depend on Javascript running in PostgreSQL, and revamping Marten’s retry functionality. And that doesn’t even address improvements to the event store functionality.
  • There’ll also be a Wolverine 2.0 early this year, but I think that will mostly be about integrating Wolverine with Marten 7.0 and probably dropping .NET 6 support.
  • A JasperFx customer has engaged us to build out functionality to be able to utilize and manage new tenant databases inside a “database per tenant” multi-tenancy strategy using both Marten and Wolverine without requiring any downtime.
  • For a different JasperFx customer, we’re finally building in the long planned ability to scale Marten’s event store features to “really big” workloads by being able to adaptively distribute projection work across the running nodes within a cluster instead of today’s “hot/cold” failover approach. That’s been on my list of goals for the New Year for several years running, but it finally happens early in 2024
  • As part of the previous bullet, we’re building in the ability to do zero downtime deployments of changes to event projections. As part of those plans, we’re also aiming for true blue/green deployment capabilities for Marten’s event sourcing feature set.
  • “First class subscriptions” from Marten’s event store through Wolverine’s messaging features

Those last two bullet points bring me to JasperFx’s plans for world domination (or at least enough revenue to keep growing).

I know some folks are annoyed at our potential push for an open core model that puts some advanced features behind a paid license. I understand that, but I think that option will create a more sustainable environment for the open source core to continue. My personal dividing line is that any feature that is almost automatically going to require us to help users utilize or configure it, or that supports very large transaction throughput, absolutely deserves to be paid for.

The details aren’t firmed up by any means, but the “Critter Stack” is moving to an open core model where the existing libraries continue under the MIT license while we also offer a new set of functionality for complex usages, advanced monitoring and management, and improved scalability. Tentatively, we’re shamelessly calling this the “CritterStackPro.” The first couple features are all related to the event sourcing scalability and deployment capabilities our largest customer has commissioned that I described up above. I’m very excited to see this all come to fruition after years of planning and discussions.

Beyond that, we’ve got some ideas and plenty of user feedback about what would be valuable for a potential management console for the “Critter Stack” tools.

Other Vaguely Thought Up Aspirations

  • Continue to push Marten & Wolverine to be the best possible technical platform for building event driven architectures
  • I can’t speak to any specifics yet (’cause I don’t know them anyway), but there will be some improved integration recipes for Marten/Wolverine with Hot Chocolate both via user request and through a JasperFx Software customer
  • Add more robust sample applications and tutorials for both Marten and Wolverine to our various websites
  • Oskar already has a new code name for our next “Critter Stack” tool. I’m not saying that will be Marten-like event sourcing support and first class Wolverine support using Sql Server, but I’m not “not saying” that’s what it would be either.
  • I’m still somewhat interested in an optimized serverless mode for both Marten and Wolverine to really leverage AOT compilation, but man, that’s going to take some effort
  • Somehow, some way, get or build out better infrastructure for the kind of automated integration testing we do with Marten and Wolverine

And that’s enough dreaming for now. I’m looking forward to seeing how the Critter Stack tools and our community continue to grow and progress in 2024. Happy New Year’s everyone!

Building a Critter Stack Application: Durable Outbox Messaging and Why You Care!

Hey, did you know that JasperFx Software is ready to offer formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to helping you succeed with your software development projects on a consulting basis, whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care! (this post)
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

As we layer in new technical concepts from both Wolverine and Marten to build out our incident tracking help desk API, let’s look back at this message handler from the last post that both saved data and published a message to an asynchronous, local queue that would act upon the newly saved data at some point:

public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();
     
    [AggregateHandler]
    // The object? as return value will be interpreted
    // by Wolverine as appending one or zero events
    public static async Task<object?> Handle(
        CategoriseIncident command, 
        IncidentDetails existing,
        IMessageBus bus)
    {
        if (existing.Category != command.Category)
        {
            // Send the message to any and all subscribers to this message
            await bus.PublishAsync(
                new TryAssignPriority { IncidentId = existing.Id });
            return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }
 
        // Wolverine will interpret this as "do no work"
        return null;
    }
}

To recap, that message handler is potentially appending an IncidentCategorised event to an Incident event stream and publishing a command message named TryAssignPriority that will trigger a downstream action to try to assign a new priority to our Incident.

This relatively simple message handler (and we’ll make it even simpler in a later post in this series) creates a host of potential problems for our system:

  • In a naive usage of messaging tools, there’s a race condition between the outbound `TryAssignPriority` message being picked up by its handler and the database changes getting committed to the database. I have seen this cause nasty, hard to reproduce bugs in real life production applications when, every once in a while, the message is processed before the database changes are made and the system behaves incorrectly because the expected data has not yet been committed by the original command.
  • Maybe the actual message sending fails, but the database changes succeed, so the system is in an inconsistent state.
  • Maybe the outgoing message is happily published successfully, but the database changes fail, so that when the TryAssignPriority message is handled, it’s working against old system state.
  • Even if everything succeeds perfectly, the outgoing message should never actually be published until the transaction is complete.

To be clear, even without the usage of the outbox feature we’re about to use, Wolverine will apply an “in memory outbox” in message handlers such that all the messages published through IMessageBus.PublishAsync()/SendAsync()/etc. will be held in memory until the successful completion of the message handler. That by itself is enough to prevent the race condition between the database changes and the outgoing messages.

At this point, let’s introduce Wolverine’s transactional outbox support that was built specifically to solve or prevent the potential problems I listed up above. In this case, Wolverine has transactional outbox & inbox support built into its integrations with PostgreSQL and Marten.

To rewind a little bit, in an earlier post where we first introduced the Marten + Wolverine integration, I had added a call to IntegrateWithWolverine() to the Marten configuration in our Program file:

using Wolverine.Marten;
 
var builder = WebApplication.CreateBuilder(args);
 
builder.Services.AddMarten(opts =>
{
    // This would be from your configuration file in typical usage
    opts.Connection(Servers.PostgresConnectionString);
    opts.DatabaseSchemaName = "wolverine_middleware";
})
    // This is the wolverine integration for the outbox/inbox,
    // transactional middleware, saga persistence we don't care about
    // yet
    .IntegrateWithWolverine()
     
    // Just letting Marten build out known database schema elements upfront
    // Helps with Wolverine integration in development
    .ApplyAllDatabaseChangesOnStartup();

Among other things, the call to IntegrateWithWolverine() up above directs Wolverine to use the PostgreSQL database for Marten as the durable storage for incoming and outgoing messages as part of Wolverine’s transactional inbox and outbox. The basic goal of this subsystem is to create consistency (really “eventual consistency“) between database transactions and outgoing messages without having to resort to endlessly painful distributed transactions.

Now, we’ve got another step to take. As of right now, Wolverine makes a determination of whether or not to use the durable outbox storage based on the destination of the outgoing message — with the theory that teams might easily want to mix and match durable messaging and less resource intensive “fire and forget” messaging within the same application. In this help desk service, we’ll make that easy and just say that all message processing in local queues (we set up TryAssignPriority to be handled through a local queue in the previous post) should be durable. In the UseWolverine() configuration, I’ll add this code to do that:

builder.Host.UseWolverine(opts =>
{
    // More configuration...

    // Automatic transactional middleware
    opts.Policies.AutoApplyTransactions();
    
    // Opt into the transactional inbox for local 
    // queues
    opts.Policies.UseDurableLocalQueues();
    
    // Opt into the transactional inbox/outbox on all messaging
    // endpoints
    opts.Policies.UseDurableOutboxOnAllSendingEndpoints();

    // Set up from the previous post
    opts.LocalQueueFor<TryAssignPriority>()
        // By default, local queues allow for parallel processing with a maximum
        // parallel count equal to the number of processors on the executing
        // machine, but you can override the queue to be sequential and single file
        .Sequential()

        // Or add more to the maximum parallel count!
        .MaximumParallelMessages(10);
});

I (Jeremy) may very well declare this “endpoint by endpoint” declaration of durability to have been a big mistake, because it has confused some users, and vote to change this in a later version of Wolverine.

With this outbox functionality in place, the behind the scenes messaging and transaction workflow for the handler shown above is:

  1. When the outgoing TryAssignPriority message is published, Wolverine will “route” that message into its internal Envelope structure that includes the message itself and all the necessary metadata and information Wolverine would need to actually send the message later
  2. The outbox integration will append the outgoing message as a pending operation to the current Marten session
  3. The IncidentCategorised event will be appended to the current Marten session
  4. The Marten session is committed (IDocumentSession.SaveChangesAsync()), which will persist the new event and a copy of the outgoing Envelope into the outbox or inbox tables (scheduled messages or messages to local queues are persisted in the incoming table) in one single, batched database command within a native PostgreSQL transaction.
  5. Assuming the database transaction succeeds, the outgoing messages are “released” to Wolverine’s outgoing message publishing in memory (we’re coming back to that last point in a bit)
  6. Once Wolverine is able to successfully publish the message to the outgoing transport, it will delete the database table record for that outgoing message.

The 4th point is important I think. The close integration between Marten & Wolverine allows for more efficient processing by combining the database operations to minimize database round trips. In cases where the outgoing message transport is also batched (Azure Service Bus or AWS SQS for example), the database command to delete messages is also optimized for one call using PostgreSQL array support. I guess the main point of bringing this up is just to say there’s been quite a bit of thought and outright micro-optimizations done to this infrastructure.

But what about…?

  • the process is shut down cleanly? Wolverine tries to “drain” all in flight work first, and then “release” that process’s ownership of the persisted messages
  • the process crashes before messages floating around the local queues or outgoing message publishing finishes? Wolverine is able to detect a “dormant node” and reassign the persisted incoming and outgoing messages to be processed by another node. Or in the case of a single node, restart that work when the process is restarted.
  • the Wolverine tables don’t yet exist in the database? Wolverine has similar database management to Marten (it’s all the shared Weasel library doing that behind the scenes) and with its default settings will happily build out missing tables
  • an application using a database per tenant multi-tenancy strategy? Wolverine creates separate inbox or outbox storage in each tenant database. It’s complicated and took quite a while to build, but it works. If no tenant is specified, the inbox/outbox in a “default” database is used
  • I need to use the outbox approach for consistency outside of a message handler, like when handling an HTTP request that happens to make both database changes and publish messages? That’s a really good question, and arguably one of the best reasons to use Wolverine over other .NET messaging tools, because, as we’ll see in later posts, that’s perfectly possible and quite easy (see the sketch right after this list). There is a recipe for using the Wolverine outbox functionality with MVC Core or Minimal API shown here.
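As a teaser for those later posts, here’s a rough, hedged sketch of what that Minimal API recipe looks like. I’m assuming the IMartenOutbox service from the Wolverine.Marten integration and reusing the hypothetical NoteRecorded / NotifyOwner types from the sketch above, so treat the exact shape as an assumption:

app.MapPost("/incidents/{incidentId}/note", async (
    Guid incidentId,
    IDocumentSession session,
    IMartenOutbox outbox) =>
{
    // Enroll the outbox in the current Marten session so that outgoing
    // messages are persisted in the same transaction as the event append
    outbox.Enroll(session);

    session.Events.Append(incidentId, new NoteRecorded("hypothetical"));

    // Persisted with the session, only actually sent after the commit succeeds
    await outbox.PublishAsync(new NotifyOwner(incidentId));

    await session.SaveChangesAsync();
});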

Summary and What’s Next

The outbox (and closely related inbox) support is hugely important inside of any system that uses asynchronous messaging as a way of creating consistency and resiliency. Wolverine’s implementation is significantly different (and honestly more complicated) than typical implementations that depend on just polling from an outbound database table. That’s a positive in some ways because we believe that Wolverine’s approach is more efficient and will lead to greater throughput.

There is also similar inbox/outbox functionality and the same optimizations for Wolverine with EF Core using either PostgreSQL or Sql Server as the backing storage. In the future, I hope to see the EF Core and Sql Server support improve, but for right now, the Marten integration is getting the most attention and usage. I’d also love to see Wolverine grow to include support for alternative databases, with Azure Cosmos DB and AWS DynamoDB being the leading contenders. We’ll see.

As for what’s next, let me figure out what sounds easy for the next post in January. In the meantime, Happy New Year’s everybody!

Wolverine’s HTTP Gets a Lot Better at OpenAPI (Swagger)

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to help you succeed with your software development projects on a consulting basis whether you’re using any part of the Critter Stack or any other .NET server side tooling. Reach us anytime at sales@jasperfx.net or on Discord!

I just published Wolverine 1.13.0 this evening with some significant improvements (see the release notes here). Beyond the normal scattering of bug fixes (and some significant improvements to the MQTT support in Wolverine for a JasperFx Software client who we’re helping build an IoT system), the main headline is that Wolverine does a substantially better job generating OpenAPI documentation for its HTTP endpoint model.

When I’m building web services of any kind I tend to lean very hard into doing integration testing with Alba, and because of that, I also tend not to use Swashbuckle or an equivalent tool very often during development. That has apparently been a blind spot for me in building Wolverine.HTTP so far. To play out a typical conversation I frequently have with other server side .NET developers about tooling for web services, I’ll say something like:

  1. MVC Core by itself — but this is hugely exacerbated by unfortunately popular prescriptive architectural patterns that organize code around NounController / NounService / NounRepository code organization — can easily lead to unmaintainable code in bloated controller classes and plenty of work for software consultants who get brought in later to clean up after the system wildly outgrew the original team’s “Clean Architecture” approach
  2. I’m not convinced that Minimal API is any better for larger applications
  3. The MVC Core controllers delegating to an inner “mediator” tool strategy may help divide the code into more maintainable pieces, but it adds what I think is an unacceptable level of extra code ceremony. Also exacerbated by prescriptive architectures
  4. You should use Wolverine.HTTP! It’s much lower ceremony code than the “controllers + mediator” strategy, but still sets you up for a vertical slice architecture! And it integrates well with Marten or Wolverine messaging!

Other developers: This all sounds great! Pause. Hey, the web services with this thing seem to work just fine, but man, the Swashbuckle/NSwag/Angular client generation is all kinds of not good! I’m going back to “Wolverine as MediatR”.

To which I reply:

But no more of that, because the Wolverine HTTP OpenAPI generation just took a huge leap forward with the 1.13 release!

Here’s a sample of what I mean. From the Wolverine.HTTP test suite, here’s an endpoint method that uses Marten to load an Invoice document, modify it, then save it:

    [WolverinePost("/invoices/{invoiceId}/pay")]
    public static IMartenOp Pay([Document] Invoice invoice)
    {
        invoice.Paid = true;
        return MartenOps.Store(invoice);
    }

The [Document] attribute tells Wolverine to load the Invoice from Marten, and part of its convention will match on the invoiceId route argument from the route pattern. That failed before in a couple ways:

  1. Swashbuckle couldn’t be convinced that the Invoice argument wasn’t the request body
  2. If you omitted a Guid invoiceId argument from the method signature, Swashbuckle wasn’t seeing invoiceId as a route parameter and didn’t let you specify it in the Swashbuckle page.
  3. Swashbuckle definitely didn’t get that IMartenOp is a specialized Wolverine side effect that shouldn’t be used as the response body.

Now though, that endpoint looks like this in Swashbuckle:

Which is now correct and actually usable! (The 404 is valid because there’s a route argument and that status is returned if the Invoice referred to by the invoiceId route argument does not exist).

To call out some improvements for Wolverine.HTTP users, the Swashbuckle generation now handles at least:

  • Route arguments that are used by Wolverine, but not necessarily in the main method signature. So no stupid, unused [FromRoute] string id method parameters
  • Querystring arguments are reflected in the Swashbuckle page
  • [FromHeader] arguments are reflected in Swashbuckle
  • HTTP endpoints that return some kind of tuple correctly show the response body if there is one — and that’s a commonly used and powerful capability of Wolverine’s HTTP endpoints that previously fouled up the OpenAPI generation (see the sketch after this list)
  • The usage of [EmptyResponse] correctly sets up the 204 status code behavior with no extraneous 200 or 404 status codes coming in by default
  • Ignoring method injected service parameters in the main method
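To make that tuple-returning bullet concrete, here’s a hedged sketch in the style of the Pay endpoint above. CreateInvoice and InvoiceCreated are hypothetical types, not part of the actual test suite:

// The first tuple item becomes the HTTP response body in the OpenAPI
// document, while the IMartenOp side effect is correctly ignored
[WolverinePost("/invoices")]
public static (InvoiceCreated, IMartenOp) Create(CreateInvoice command)
{
    var invoice = new Invoice { Id = Guid.NewGuid() };
    return (new InvoiceCreated(invoice.Id), MartenOps.Store(invoice));
}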

For a little background, after getting plenty of helpful feedback from Wolverine users, I finally took some more serious time to go investigate the problems and root causes. After digging much deeper into the AspNetCore and Swashbuckle internals, I came to the conclusion that the OpenAPI internals in AspNetCore are batshit crazy and far too hard coded to MVC Core, and that Wolverine absolutely had to have its own provider for generating OpenAPI documents off of its own semantic model. Fortunately, AspNetCore and Swashbuckle are both open source, so I could easily get to the source code to reverse engineer what they do under the covers (plus JetBrains Rider is a rock star at disassembling code on the fly). Wolverine.HTTP 1.13 now registers its own strategy for generating the OpenAPI documentation for Wolverine endpoints and keeps the built in MVC Core-centric strategy from applying to the same Wolverine endpoints.

I’m sure there will be other issues over time, but so far, this has addressed every known issue with our OpenAPI generation. I’m hoping this goes a long way toward removing impediments to more users adopting Wolverine.HTTP because as I’ve said before, I think the Wolverine model leads to much lower ceremony code, better testability over all, and potentially to significantly better maintainability of larger systems that today turn into huge messes with MVC Core.

Building a Critter Stack Application: Asynchronous Processing with Wolverine

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to help you succeed with your software development projects on a consulting basis whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine (this post)
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

As we continue to add new functionality to our incident tracking, help desk system, we have been using Marten for persistence and Wolverine for command execution within MVC Core controllers (with cameos from Alba for testing support and Oakton for command line utilities).

In the workflow we’ve built out so far for the little system shown below, we’ve created a command called CategorizeIncident that for the moment is only sent to the system through HTTP calls from a user interface.

Let’s say that in our system we may have some domain logic rules based on customer data that we could use to try to prioritize an incident automatically once the incident is categorized. To that end, let’s create a new command named TryAssignPriority like this:

public class TryAssignPriority
{
    public Guid IncidentId { get; set; }
}

We’d like to kick off this work any time an incident is categorized, but we might not necessarily want to do that work within the scope of the web request that’s capturing the CategorizeIncident command. Partially that’s a scalability play to offload work from the web server, partially it’s to make the user interface as responsive as possible by not making it wait on slower web service responses, but mostly it’s because I want an excuse to introduce Wolverine’s ability to asynchronously process work through local, in memory queues.

Most of the code in this post is an intermediate form that I’m using just to introduce concepts in the simplest way I can think of. In later posts I’ll show more idiomatic Wolverine ways to do things to arrive at the final version that is in GitHub.

Alright, now that we’ve got our new command class above, let’s publish that locally through Wolverine by breaking into our earlier CategoriseIncidentHandler that I’ll show here in a “before” state:

public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();
    
    [AggregateHandler]
    public static IEnumerable<object> Handle(CategoriseIncident command, IncidentDetails existing)
    {
        if (existing.Category != command.Category)
        {
            yield return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }
    }
}

In this next version, I’m going to add a single call to Wolverine’s main IMessageBus entry point to publish the new TryAssignPriority command message:

public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();
    
    [AggregateHandler]
    // The object? as return value will be interpreted
    // by Wolverine as appending one or zero events
    public static async Task<object?> Handle(
        CategoriseIncident command, 
        IncidentDetails existing,
        IMessageBus bus)
    {
        if (existing.Category != command.Category)
        {
            // Send the message to any and all subscribers to this message
            await bus.PublishAsync(new TryAssignPriority { IncidentId = existing.Id });
            return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }

        // Wolverine will interpret this as "do no work"
        return null;
    }
}

I didn’t do anything that is necessarily out of order here. We haven’t built a message handler for TryAssignPriority or done anything to register subscribers, but that can come later because the PublishAsync() call up above will quietly do nothing if there are no known subscribers for the message.

For asynchronous messaging veterans out there, I will discuss Wolverine’s support for a transactional outbox in a later post. For now, just know that there’s at the very least an in-memory outbox around any message handler that will not send out any pending published messages until after the original message is successfully handled. If you’re not familiar with the “transactional outbox” pattern, please come back to read the follow up post on that later, because you absolutely need to understand it to use asynchronous messaging infrastructure like Wolverine.

Next, let’s just add a skeleton message handler for our TryAssignPriority command message in the root API project:

public static class TryAssignPriorityHandler
{
    public static void Handle(TryAssignPriority command)
    {
        Console.WriteLine("Hey, somebody wants me to prioritize incident " + command.IncidentId);
    }
}

Switching to the command line (you may need to have the PostgreSQL database running for this next thing to work #sadtrombone), I’m going to call dotnet run -- describe to preview my help desk API a little bit.

Under the section of the textual output with the header “Wolverine Message Routing”, you’ll see the message routing tree for Wolverine’s known message types:

┌─────────────────────────────────┬──────────────────────────────────────────┬──────────────────┐
│ Message Type                    │ Destination                              │ Content Type     │
├─────────────────────────────────┼──────────────────────────────────────────┼──────────────────┤
│ Helpdesk.Api.CategoriseIncident │ local://helpdesk.api.categoriseincident/ │ application/json │
│ Helpdesk.Api.TryAssignPriority  │ local://helpdesk.api.tryassignpriority/  │ application/json │
└─────────────────────────────────┴──────────────────────────────────────────┴──────────────────┘

As you can hopefully see in that table up above, just by the fact that Wolverine “knows” there is a handler in the local application for the TryAssignPriority message type, it’s going to route messages of that type to a local queue where they will be executed later on a separate thread.

Don’t worry, this conventional routing, the parallelization settings, and just about anything you can think of is configurable, but let’s mostly stay with defaults for right now.

Switching to the Wolverine configuration in the Program file, here’s a little taste of some of the ways we could control the exact parameters of the asynchronous processing for this local, in memory queue:

builder.Host.UseWolverine(opts =>
{
    // more configuration...

    // Adding a single Rabbit MQ messaging rule
    opts.PublishMessage<RingAllTheAlarms>()
        .ToRabbitExchange("notifications");

    opts.LocalQueueFor<TryAssignPriority>()
        // By default, local queues allow for parallel processing with a maximum
        // parallel count equal to the number of processors on the executing
        // machine, but you can override the queue to be sequential and single file
        .Sequential()

        // Or add more to the maximum parallel count!
        .MaximumParallelMessages(10);
    
    // Or if so desired, you can route specific messages to 
    // specific local queues when ordering is important
    opts.Policies.DisableConventionalLocalRouting();
    opts.Publish(x =>
    {
        x.Message<TryAssignPriority>();
        x.Message<CategoriseIncident>();

        x.ToLocalQueue("commands").Sequential();
    });
});
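And if you do disable the conventional routing like that last sample shows, you can still address a specific endpoint explicitly from code. A hedged sketch, assuming the IMessageBus.EndpointFor() API shape holds in your Wolverine version:

public static async Task EnqueueDirectly(IMessageBus bus)
{
    // Route this one message explicitly to the named "commands" local queue
    await bus.EndpointFor(new Uri("local://commands"))
        .SendAsync(new TryAssignPriority { IncidentId = Guid.NewGuid() });
}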

Summary and What’s Next

Through its local queue functionality, Wolverine has very strong support for managing asynchronous work within a local process. All of Wolverine’s message handling capabilities are usable within these local queues. You also have complete control over the parallelization of the messages being handled in these local queues.

This functionality does raise a lot of questions that I will try to answer in subsequent posts in this series:

  • For the sake of system consistency, we absolutely have to talk about Wolverine’s transactional outbox support
  • How we can use Wolverine’s integration testing support to test our system even when it is spawning additional messages that may be handled asynchronously
  • Wolverine’s ability to automatically forward captured events in Marten to message handlers for side effects
  • How to utilize Wolverine’s “special sauce” to craft message handlers as pure functions that are more easily unit tested than what we have so far
  • Wolverine’s built in Open Telemetry support to trace the asynchronous work end to end
  • Wolverine’s error handling policies to make our system as resilient as possible

Thanks for reading! I’ve been pleasantly surprised how well this series has been received so far. I think this will be the last entry until after Christmas, but I think I will write at least 7-8 more just to keep introducing bits of Critter Stack capabilities in small bites. In the meantime, Merry Christmas and Happy Holidays to you all!

Building a Critter Stack Application: Marten as Document Database

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to help you succeed with your software development projects on a consulting basis whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database (this post)
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

So far, we’ve been completely focused on using Marten as an Event Store. While the Marten team is very committed to the event sourcing feature set, it’s pretty likely that you’ll have other data persistence needs in your system that won’t fit the event sourcing paradigm. Not to worry though, because Marten also has a very robust “PostgreSQL as Document Database” feature set that’s perfect for low friction data persistence outside of the event storage. We’ve even used it in earlier posts, since Marten projections utilize the document database features when running Inline or Async (i.e., not Live).

Since we’ve already got Marten integrated into our help desk application at this point, let’s just start with a document to represent customers:

public class Customer
{
    public Guid Id { get; set; }

    // We'll use this later for some "logic" about how incidents
    // can be automatically prioritized
    public Dictionary<IncidentCategory, IncidentPriority> Priorities { get; set; }
        = new();
    
    public string? Region { get; set; }
    
    public ContractDuration Duration { get; set; } 
}

public record ContractDuration(DateOnly Start, DateOnly End);

To be honest, I’m guessing at what a Customer might involve in the end, but it’s okay that I don’t know that upfront, because as we’ll see soon, Marten makes it very easy to evolve your persisted documents.

Having built the integration test harness for our application in the last post, let’s drop right into an integration test that persists a new Customer document object, and reloads a copy from the persisted data:

public class using_customer_document : IntegrationContext
{
    public using_customer_document(AppFixture fixture) : base(fixture)
    {
    }

    [Fact]
    public async Task persist_and_load_customer_data()
    {
        var customer = new Customer
        {
            Duration = new ContractDuration(new DateOnly(2023, 12, 1), new DateOnly(2024, 12, 1)),
            Region = "West Coast",
            Priorities = new Dictionary<IncidentCategory, IncidentPriority>
            {
                { IncidentCategory.Database, IncidentPriority.High }
            }
        };
        
        // As a convenience just because you'll use it so often in tests,
        // I made a property named "Store" on the base class for quick access to
        // the DocumentStore for the application
        // ALWAYS remember to dispose any sessions you open in tests!
        await using var session = Store.LightweightSession();
        
        // Tell Marten to save the new document
        session.Store(customer);

        // commit any pending changes
        await session.SaveChangesAsync();

        // Marten is assigning an Id for you when one doesn't already
        // exist, so that's where that value comes from
        var copy = await session.LoadAsync<Customer>(customer.Id);
        
        // Just proving to you that it's not the same object
        copy.ShouldNotBeSameAs(customer);
        
        copy.Duration.ShouldBe(customer.Duration);
    }
}

As long as the configured database for our help desk API is available, the test above will happily pass. I’d like to draw your attention to a couple things about that test above:

  • Notice that I didn’t have to make any changes to our application’s AddMarten() configuration in the Program file first, because with its default settings, Marten is able to create storage for the new Customer document type on the fly when it first encounters it
  • Marten is able to infer that the Id property of the new Customer type is the identity (that can be overridden), and when you add a new Customer document to the session that has an empty Guid as its Id, Marten will quickly assign and set a sequential Guid value for its identity. If you’re wondering, Marten can do this even if the property is scoped as private.
  • The Store() method is effectively an “upsert” that takes advantage of PostgreSQL’s very efficient, built in upsert syntax. Marten does also support Insert and Update operations (sketched below), but Store is just an easy default
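Here’s a quick sketch of those alternatives. Insert() and Update() are real Marten session operations, but the surrounding code is just illustrative:

await using var session = store.LightweightSession();

// Pick whichever operation matches your intent:
session.Insert(customer); // fails at SaveChangesAsync() if the document already exists
// session.Update(customer); // fails if the document does not already exist
// session.Store(customer);  // the upsert default used in the test above

await session.SaveChangesAsync();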

Behind the scenes, Marten is just serializing our document to JSON and storing that data in a PostgreSQL JSONB column type that will allow for efficient querying within the JSON body later (if you’re immediately asking “why isn’t this thing supporting Sql Server?!?”, it’s because only PostgreSQL has the JSONB type). If your document type can be round-tripped by either the venerable Newtonsoft.Json library or the newer System.Text.Json library, that document type can be persisted by Marten with zero explicit mapping.
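If you’d rather pick the serializer explicitly instead of relying on the default, that’s a one-liner in the configuration. A minimal sketch, assuming the System.Text.Json opt-in looks the same in your Marten version:

builder.Services.AddMarten(opts =>
{
    // other configuration...

    // Opt into System.Text.Json instead of the default Newtonsoft.Json
    opts.UseSystemTextJsonForSerialization();
});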

In many cases, Marten’s approach to object persistence can lead to far less friction and boilerplate code than the equivalent functionality using EF Core, the .NET developer tool of choice. Moreover, using Marten requires a lot fewer database migrations as you change and evolve your document structure, giving developers far more ability to iterate over the shape of their persisted types as opposed to an ORM + Relational Database combination.

And of course, this is .NET, so Marten comes with LINQ support that lets us write queries like this:

        var results = await session.Query<Customer>()
            .Where(x => x.Region == "West Coast")
            .OrderByDescending(x => x.Duration.End)
            .ToListAsync();

As you’ll already know if you happen to follow me on Mastodon, we’re hopefully nearing the end of some very substantial improvements to the LINQ support for the forthcoming Marten v7 release.

While the document database feature set in Marten is pretty deep, the last thing I want to show in this post is that yes, you can create indexes within the JSON body for faster querying as needed. This time, I am going back to the AddMarten() configuration in the Program file to add a little bit of code that indexes the Customer document on its Region field:

builder.Services.AddMarten(opts =>
{
    // other configuration...

    // This will create a btree index within the JSONB data
    opts.Schema.For<Customer>().Index(x => x.Region);
});

Summary and What’s Next

Once upon a time, Marten started with a pressing need to have a reliable, ACID-compliant document database feature set, and we originally chose PostgreSQL because of its unique JSON feature set. Almost on a lark, I added a nascent event sourcing capability before the original Marten 1.0 release. To my surprise, the event sourcing feature set is the main driver of Marten adoption by far, but Marten still has its original feature set to make the rock solid PostgreSQL database engine function as a document database for .NET developers.

Even in a system using event sourcing, there’s almost always some kind of relatively static reference data that’s better suited for Marten’s document database feature set or even going back to using PostgreSQL as the outstanding relational database engine that it is.

In the next post, now that we also know how to store and retrieve customer documents with Marten, we’re going to introduce Wolverine’s “compound handler” capability and see how that can help us factor our code into being very testable.

Building a Critter Stack Application: Integration Testing Harness

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change.

The older parts of the JasperFx / Critter Stack projects are named after itty bitty small towns in SW Missouri, including Alba.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness (this post)
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

Before I go on with anything else in this series, I think we should establish some automated testing infrastructure for our incident tracking, help desk service. While we’re absolutely going to talk about how to structure code with Wolverine to make isolated unit testing as easy as possible for our domain logic, there are some elements of your system’s behavior that are best tested with automated integration tests that use the system’s infrastructure.

In this post I’m going to show you how I like to set up an integration testing harness for a “Critter Stack” service. I’m going to use xUnit.Net in this post, and while the mechanics would be a little different, I think the basic concepts should be easily transferable to other testing libraries like NUnit or MSTest. I’m also going to bring in the Alba library that we’ll use for testing HTTP calls through our system in memory, but in this first step, all you need to understand is that Alba is helping to set up the system under test in our testing harness.

Heads up a little bit: I’m skipping to the “finished” state of the help desk API code in this post, so there are some Marten and Wolverine concepts sneaking in that haven’t been introduced yet.

First, let’s start our new testing project with:

dotnet new xunit

Then add some additional Nuget references:

dotnet add package Shouldly
dotnet add package Alba

That gives us a skeleton of the testing project. Before going on, we need to add a project reference from our new testing project to the entry point project of our help desk API. As we are worried about integration testing right now, we’re going to want the testing project to be able to start the system under test project up by calling the normal Program.Main() entrypoint so that we’re running the application the way that the system is normally configured — give or take a few overrides.

Let’s stop and talk about this a little bit because I think this is an important point. I think integration tests are more “valid” (i.e. less prone to false positives or false negatives) as they more closely reflect the actual system. I don’t want completely separate bootstrapping for the test harness that may or may not reflect the application’s production bootstrapping (don’t blow that point off, I’ve seen countless teams do partial IoC configuration for testing that can vary quite a bit from the application’s configuration).

So if you’ll accept my argument that we should be bootstrapping the system under test with its own Program.Main() entry point, our next step is to add this code to the main service to enable the test project to access that entry point:

using System.Runtime.CompilerServices;

// You have to do this in order to reference the Program
// entry point in the test harness
[assembly:InternalsVisibleTo("Helpdesk.Api.Tests")]
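As an aside, if you’d rather avoid InternalsVisibleTo, the other common trick is to surface the generated Program class as a public type by adding this at the very bottom of the Program file:

// Makes the implicit Program class visible to the test project
// without needing InternalsVisibleTo
public partial class Program { }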

Switching finally to our testing project, I like to create a class I usually call AppFixture that manages the lifetime of the system under test running in our test project like so:

public class AppFixture : IAsyncLifetime
{
    public IAlbaHost Host { get; private set; }

    // This is a one time initialization of the
    // system under test before the first usage
    public async Task InitializeAsync()
    {
        // Sorry folks, but this is absolutely necessary if you 
        // use Oakton for command line processing and want to 
        // use WebApplicationFactory and/or Alba for integration testing
        OaktonEnvironment.AutoStartHost = true;

        // This is bootstrapping the actual application using
        // its implied Program.Main() set up
        // This is using a library named "Alba". See https://jasperfx.github.io/alba for more information
        Host = await AlbaHost.For<Program>(x =>
        {
            x.ConfigureServices(services =>
            {
                // We'll be using Rabbit MQ messaging later...
                services.DisableAllExternalWolverineTransports();
                
                // We're going to establish some baseline data
                // for testing
                services.InitializeMartenWith<BaselineData>();
            });
        }, new AuthenticationStub());
    }

    public Task DisposeAsync()
    {
        if (Host != null)
        {
            return Host.DisposeAsync().AsTask();
        }

        return Task.CompletedTask;
    }
}

A few notes about the code above:

  • Alba is using the WebApplicationFactory under the covers to bootstrap our help desk API service using the in memory TestServer in place of Kestrel. WebApplicationFactory does allow us to modify the IoC service registrations for our system and override parts of the system’s normal configuration
  • In this case, I’m telling Wolverine to effectively stub out all external transports. In later posts we’ll use Rabbit MQ, for example, to publish messages to an external process, but in this test harness we’re going to turn that off and simply have Wolverine “catch” the outgoing messages in our tests. See Wolverine’s test automation support documentation for more information about this.
  • More on this later, but Marten has a built in facility to establish baseline data sets that can be used in test automation to effectively rewind the database to an initial state with one command
  • The DisposeAsync() method is very important. If you want to make your integration tests be repeatable and run smoothly as you iterate, you need the tests to clean up after themselves and not leave locks on resources like ports or files that could stop the next test run from functioning correctly
  • Pay attention to the `OaktonEnvironment.AutoStartHost = true;` call; that’s 100% necessary if your application is using Oakton for command parsing. Sorry.
  • As will be inevitably necessary, I’m using Alba’s facility for stubbing out web authentication, which allows us to sidestep pesky authentication infrastructure in functional testing while also happily letting us pass along user claims as test inputs in individual tests
  • Bootstrapping the IHost for your application can be expensive, so I prefer to share that host across tests whenever possible, and I generally rely on having individual tests establish their inputs at the beginning of each test. See the xUnit.Net documentation on sharing fixtures between tests for more context about the xUnit mechanics (the collection definition that wires this up is sketched after this list).
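For completeness, sharing AppFixture across test classes takes one more xUnit artifact: the collection definition that the [Collection("integration")] attribute shown later refers to:

// Marker class tying the "integration" collection name to AppFixture so
// that every test class in the collection shares one bootstrapped host
[CollectionDefinition("integration")]
public class IntegrationCollection : ICollectionFixture<AppFixture>
{
}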

For the Marten baseline data, right now I’m just making sure there’s at least one valid Customer document that we’ll need later:

public class BaselineData : IInitialData
{
    public static Guid Customer1Id { get; } = Guid.NewGuid();
    
    public async Task Populate(IDocumentStore store, CancellationToken cancellation)
    {
        await using var session = store.LightweightSession();
        session.Store(new Customer
        {
            Id = Customer1Id,
            Region = "West Coast",
            Duration = new ContractDuration(DateOnly.FromDateTime(DateTime.Today.Subtract(100.Days())), DateOnly.FromDateTime(DateTime.Today.Add(100.Days())))
        });

        await session.SaveChangesAsync(cancellation);
    }
}

To simplify the usage a little bit, I like to have a base class for integration tests that I usually call IntegrationContext:

[Collection("integration")]
public abstract class IntegrationContext : IAsyncLifetime
{
    private readonly AppFixture _fixture;

    protected IntegrationContext(AppFixture fixture)
    {
        _fixture = fixture;
    }
    
    // more....

    public IAlbaHost Host => _fixture.Host;

    public IDocumentStore Store => _fixture.Host.Services.GetRequiredService<IDocumentStore>();

    async Task IAsyncLifetime.InitializeAsync()
    {
        // Using Marten, wipe out all data and reset the state
        // back to exactly what we described in BaselineData
        await Store.Advanced.ResetAllData();
    }

    // This is required because of the IAsyncLifetime 
    // interface. Note that I do *not* tear down database
    // state after the test. That's purposeful
    public Task DisposeAsync()
    {
        return Task.CompletedTask;
    }

    // This is just delegating to Alba to run HTTP requests
    // end to end
    public async Task<IScenarioResult> Scenario(Action<Scenario> configure)
    {
        return await Host.Scenario(configure);
    }

    // This method allows us to make HTTP calls into our system
    // in memory with Alba, but do so within Wolverine's test support
    // for message tracking to both record outgoing messages and to ensure
    // that any cascaded work spawned by the initial command is completed
    // before passing control back to the calling test
    protected async Task<(ITrackedSession, IScenarioResult)> TrackedHttpCall(Action<Scenario> configuration)
    {
        IScenarioResult result = null;

        // The outer part is tying into Wolverine's test support
        // to "wait" for all detected message activity to complete
        var tracked = await Host.ExecuteAndWaitAsync(async () =>
        {
            // The inner part here is actually making an HTTP request
            // to the system under test with Alba
            result = await Host.Scenario(configuration);
        });

        return (tracked, result);
    }
}

The first thing I want to draw your attention to is the call to await Store.Advanced.ResetAllData(); in the InitializeAsync() method, which will be called before each of our integration tests executes. In my approach, I strongly prefer to reset the state of the database before each test in order to start from a known system state. I’m also assuming that each test, if necessary, will add additional state to the system’s Marten database. This is philosophically what I’ve long called “Self-Contained Tests.” I also think it’s important to have the tests leave the database state alone after a test run, so that if you are running tests one at a time, you can use the leftover database state to help troubleshoot why a test might have failed.

Other folks will try to spin up a separate database (maybe with TestContainers) per test or even a completely separate IHost per test, but I think the cost of doing it that way is just too slow. I’d rather reset the system between tests and not incur the cost of recycling database containers and/or the system’s IHost. This comes at the cost of forcing your test suite to run tests serially, but I also think that xUnit.Net is not the best possible tool for parallel test runs, so I’m not sure you lose out on anything there.

And now for an actual test. We have an HTTP endpoint in our system that we built early on that can process a LogIncident command and create a new event stream for this new Incident with a single IncidentLogged event. I’ve skipped ahead a little bit and added a requirement that we capture a user id from an expected Claim on the ClaimsPrincipal for the current request, which you’ll see reflected in the test below:

public class log_incident : IntegrationContext
{
    public log_incident(AppFixture fixture) : base(fixture)
    {
    }

    [Fact]
    public async Task create_a_new_incident()
    {
        // We'll need a user
        var user = new User(Guid.NewGuid());
        
        // Log a new incident by calling the HTTP
        // endpoint in our system
        var initial = await Scenario(x =>
        {
            var contact = new Contact(ContactChannel.Email);
            x.Post.Json(new LogIncident(BaselineData.Customer1Id, contact, "It's broken")).ToUrl("/api/incidents");
            x.StatusCodeShouldBe(201);
            
            x.WithClaim(new Claim("user-id", user.Id.ToString()));
        });

        var incidentId = initial.ReadAsJson<NewIncidentResponse>().IncidentId;

        using var session = Store.LightweightSession();
        var events = await session.Events.FetchStreamAsync(incidentId);
        var logged = events.First().Data.ShouldBeOfType<IncidentLogged>();

        // This deserves more assertions, but you get the point...
        logged.CustomerId.ShouldBe(BaselineData.Customer1Id);
    }
}

Summary and What’s Next

The “Critter Stack” core team and our community care very deeply about effective testing, so we’ve invested from the very beginning in making integration testing as easy as possible with both Marten and Wolverine.

Alba is another little library from the JasperFx family that just makes it easier to write integration tests at the HTTP layer, and it’s perfect for integration testing of your web services. I definitely find it advantageous to be able to quickly bootstrap a web service project and run tests completely in memory on demand. That’s a much easier and quicker feedback cycle than trying to deploy the service and write tests that remotely interact with the web service through HTTP. And I shouldn’t even have to mention how absurdly slow it is in comparison to try to test the same web service functionality through the actual user interface with something like Selenium.

From the Marten side of things, PostgreSQL has a pretty small Docker image size, so it’s pretty painless to spin up on development boxes. Especially contrasted with situations where development teams share a centralized development database (shudder, hope not many folks still do that), having an isolated database for each developer that they can also tear down and rebuild at will certainly helps make it a lot easier to succeed with automated integration testing.

I think that document databases in general are a lot easier to deal with in automated testing than using a relational database with an ORM as the persistence tooling, as there’s much less friction in setting up database schemas or tearing down database state. Marten goes a step farther than most persistence tools by having built in APIs to tear down database state or reset to baseline data sets in between tests.

We’ll dig deeper into Wolverine’s integration testing support later in this series with message handler testing, testing handlers that in turn spawn other messages, and dealing with external messaging in tests.

I think the next post is just going to be a quick survey of “Marten as Document Database” before I get back to Wolverine’s HTTP endpoint model.

Building a Critter Stack Application: Command Line Tools with Oakton

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to help you succeed with your software development projects on a consulting basis whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton (this post)
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

Hey folks, I’m deviating a little bit from the planned order and taking a side trip while we’re finishing up a bug fix release to address some OpenAPI generation hiccups before I go on to Wolverine HTTP endpoints.

Admittedly, Wolverine and to a lesser extent Marten have a bit of a “magic” conventional approach. They also depend on external configuration items, external infrastructural tools like databases or message brokers that require their own configuration, and there’s always the possibility of assembly mismatches from users doing who knows what with their Nuget dependency tree.

To help unwind potential problems with diagnostic tools and to facilitate environment setup, the “Critter Stack” uses the Oakton library to integrate command line utilities right into your application.

Applying Oakton to Your Application

To get started, I’m going right back to the Program entry point of our incident tracking help desk application and adding just a couple lines of code. First, Oakton is a dependency of Wolverine, so there’s no additional dependency to add, but we’ll add a using statement:

using Oakton;

This is optional, but we’ll possibly want the extra diagnostics, so I’ll add this line of code near the top:

// This opts Oakton into trying to discover diagnostics 
// extensions in other assemblies. Various Critter Stack
// libraries expose extra diagnostics, so we want this
builder.Host.ApplyOaktonExtensions();

and finally, I’m going to drop down to the last line of Program and replace the typical app.Run(); code with Oakton’s command line parsing:

// This is important for Wolverine/Marten diagnostics 
// and environment management
return await app.RunOaktonCommands(args);

Do note that it’s important to return the exit code of the command line runner up above. If you choose to use Oakton commands in a build script, returning a non-zero exit code signals the caller that the command failed.

Command Line Mechanics

Next, I’m going to open a command prompt to the root directory of the HelpDesk.Api project, and use this to get a preview of the command line options we now have:

dotnet run -- help

That should render some help text like this:

  Alias           Description                                                                                                             
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  check-env       Execute all environment checks against the application                                                                  
  codegen         Utilities for working with JasperFx.CodeGeneration and JasperFx.RuntimeCompiler                                         
  db-apply        Applies all outstanding changes to the database(s) based on the current configuration                                   
  db-assert       Assert that the existing database(s) matches the current configuration                                                  
  db-dump         Dumps the entire DDL for the configured Marten database                                                                 
  db-patch        Evaluates the current configuration against the database and writes a patch and drop file if there are any differences  
  describe        Writes out a description of your running application to either the console or a file                                    
  help            List all the available commands                                                                                         
  marten-apply    Applies all outstanding changes to the database based on the current configuration                                      
  marten-assert   Assert that the existing database matches the current Marten configuration                                              
  marten-dump     Dumps the entire DDL for the configured Marten database                                                                 
  marten-patch    Evaluates the current configuration against the database and writes a patch and drop file if there are any differences  
  projections     Marten's asynchronous projection and projection rebuilds                                                                
  resources       Check, setup, or teardown stateful resources of this system                                                             
  run             Start and run this .Net application                                                                                     
  storage         Administer the Wolverine message storage                                                                                

So that’s a lot, but let’s just start by explaining the basics of the command line for .NET applications. You can pass arguments and flags both to the dotnet tool itself and to the application’s Program.Main(params string[] args) entry point. The key thing to know is that dotnet arguments and flags are segregated from the application’s arguments and flags by a double dash “--” separator. So for example, the command dotnet run --framework net8.0 -- codegen write sends the framework flag to dotnet run, and the codegen write arguments to the application itself.

Stateful Resource Setup

Skipping a little bit to the end state of our help desk API project, we’ll have dependencies on:

  • Marten schema objects in the PostgreSQL database
  • Wolverine schema objects in PostgreSQL database (for the transactional inbox/outbox we’ll introduce later in this series)
  • Rabbit MQ exchanges for Wolverine to broadcast to later

One of the guiding philosophies of the Critter Stack is to minimize the “Time to Login Screen” (hat tip to Chad Myers) for your codebase. What this means is that we really want a new developer on our system (or a developer coming back after a long, well deserved vacation) to be able to do a clean clone of our codebase and very quickly run the application and any integration tests end to end. To that end, Oakton exposes its “Stateful Resource” model as an adapter for tools like Marten and Wolverine to set up their resources to match their configuration.

Pretend just for a minute that you have all the necessary rights and permissions to configure database schemas and Rabbit MQ exchanges, queues, and bindings on whatever your Rabbit MQ broker is for development. Assuming that, you can have your copy of the help desk API completely up and ready to run through these steps at the command prompt starting at wherever you want the code to be:

git clone https://github.com/JasperFx/CritterStackHelpDesk.git
cd CritterStackHelpDesk
docker compose up -d
cd HelpDesk.Api
dotnet run -- resources setup

At the end of those calls, you should see this output:

The dotnet run -- resources setup command is able to run the Marten database migrations for its event store and any document types it knows about upfront, build the Wolverine envelope storage tables we’ll configure later, and create the known Rabbit MQ exchange that we’ll use for broadcasting integration events later.

The resources command has other options as shown below from dotnet run -- help resources:

You may need to pause a little bit between the call to docker compose and dotnet run to let Docker catch up!

Environment Checks

Years ago I worked on an early .NET system that still had a lot of COM dependencies that needed to be correctly registered outside of our application and used a shared database that was indifferently maintained, as was common way back then. Needless to say, our deployments were chaotic, as we never knew what shape the server was in when we deployed. We finally beat our deployment woes by adding “environment tests” to our deployment scripts that would test the environment dependencies (is the COM server there? can we connect to the database? is the expected XML file there?) and fail fast with descriptive messages when the server was in a crap state as we tried to deploy.

To that end, Oakton has its environment check model that both Marten and Wolverine utilize. In our help desk application, we already have a Marten dependency, so we know the application will not function correctly if the database is unavailable, or the connection string in the configuration just happens to be wrong, or there’s a security setup issue... you get the picture.
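You can also register your own checks right in the application’s service configuration. A minimal sketch, assuming Oakton’s CheckEnvironment() extension method has this shape; the body is just a trivial query to prove the database connection works:

builder.Services.CheckEnvironment("Marten database is reachable", async (services, token) =>
{
    var store = services.GetRequiredService<IDocumentStore>();
    await using var session = store.QuerySession();

    // Any cheap query will do; we only care that the connection succeeds
    await session.Query<Customer>().AnyAsync(token);
});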

So, picking up our application with every bit of infrastructure purposely turned off, I’ll run this command:

dotnet run -- check-env

and the result is a huge blob of exception text and the command will fail — allowing you to abort a build script that might be delegating to this command:

Next, I’m going to turn on all the infrastructure (and set up everything to match our application’s configuration with the second command) with a quick call to:

docker compose up -d
dotnet run -- resources setup

Now, I can run the environment checks again and get a green bill of health for our system:

Oakton’s environment check model predates the new .NET IHealthCheck model. Oakton will also support that model soon, and you can track that work here.

“Describe” Our System

Oakton’s describe command can give you some insights into your application, and tools like Marten or Wolverine can expose extensions to this model for further output. By typing this command at the project root:

dotnet run -- describe

We’ll get some basic information about our system like this preview of the configuration:

The loaded assemblies, because you will occasionally get burned by unexpected Nuget behavior pulling in the wrong versions:

And sigh, because folks have frequently had some trouble understanding how Wolverine does its automatic handler discovery, we have this preview:

And quite a bit more information including:

  • Wolverine messaging endpoints
  • Wolverine’s local queues
  • Wolverine message routing
  • Wolverine exception handling policy configuration

Summary and What’s Next

Oakton is yet another command line parsing tool in .NET, of which there are at least dozens that are perfectly competent. What makes Oakton special though is its ability to add command line tools directly to the entry point of your application, where you already have all your infrastructure configuration available. The main point I hope you take away from this is that the command line tooling in the “Critter Stack” can help your team develop faster through the diagnostics and environment management features.

The “Critter Stack” is heavily utilizing Oakton’s extensibility model for:

  1. The static description of the application configuration that may frequently be helpful for troubleshooting or just understanding your system
  2. Stateful resource management of development dependencies like databases and message brokers. So far this is supported for Marten, both PostgreSQL and Sql Server dependencies of Wolverine, Rabbit MQ, Kafka, Azure Service Bus, and AWS SQS
  3. Environment checks to test out the validity of your system and its ability to connect to external resources during deployment or during development
  4. Any other utility you care to add to your system, like resetting a baseline database state, adding users, or anything else you care to do through Oakton’s command extensibility (see the sketch after this list)
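As a taste of that fourth point, a custom command is just a class that Oakton discovers by naming convention (ResetCommand becomes dotnet run -- reset). This is a hedged sketch; the command itself is hypothetical, but OaktonCommand<T> and NetCoreInput are the real building blocks:

[Description("Reset the database back to its baseline state")]
public class ResetCommand : OaktonCommand<NetCoreInput>
{
    public override bool Execute(NetCoreInput input)
    {
        // Builds the application's IHost using the real Program configuration
        using var host = input.BuildHost();

        var store = host.Services.GetRequiredService<IDocumentStore>();
        store.Advanced.ResetAllData().GetAwaiter().GetResult();

        // Returning true signals a zero exit code back to the shell
        return true;
    }
}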

As for what’s next, you’ll have to let me see when some bug fix releases get in place before I promise what exactly is going to be next in this series. I expect this series to at least go to 15-20 entries as I introduce more Wolverine scenarios, messaging, and quite a bit about automated testing. And also, I take requests!

If you’re curious, the JasperFx GitHub organization was originally conceived of as the reboot of the previous FubuMVC ecosystem, with the main project being “Jasper” (named for my hometown) and the smaller ancillary tools, ripped out of the flotsam and jetsam of StructureMap and FubuMVC, arranged around it. The smaller tools like Oakton, Alba, and Lamar are named after other small towns close to the titular Jasper, MO. As Marten took off and became by far and away the most important tool in our stable, we adopted the “Critter Stack” naming theme as we pulled out Weasel into its own library and completely rebooted and renamed “Jasper” as Wolverine to be a natural complement to Marten.

And lastly, I’m not sure that Oakton, MO will even show up on maps, because it’s effectively a Methodist church, a cemetery, the ruins of the general store, and a couple of farmhouses at a crossroads. In Missouri at least, towns cease to exist when they lose their post office. The area I grew up in is littered with former towns that fizzled out as the farm economy changed and folks moved to bigger towns.

Building a Critter Stack Application: Wolverine’s Aggregate Handler Workflow FTW!

TL;DR: The full critter stack combo can make CQRS command handler code much simpler and easier to test than any other framework on the planet. Fight me.


The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW! (this post)
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

This series has been written partially in response to some constructive criticism that my writings on the “Critter Stack” suffered from introducing too many libraries or concepts all at once. As a reaction to that, this series is trying to only introduce one new capability or library at a time — which brought on some constructive criticism from someone else that the series isn’t making it obvious why anyone should care about the “Critter Stack” in the first place. So especially for Rob Conery, I give you:

Last time out we talked about using Marten’s facilities for optimistic concurrency or exclusive locking to protect our system from inconsistencies caused by concurrent commands being processed against the same incident event stream. In the course of that post, I showed the command handler for the CategoriseIncident command below, which I purposely wrote in a long hand form, as explicitly as possible, to avoid introducing too many new concepts at once:

public static class LongHandCategoriseIncidentHandler
{
    // Stand in for a real user id; don't pay any attention to this please!
    public static readonly Guid SystemId = Guid.NewGuid();

    public static async Task Handle(
        CategoriseIncident command, 
        IDocumentSession session, 
        CancellationToken cancellationToken)
    {
        // Fetch the current state of the incident event stream,
        // with Marten's concurrency protection
        var stream = await session
            .Events
            .FetchForWriting<IncidentDetails>(command.Id, cancellationToken);

        // Don't worry, we're going to clean this up later
        if (stream.Aggregate == null)
        {
            throw new ArgumentOutOfRangeException(nameof(command), "Unknown incident id " + command.Id);
        }
        
        // We need to validate whether this command actually 
        // should do anything
        if (stream.Aggregate.Category != command.Category)
        {
            var categorised = new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };

            stream.AppendOne(categorised);
            
            await session.SaveChangesAsync(cancellationToken);
        }
    }
}

Hopefully that code is relatively easy to follow, but it’s still pretty busy, and the mixture of business logic and infrastructure fiddling won’t be helpful when the code inevitably gets more complicated as the requirements grow. As we’ll learn later in this series, both Marten and Wolverine have built in tooling for automated integration testing, and do that much more effectively than just about any other tool out there. All the same though, you just don’t want to be testing business logic by trudging through integration tests if you don’t have to (see my only rule of testing).

So let’s definitely look at how Wolverine plays nicely with Marten using its aggregate handler workflow recipe to simplify our handler for easier unit testing and just flat out cleaner code.

First off, I’m going to add the WolverineFx.Marten NuGet to our application:

dotnet add package WolverineFx.Marten

Next, break into our application’s Program file and add one call to the Marten configuration to incorporate some Wolverine goodness into Marten in our application:

builder.Services.AddMarten(opts =>
{
    // Existing Marten configuration...
})
    // This is a mild optimization
    .UseLightweightSessions()

    // Use this directive to add Wolverine transactional middleware for Marten
    // and the Wolverine transactional outbox support as well
    .IntegrateWithWolverine();

And now, let’s rewrite our CategoriseIncident command handler with a completely equivalent implementation using the “aggregate handler workflow” recipe:

public static class CategoriseIncidentHandler
{
    // Kinda faked, don't pay any attention to this please!
    public static readonly Guid SystemId = Guid.Parse("4773f679-dcf2-4f99-bc2d-ce196815dd29");

    // This Wolverine handler appends an IncidentCategorised event to an event stream
    // for the related IncidentDetails aggregate referred to by the CategoriseIncident.IncidentId
    // value from the command
    [AggregateHandler]
    public static IEnumerable<object> Handle(CategoriseIncident command, IncidentDetails existing)
    {
        if (existing.Category != command.Category)
        {
            // This event will be appended to the incident
            // stream after this method is called
            yield return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }
    }
}

In the handler method above, the presence of the [AggregateHandler] attribute directs Wolverine to wrap some middleware around the execution of our Handle() method that:

  • “Knows” the aggregate type in question is the second argument to the handler method, so in this case, IncidentDetails
  • Scans the CategoriseIncident type looking for a property that identifies the IncidentDetails stream (which makes it utilize the Id property in this case, but the docs spell this convention out in detail; a sketch of the command type follows this list)
  • Does all the work to delegate and coordinate work in the logical command flow between the Marten infrastructure and our little bitty Handle() method
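
For reference, the command type itself is just a simple DTO. Here’s a sketch of its likely shape, inferred from how it’s used in this post, so treat the exact members as illustrative:

// Inferred, illustrative shape of the command. The Id property is
// what the [AggregateHandler] convention matches to the IncidentDetails
// event stream, and Version feeds Marten's optimistic concurrency check
public class CategoriseIncident
{
    public Guid Id { get; set; }
    public IncidentCategory Category { get; set; }
    public int Version { get; set; }
}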

To visualize this, Wolverine is generating its own internal message handler for CategoriseIncident that has this simplified workflow:

And as a preview to a topic I’ll dive into in much more detail in a later post, here’s part of the (admittedly ugly in the way that only auto-generated code can be) C# code that Wolverine generates around our handler method:

public override async System.Threading.Tasks.Task HandleAsync(Wolverine.Runtime.MessageContext context, System.Threading.CancellationToken cancellation)
{
    // The actual message body
    var categoriseIncident = (Helpdesk.Api.CategoriseIncident)context.Envelope.Message;

    await using var documentSession = _outboxedSessionFactory.OpenSession(context);
    var eventStore = documentSession.Events;
    
    // Loading Marten aggregate
    var eventStream = await eventStore.FetchForWriting<Helpdesk.Api.IncidentDetails>(categoriseIncident.Id, categoriseIncident.Version, cancellation).ConfigureAwait(false);

    
    // The actual message execution
    var outgoing1 = Helpdesk.Api.CategoriseIncidentHandler.Handle(categoriseIncident, eventStream.Aggregate);

    if (outgoing1 != null)
    {
        
        // Capturing any possible events returned from the command handlers
        eventStream.AppendMany(outgoing1);

    }

    await documentSession.SaveChangesAsync(cancellation).ConfigureAwait(false);
}

And lastly, we’ve now reduced our CategoriseIncident command handler to the point where the code that we are actually having to write is a pure function, meaning that it’s a simple matter of inputs and outputs with no dependency on any kind of stateful infrastructure. You absolutely care about isolating any kind of business logic into pure functions because that code becomes much easier to unit test.

And to prove that last statement, here’s what the unit tests for our Handle(CategoriseIncident, IncidentDetails) could look like using xUnit.Net and Shouldly:

public class CategoriseIncidentTests
{
    [Fact]
    public void raise_categorized_event_if_changed()
    {
        // Arrange
        var command = new CategoriseIncident
        {
            Category = IncidentCategory.Database
        };

        var details = new IncidentDetails(
            Guid.NewGuid(), 
            Guid.NewGuid(), 
            IncidentStatus.Closed, 
            new IncidentNote[0],
            IncidentCategory.Hardware);

        // Act
        var events = CategoriseIncidentHandler.Handle(command, details);

        // Assert
        var categorised = events.Single().ShouldBeOfType<IncidentCategorised>();
        categorised
            .Category.ShouldBe(IncidentCategory.Database);
    }

    [Fact]
    public void do_not_raise_event_if_the_category_would_not_change()
    {
        // Arrange
        var command = new CategoriseIncident
        {
            Category = IncidentCategory.Database
        };

        var details = new IncidentDetails(Guid.NewGuid(), Guid.NewGuid(), IncidentStatus.Closed, new IncidentNote[0],
            IncidentCategory.Database);

        // Act
        var events = CategoriseIncidentHandler.Handle(command, details);
        
        // Assert no events were appended
        events.ShouldBeEmpty();
    }
}

In the unit test code above, we were able to exercise the decision about what events (if any) should be appended to the incident event stream without any dependency whatsoever on infrastructure. The easiest kind of unit test to write, and to read later, is one with a clear relationship between the test inputs and outputs and minimal noise code for setting up state — and that’s exactly what we have up above. No mock object setup, no need to set up database state, nothing. Just, “here’s the existing state and this command, now tell me what events should be appended.”

Summary and What’s Next

The full Critter Stack “aggregate handler workflow” recipe leads to very low ceremony code for implementing command handlers within a CQRS style architecture. This recipe also leads to a code structure where your business logic is relatively easy to test with fast running unit tests. And we arrived at that point without having to watch umpteen hours of “Clean Architecture” YouTube snake oil videos, introduce a ton of “Ports and Adapters” style abstractions to clutter up our code, or scatter the code for the single CategoriseIncident message handler across 3-4 “Onion Architecture” projects within a massive .NET solution.

This approach was heavily inspired by the Decider pattern that originated for Event Sourcing within the F# community. But whereas the F# approach uses language tricks (and I don’t mean that pejoratively here), Wolverine is getting to a lower ceremony approach by doing that runtime code generation around our code.
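
To make that comparison concrete, here’s a rough rendering of the Decider shape in C# terms. This is a sketch of the pattern itself, not any actual Wolverine or Marten API:

// The essence of the "Decider": given an incoming command and the
// current state, decide what new events (if any) should be recorded.
// Our pure Handle(CategoriseIncident, IncidentDetails) method above
// is exactly this shape.
public delegate IEnumerable<object> Decide<in TCommand, in TState>(
    TCommand command, TState state);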

If you look back to the sequence diagram up above that explains the control flow, Wolverine is purposely using Jim Shore’s idea of the “A-Frame Architecture” (it’s not really an architectural style despite the name, so don’t even try an apples to apples comparison between it and something more prescriptive like the Clean Architecture). In this approach, Wolverine decouples the Marten infrastructure from the CategoriseIncident handler that “decides” what to do next by mediating between the two. The “A-Frame” name comes from visualizing that mediation, with Wolverine calling into both the infrastructure services like Marten and the business logic, so the domain logic never has to touch the infrastructure directly:

Now, there’s a lot more stuff that our command handlers may very well need to implement, including:

  • Message input validation
  • Instrumentation and observability
  • Error handling and resiliency protections ’cause it’s an imperfect world!
  • Publishing the new events to some other internal message handler that will take additional actions after our first command has “decided” what to do next
  • Publishing the new events as some kind of external message to another process
  • Enrolling in a transactional outbox of some sort or another to keep the system in a consistent state — and you really need to care about this capability!!!

And oh, yeah, do all that with minimal code ceremony, be testable with unit tests as much as possible, and be feasible to do automated integration testing when we have to.

We’ll get to all the items in that list above in this series, but I think in the next post I’d like to introduce Wolverine’s HTTP handler recipe and build out more aggregate command handlers, but this time with an HTTP endpoint. Until next time…