Using Explicit Code for Marten Projections

A very important part of any event sourcing architecture is actually being able to interpret the raw events representing the current (or past) state of the system. That’s where Marten’s “Projection” subsystem comes into play as a way to roll a stream of events up into a stateful object representing the whole state of the stream.

Most of the examples you’ll find of Marten projections will show you one of the aggregation recipes that heavily lean on conventional method signatures with Marten doing some “magic” around those method names, like this simple “self-aggregating” document type:

public record TodoCreated(Guid TodoId, string Description);
public record TodoUpdated(Guid TodoId, string Description);

public class Todo
{
    public Guid Id { get; set; }

    public string Description { get; set; } = null!;

    public static Todo Create(TodoCreated @event) => new()
    {
        Id = @event.TodoId,
        Description = @event.Description,
    };

    public void Apply(TodoUpdated @event)
    {
        Description = @event.Description;
    }
}

Notice the Apply() and Create() methods in the Todo class above. Those are following a naming convention that Marten uses to “know” how to update a Todo document with new information from events.

I (and by “I” I’m clearly taking responsibility for any problems with this approach) went down this path with Marten V4 as a way to make some performance optimizations at runtime. This approach works well enough as long as you stay on the well-lit path (create, update, maybe delete the aggregate document), but it can break down when folks get “fancy” with things like soft deletes. Or all too frequently, this approach can confuse users when the problem domain gets more complex.
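
To make the soft delete point a little more concrete, Marten’s aggregation conventions also recognize a ShouldDelete() method name for signaling that an aggregate should go away, which is one more bit of “magic” to keep in your head. Here’s a hedged sketch reusing the Todo type from above (the TodoDiscarded event is made up for illustration):

```csharp
using System;

// Hypothetical event type, just for this sketch
public record TodoDiscarded(Guid TodoId);

public class Todo
{
    public Guid Id { get; set; }
    public string Description { get; set; } = null!;

    // Yet another conventional method name that Marten recognizes at
    // runtime. Returning true tells Marten the aggregate document should
    // be deleted, and reasoning about how this interacts with soft delete
    // settings is exactly where the conventions start to feel opaque
    public bool ShouldDelete(TodoDiscarded @event) => true;
}
```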

There’s an escape hatch though. We can toss aside all the conventional magic and the corresponding runtime magic that Marten does for these projections and just write some explicit code.

Using Marten’s “CustomProjection” recipe — which is just a way to use explicit code to do aggregations of event data — we can write the same functionality as above with this equivalent:

public record TodoCreated(Guid TodoId, string Description);
public record TodoUpdated(Guid TodoId, string Description);

public class Todo
{
    public Guid Id { get; set; }
    public string Description { get; set; } = null!;
}

// Need to inherit from CustomProjection 
public class TodoProjection: CustomProjection<Todo, Guid>
{
    public TodoProjection()
    {
        // This is kinda meh to me, but this tells
        // Marten how to do the grouping of events to
        // aggregated Todo documents by the stream id
        Slicer = new ByStreamId<Todo>();


        // The code below is only valuable as an optimization
        // if this projection is running in Marten's async
        // daemon to help the daemon filter candidate events faster
        IncludeType<TodoCreated>();
        IncludeType<TodoUpdated>();
    }

    public override ValueTask ApplyChangesAsync(DocumentSessionBase session, EventSlice<Todo, Guid> slice, CancellationToken cancellation,
        ProjectionLifecycle lifecycle = ProjectionLifecycle.Inline)
    {
        var aggregate = slice.Aggregate;
        foreach (var e in slice.AllData())
        {
            switch (e)
            {
                case TodoCreated created:
                    aggregate ??= new Todo { Id = slice.Id, Description = created.Description };
                    break;
                case TodoUpdated updated:
                    aggregate ??= new Todo { Id = slice.Id };
                    aggregate.Description = updated.Description;
                    break;
            }
        }
        
        // This is an "upsert", so no silly EF Core "is this new or an existing document?"
        // if/then logic here
        session.Store(aggregate);

        return ValueTask.CompletedTask;
    }
}

Putting aside the admitted clumsiness of the “slicing” junk, our projection code is just a switch statement. In hindsight, the newer C# switch expression syntax was just barely coming out when I designed the conventional approach. If I had it to do again, I think I would have focused harder on promoting the explicit logic and bypassed the whole conventions + runtime code generation thing for aggregations. Oh well.
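
Just for illustration, here’s a hedged sketch of what that explicit apply logic could look like as a single C# switch expression, if the Todo document were declared as a record so that `with` expressions work:

```csharp
using System;

public record TodoCreated(Guid TodoId, string Description);
public record TodoUpdated(Guid TodoId, string Description);

// Declared as a record here (unlike the class above) purely so the
// "with" expression below is available
public record Todo(Guid Id, string Description);

public static class TodoLogic
{
    // The whole aggregation step is one expression: given the current
    // state (possibly null) and the next event, return the new state
    public static Todo? Apply(Todo? todo, Guid streamId, object @event) => @event switch
    {
        TodoCreated created => new Todo(streamId, created.Description),
        TodoUpdated updated => (todo ?? new Todo(streamId, "")) with { Description = updated.Description },
        _ => todo
    };
}
```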

For right now though, just know that you’ve got an escape hatch with Marten projections to “just write some code” any time the conventional approach causes you the slightest bit of grief.

It’s Critter Stack “Release on Friday” Party!

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to help you succeed with your software development projects on a consulting basis whether you’re using any part of the Critter Stack or any other .NET server side tooling.

A lot of pull requests and bug fixes just happened to land today for both Marten and Wolverine. In order, we’ve got:

Marten 7.0.0 Beta 5

Marten 7.0.0 Beta 5 is actually quite a big release and a major step forward on the road to the final V7 release. Besides some bug fixes, I think the big highlights are:

  • Marten finally gets the long awaited “Partial Update” model that only depends on native PostgreSQL features! Huge addition from Babu. If you’re coming to Marten from MongoDB, or would only use Marten if it had the ability to modify documents without first having to load the whole thing, well now you can! No PLv8 extension necessary!
  • We pushed through a new low level execution model that’s more parsimonious about how long database connections are kept open, which should help applications using Marten scale to more concurrent transactions. This should also help folks using Marten in conjunction with Hot Chocolate, as IQuerySession can now be used in multiple threads in parallel.
  • Marten now uses Polly internally for retries on transient errors, and the “retry” functionality actually works now (it didn’t actually do anything useful before, as I shamefully refuse to make eye contact with you).
  • Several fixes around full text indexes that were blocking some folks

Wolverine 1.16.0

Wolverine 1.16.0 came out today with a couple additions and fixes related to MQTT or Rabbit MQ message publishing to topics. As an example, here’s some new functionality with Rabbit MQ message publishing:

You can specify publishing rules for messages by supplying the logic to determine the topic name from the message itself. Let’s say that we have an interface that several of our message types implement like so:

public interface ITenantMessage
{
    string TenantId { get; }
}

Let’s say that we want any message implementing that interface to be published to the topic for that message’s TenantId. We can implement that rule like so:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine((context, opts) =>
    {
        opts.UseRabbitMq();

        // Publish any message that implements ITenantMessage to 
        // a Rabbit MQ "Topic" exchange named "tenant.messages"
        opts.PublishMessagesToRabbitMqExchange<ITenantMessage>("tenant.messages", m => $"{m.GetType().Name.ToLower()}/{m.TenantId}")

            // Configure how Wolverine sends all messages
            // through this exchange
            .BufferedInMemory();
    })
    .StartAsync();

Wolverine 2.0 Alpha 1

Knock on wood, if the GitHub Actions & NuGet gods all agree, there will be a Wolverine 2.0 Alpha 1 set of NuGets available that’s just Wolverine 1.16, but targeting the very latest Marten 7 betas. Somebody asks me just about every single day when that’s going to be ready.

Enjoy! And don’t tell me about any problems with these releases until Monday!

Summary

I had a very off week as I struggled with a cold, a busy personal life, and way more Zoom meetings than I normally have. All the same, getting to spit out these three releases today makes me feel like Bill Murray.

And again, new bug reports can wait for Monday!

Building a Critter Stack Application: Resiliency

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency (this post)

Sometimes, things go wrong in production. For any number of reasons. But all the same, we want to:

  • Protect the integrity of our system state
  • Not lose any ongoing work
  • Try not to require manual interventions to put things right in the system
  • Keep the system from going down even when something is overloaded

Fortunately, Wolverine comes with quite a few facilities for adding adaptive and selective resiliency to our systems — especially when doing asynchronous processing.

First off, we’re using Marten in our incident tracking, help desk system to read and persist data to a PostgreSQL database. When handling messages, Wolverine could easily encounter transient (read: random and not necessarily systematic) exceptions from network hiccups or timeouts if the database happens to be too busy at that moment. Let’s tell Wolverine to apply a little exponential backoff (close enough for government work) and retry a command that hits one of these transient database errors a limited number of times, with this code inside the call to UseWolverine() in our Program file:

    // Let's build in some durability for transient errors
    opts.OnException<NpgsqlException>().Or<MartenCommandException>()
        .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds());

The retries may happily catch the system at a later time when it’s not as busy, so the transient error doesn’t reoccur and the message can succeed. If we get successive failures, we wait longer before retries. This retry policy effectively throttles a Wolverine system and may give a distressed subsystem within your architecture (in this case the PostgreSQL database) a chance to recover.

Other times you may have a handler encounter an exception that tells us the message in question is invalid somehow, and could never be handled. There’s absolutely no reason to retry that message, so instead, let’s tell Wolverine to instead discard that message immediately (and not even bother to move it to a dead letter queue):

    // Log the bad message sure, but otherwise throw away this message because
    // it can never be processed
    opts.OnException<InvalidInputThatCouldNeverBeProcessedException>()
        .Discard();

I’ve done a few integration projects now where some kind of downstream web service was prone to being completely down. Let’s pretend that we’re only calling that web service through a message handler (my preference whenever possible for exactly this failure scenario) and can tell from an exception that the web service is absolutely unavailable and no other messages could possibly go through until that service is fixed.

Wolverine can do that as well, like so:

    // Shut down the listener for whatever queue experienced this exception
    // for 5 minutes, and put the message back on the queue
    opts.OnException<MakeBelieveSubsystemIsDownException>()
        .PauseThenRequeue(5.Minutes());

And finally, Wolverine also has circuit breaker functionality to shut down processing on a queue if there are too many errors within a certain time. This feature certainly applies to messages coming in from external brokers like Rabbit MQ or Azure Service Bus or AWS SQS, but it can also apply to database backed local queues. For the help desk system, I’m going to add a circuit breaker to the local queue for processing the TryAssignPriority command to pause all local processing on the current node if a certain threshold of message processing is failing:

    opts.LocalQueueFor<TryAssignPriority>()
        // By default, local queues allow for parallel processing with a maximum
        // parallel count equal to the number of processors on the executing
        // machine, but you can override the queue to be sequential and single file
        .Sequential()

        // Or add more to the maximum parallel count!
        .MaximumParallelMessages(10)

        // Pause processing on this local queue for 1 minute if there's
        // more than 20% failures for a period of 2 minutes
        .CircuitBreaker(cb =>
        {
            cb.PauseTime = 1.Minutes();
            cb.SamplingPeriod = 2.Minutes();
            cb.FailurePercentageThreshold = 20;
            
            // Definitely worry about this type of exception
            cb.Include<TimeoutException>();
            
            // Don't worry about this type of exception
            cb.Exclude<InvalidInputThatCouldNeverBeProcessedException>();
        });

And don’t worry, Wolverine won’t lose any additional messages published to that queue. They’ll just sit in the database until the current node picks back up on this local queue or another running node is able to steal the work from the database and continue.

Summary and What’s Next

I only gave some highlights here, but Wolverine has some more capabilities for error handling. I think these policies are probably something you adapt over time as you learn more about how your system and its dependencies behave. Throwing more descriptive exceptions from your own code is definitely beneficial as well for these kinds of error handling policies.

I’m almost done with this series. I think the next post or two — and it won’t come until next week — will be all about logging, auditing, metrics, and Open Telemetry integration.

Building a Critter Stack Application: The “Stateful Resource” Model

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model (this post)
  19. Resiliency

I’ve personally spent quite a bit of time helping teams and organizations deal with older, legacy codebases where it might easily take a couple days of working painstakingly through the instructions in a large Wiki page of some sort in order to make their codebase work on a local development environment. That’s indicative of a high friction environment, and definitely not what we’d ideally like to have for our own teams.

Thinking about the external dependencies of our incident tracking, help desk API, we’ve utilized:

  1. Marten for persistence, which requires our system to need PostgreSQL database schema objects
  2. Wolverine’s PostgreSQL-backed transactional outbox support, which also requires its own set of PostgreSQL database schema objects
  3. Rabbit MQ for asynchronous messaging, which requires queues, exchanges, and bindings to be set up in our message broker for the application to work

That’s a bit of stuff that needs to be configured within the Rabbit MQ or PostgreSQL infrastructure around our service in order to run our integration tests or the application itself for local testing.

Instead of the error-prone, painstaking manual setup laboriously laid out in a Wiki page you can never quite find, let’s leverage the Critter Stack’s “Stateful Resource” model to quickly set our system up ready to run in development.

Building on our existing application configuration, I’m going to add a couple more lines of code to our system’s Program file:

// Depending on your DevOps setup and policies,
// you may or may not actually want this enabled
// in production installations, but some folks do
if (builder.Environment.IsDevelopment())
{
    // This will direct our application to set up
    // all known "stateful resources" at application bootstrapping
    // time
    builder.Services.AddResourceSetupOnStartup();
}

And that’s that. If you’re using the integration test harness like we did in an earlier post, or just starting up the application normally, the application will check for the existence of all of the following, and try to build out anything that’s missing:

  • The known Marten document tables and all the database objects to support Marten’s event sourcing
  • The necessary tables and functions for Wolverine’s transactional inbox, outbox, and scheduled message tables (I’ll add a post later on those)
  • The known Rabbit MQ exchanges, queues, and bindings

Your application will have to have administrative privileges over all the resources for any of this to work of course, but you would have that at development time at least.

With this capability in place, the procedure for a new developer getting started with our codebase is to:

  1. Do a clean git clone of our codebase onto their local box
  2. Run docker compose up to start up all the necessary infrastructure they need to run the system or the system’s integration tests locally
  3. Just run the integration tests or start the system and go!

Easy-peasy.
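
The linked sample repository has the real docker-compose.yaml, but as a hedged sketch, the compose file for this kind of stack is typically nothing more than a PostgreSQL container and a Rabbit MQ container (the image tags and credentials below are assumptions, not copied from the sample):

```yaml
version: '3'
services:
  postgresql:
    image: postgres:15
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: postgres

  rabbitmq:
    # The "management" tag also gives you the admin web UI on port 15672
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
      - "15672:15672"
```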

But wait, there’s more! Assuming you have Oakton set up as your command line like we did in an earlier post, you’ve got some command line tooling that can help as well.

If you omit the call to builder.Services.AddResourceSetupOnStartup();, you could still go to the command line and use this command just once to set everything up:

dotnet run -- resources setup

To check on the status of any or all of the resources, you can use:

dotnet run -- resources check

which, for the HelpDesk.API, gives you this:

If you want to tear down all the existing data — and at least attempt to purge any Rabbit MQ queues of all messages — you can use:

dotnet run -- resources clear

There are a few other options you can read about in the Oakton documentation for the Stateful Resource model, but for right now, type dotnet run -- help resources and you can see Oakton’s built in help for the resources command that runs down the supported usage:

Summary and What’s Next

The Critter Stack is trying really hard to create a productive, low friction development ecosystem for your projects. One of the ways it tries to make that happen is by being able to set up infrastructural dependencies automatically at runtime so a developer can just “clone n’ go” without the excruciating pain of the multi-page Wiki getting started instructions so painfully common in legacy codebases.

This stateful resource model is also supported for the Kafka transport (which is also local development friendly) and the cloud native Azure Service Bus and AWS SQS transports (Wolverine + AWS SQS does work with LocalStack just fine). In the cloud native cases, the Wolverine application’s credentials will have to have the necessary rights to create queues, topics, and subscriptions. For the cloud native transports, there is also an option to prefix all the names of the queues, topics, and subscriptions to create an isolated environment per developer for a better local development story even when relying on cloud native technologies.

I think I’ll add another post to this series where I switch the messaging to one of the cloud native approaches.

As for what’s next in this increasingly long series, I think we still have logging, open telemetry and metrics, resiliency, and maybe a post on Wolverine’s middleware support. That list is somewhat driven by recency bias around questions I’ve been asked here or there about Wolverine.

Building a Critter Stack Application: Messaging with Rabbit MQ

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ (this post)
  18. The “Stateful Resource” Model
  19. Resiliency

To this point in the series, everything has happened within the context of our single HelpDesk.API project. We’ve utilized HTTP endpoints, Wolverine as a mediator, and sent messages through Wolverine’s local queueing features. Today, let’s add Rabbit MQ to the mix as a super, local development-friendly option for distributed processing and just barely dip our toes into Wolverine’s asynchronous messaging support.

As a reminder, here’s a diagram of our incident tracking, help desk system:

In our case, we’re going to create a separate service to handle outgoing emails and SMS messaging that I’ve inevitably named the “NotificationService.” For the communication between the Help Desk API and the Notification Service, we’re going to use a Rabbit MQ queue to send RingAllTheAlarms messages from our Help Desk API to the downstream Notification Service, which will formulate an email body or SMS message or who knows what according to our agents’ personal preferences.

I’ve heard a couple of variations over the years on Zawinski’s Law, stating that every system will eventually grow until it can read mail (or contain a half-assed implementation of LISP). My corollary to that is that every enterprise system will inevitably grow to include a separate service for sending notifications to users.

Earlier, we had built a message handler that potentially sent a RingAllTheAlarms message if an incident was assigned a critical priority:

    [AggregateHandler]
    public static (Events, OutgoingMessages) Handle(
        TryAssignPriority command, 
        IncidentDetails details,
        Customer customer)
    {
        var events = new Events();
        var messages = new OutgoingMessages();

        if (details.Category.HasValue && customer.Priorities.TryGetValue(details.Category.Value, out var priority))
        {
            if (details.Priority != priority)
            {
                events.Add(new IncidentPrioritised(priority, command.UserId));

                if (priority == IncidentPriority.Critical)
                {
                    messages.Add(new RingAllTheAlarms(command.IncidentId));
                }
            }
        }

        return (events, messages);
    }

When our system tries to publish that RingAllTheAlarms message, Wolverine tries to route that message to a subscribing endpoint (local queues are also considered to be endpoints by Wolverine), and publishes the message to each subscriber — or does nothing if there are no known subscribers for that message type.

Let’s first create our new Notification Service from scratch, with a quick call to:

dotnet new console

After that, I admittedly took a short cut and just added a project reference to our Help Desk API project because it’s late at night as I write this and I’m lazy by nature. In real usage you’d probably at least start with a shared library just to define the message types that are exchanged between two or more processes.

To be clear, Wolverine does not require you to use shared types for the message bodies between Wolverine applications, but that frequently turns out to be the easiest mechanism to get started and it can easily be sufficient in many situations.
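
Put another way, the only thing the two processes really need to share is the message contract itself. A minimal sketch of such a shared library (the single-property shape is inferred from how RingAllTheAlarms is constructed and consumed in this post, and the namespace is an assumption):

```csharp
using System;

namespace Helpdesk.Api.Messages
{
    // The entire shared surface area between the two services could be
    // just this record. Wolverine matches on the message type for
    // routing and handler discovery
    public record RingAllTheAlarms(Guid IncidentId);
}
```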

Back to our new Notification Service. I’m going to add a reference to Wolverine’s Rabbit MQ transport library (Wolverine.RabbitMQ) with:

dotnet add package WolverineFx.RabbitMQ

With that in place, the entire (faked up) Notification Service code is this:

using Helpdesk.Api;
using Microsoft.Extensions.Hosting;
using Oakton;
using Wolverine;
using Wolverine.RabbitMQ;

return await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Connect to Rabbit MQ
        // The default like this expects to connect to a Rabbit MQ
        // broker running in the localhost at the default Rabbit MQ
        // port
        opts.UseRabbitMq();

        // Tell Wolverine to listen for incoming messages
        // from a Rabbit MQ queue 
        opts.ListenToRabbitQueue("notifications");
    }).RunOaktonCommands(args);


// Just to see that there is a message handler for the RingAllTheAlarms
// message
public static class RingAllTheAlarmsHandler
{
    public static void Handle(RingAllTheAlarms message)
    {
        Console.WriteLine("I'm going to scream out an alert about incident " + message.IncidentId);
    }
}

Moving back to our Help Desk API project, I’m going to add a reference to the WolverineFx.RabbitMQ Nuget, and add this code to define the outgoing subscription for the RingAllTheAlarms message:

builder.Host.UseWolverine(opts =>
{
    // Other configuration...
    
    // Opt into the transactional inbox/outbox on all messaging
    // endpoints
    opts.Policies.UseDurableOutboxOnAllSendingEndpoints();
    
    // Connecting to a local Rabbit MQ broker
    // at the default port
    opts.UseRabbitMq();

    // Adding a single Rabbit MQ messaging rule
    opts.PublishMessage<RingAllTheAlarms>()
        .ToRabbitExchange("notifications");

    // Other configuration...
});

I’m going to very highly recommend that you read up a little bit on Rabbit MQ’s model of exchanges, queues, and bindings before you try to use it in anger because every message broker seems to have subtly different behavior. Just for this post though, you’ll see that the Help Desk API is publishing to a Rabbit MQ exchange named “notifications” and the Notification Service is listening to a queue named “notifications”. To fully connect the two services through Rabbit MQ, you’d need to add a binding from the “notifications” exchange to the “notifications” queue. You can certainly do that through any Rabbit MQ management mechanism, but you could also define that binding in Wolverine itself and let Wolverine put that altogether for you at runtime much like Wolverine and Marten can for their database schema dependencies.

Let’s revisit the Notification Service code and make it set up a little bit more for us in the Wolverine setup to automatically build the right Rabbit MQ exchange, queue, and binding between our applications like so:

return await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseRabbitMq()
            // Make it build out any missing exchanges, queues, or bindings that
            // the system knows about as necessary
            .AutoProvision()
            
            // This is just to make Wolverine help us out to configure Rabbit MQ end to end
            // This isn't mandatory, but it might help you be more productive at development 
            // time
            .BindExchange("notifications").ToQueue("notifications", "notification_binding");

        // Tell Wolverine to listen for incoming messages
        // from a Rabbit MQ queue 
        opts.ListenToRabbitQueue("notifications");
    }).RunOaktonCommands(args);

And that’s actually that: we’re completely ready to go, assuming there’s a Rabbit MQ broker running on our local development box, which I usually run just through docker compose (here’s the docker-compose.yaml file from this sample application).

One thing to note for folks seeing this who are coming from a MassTransit or NServiceBus background, Wolverine does not need you to specify any kind of connectivity between message handlers and listening endpoints. That might become an “opt in” feature some day, but there’s nothing like that in Wolverine today.

Summary and What’s Next

I just barely exposed a little bit of what Wolverine can do while using Rabbit MQ as a messaging transport. There’s a ton of levers and knobs to adjust for increased throughput or for more strict message ordering. There’s also a conventional routing capability that might be a good default for getting started.

As far as when you should use asynchronous messaging, my thinking is that you should pretty well always use asynchronous messaging between two processes unless you have to have the inline response from the downstream system. Otherwise, I think that using asynchronous messaging techniques helps to decouple systems from each other temporally, and gives you more tools for creating robust and resilient systems through error handling policies.
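
To make that distinction concrete, here’s a hedged sketch using Wolverine’s IMessageBus (the SomeCommand and SomeResponse types are made up for illustration):

```csharp
using System;
using System.Threading.Tasks;
using Wolverine;

public record SomeCommand(Guid Id);
public record SomeResponse(string Status);

public static class MessagingStyles
{
    public static async Task Demonstrate(IMessageBus bus, SomeCommand command)
    {
        // Fire and forget through the outbox. The two systems are
        // decoupled in time, and Wolverine's resiliency policies get a
        // chance to work on failures
        await bus.PublishAsync(command);

        // Inline request/response. Only reach for this when you truly
        // need the answer right now, because it couples you to the
        // downstream system being available at this moment
        var response = await bus.InvokeAsync<SomeResponse>(command);
        Console.WriteLine(response.Status);
    }
}
```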

And speaking of “resiliency”, I think that will be the subject of one of the remaining posts in this series.

Quick Update on Marten 7.0 (and Wolverine 2.0)

There’s a new Marten 7.0 beta 4 release out today with a new round of bug fixes and some performance enhancements. We’re getting closer to getting a 7.0 release out, so I thought I’d update the world a bit on what’s remaining. I’d also love to give folks a chance to weigh in on some of the outstanding work that may or may not make the cut for 7.0 or slide to later. Due to some commitments to clients, I’m hoping to have the release out by early February at the latest, but we’ll see.

A Wolverine 2.0 release will follow shortly, but that’s going to be almost completely about upgrading Wolverine to use the latest Marten and Weasel dependencies and shouldn’t result in any breaking changes.

What’s In Flight or Outstanding

There are several medium-sized efforts either in flight or yet to come. User feedback is certainly welcome:

  • Low level database execution improvements. We’re doing a lot of work to integrate relatively newer ADO.NET features from Npgsql that will help us wring out a little better performance. As part of that work, we’re going to replace our homegrown resiliency feature (IRetryPolicy) with a more efficient and likely more effective approach using Polly baked into Marten. I was hesitant to take on Polly before because of its tendency to be a diamond dependency issue, but I think we’ve changed our minds about the risk/reward equation here. I think we’ll also get a little performance and scalability boost by using Polly’s static lambda approach in place of our current approach. The reality is that while you probably shouldn’t be too consumed with micro-optimizations in application development, it’s much more valuable in infrastructure code like Marten to be as performant as possible.
  • Open Telemetry support baked in. I think this is a low hanging fruit issue that might be a great place for anyone to jump in. Please feel free to weigh in on the possible approaches we’ve outlined.
  • Better scalability for asynchronous projections and the ability to deploy projection and event changes with less or even zero downtime compared to the current Marten. I’ll refer you to a longer discussion for feedback on possible directions. That discussion also touches on topics around event data migrations and archival strategies.
  • Enabling built in support for strongly typed identifiers. This is far more work than I personally think it’s worth, but plenty of folks tell us that it’s a must-have feature, even to the point where they tell us they won’t use Marten until this exists. This kind of thing is what drives me personally to make disparaging remarks about the DDD community’s seeming love of code ceremony. Grr.
  • “Partial” document updates with native PostgreSQL features. We’ve had this functionality for years, but it depends on the PLv8 extension to PostgreSQL that’s becoming continuously harder to use, especially on the cloud. I think this could be a big win, especially for users coming from MongoDB.
  • Dynamic Tenant Database Discovery — customer request, and that means it goes to the top of the priority list. Weird how it works that way.
  • What else, folks? I don’t want the release to drag on forever, but there’s plenty of other things to do.
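Circling back to the first bullet above: to show what I mean by Polly’s “static lambda” approach, here’s a little sketch against Polly v8’s public API. This is purely illustrative of the allocation-avoidance idiom, not the actual Marten integration:

```csharp
using Polly;
using Polly.Retry;

public static class ResilientExecution
{
    // Build the pipeline once and cache it; that's the Polly v8 idiom
    private static readonly ResiliencePipeline Pipeline = new ResiliencePipelineBuilder()
        .AddRetry(new RetryStrategyOptions { MaxRetryAttempts = 3 })
        .Build();

    public static ValueTask<T> ExecuteAsync<T>(
        Func<CancellationToken, ValueTask<T>> operation,
        CancellationToken token)
    {
        // Passing the operation through as explicit state with a *static*
        // lambda avoids a closure allocation on every single execution,
        // which adds up quickly in hot infrastructure code
        return Pipeline.ExecuteAsync(
            static (op, ct) => op(ct),
            operation,
            token);
    }
}
```

The point is just that Polly v8’s state-passing overloads let infrastructure code like Marten opt out of per-call closure allocations while still getting the retry behavior.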

LINQ Improvements

From my perspective, the effective rewrite of the LINQ provider support for V7 is the single biggest change and improvement for Marten 7. As always, I’m hopeful that this shores up Marten’s technical foundation for years to come. I’d sum that work up as:

  • Glass Half Full: the new LINQ support covers a lot of scenarios that were previously missing, and especially improves both the number of supported use cases and the efficiency of the generated SQL for querying within child collections in many cases. Moreover, the new LINQ support should be better about telling you when it can’t support something instead of doing erroneous searches, and should be in much better shape for when we need to add new permutations to the support from user requests later.
  • Glass Half Empty: It took a long, long time to get this done and it was quite an opportunity cost for me personally. We also got a large GitHub sponsorship for this work, and while I was and am very grateful for that, I’m also feeling guilty about how long it took to finish that work.

And that, folks, is the life of a semi-successful OSS author in a nutshell.

If you’re curious, here’s a write up on GitHub about the new LINQ internals that was meant for Marten contributors.

Building a Critter Stack Application: Vertical Slice Architecture

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to help you succeed with your software development projects on a consulting basis whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture (this post)
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

I’m taking a short detour in this series today as I prepare to do my “Contrarian Architecture” talk at the CodeMash 2024 conference. In that talk (here’s a version from NDC Oslo 2023), I’m going to spend some time more or less bashing stereotypical usages of the Clean or Onion Architecture prescriptive approach.

While there’s nothing to prevent you from using either Wolverine or Marten within a typical Clean Architecture style code organization, the “Critter Stack” plays well within a lower code ceremony vertical slice architecture that I personally prefer.

First though, let’s talk about what I don’t like about the stereotypical Clean/Onion Architecture approach you commonly find in enterprise .NET systems. With this common mode of code organization, the incident tracking help desk service we have been building in this series might be organized something like:

Class Name          Project
IncidentController  HelpDesk.API
IncidentService     HelpDesk.ServiceLayer
Incident            HelpDesk.Domain
IncidentRepository  HelpDesk.Data

Don’t laugh, because a lot of people do this.

This kind of code structure is primarily organized around the “nouns” of the system and reliant on the formal layering prescriptions to try to create a healthy separation of concerns. It’s probably perfectly fine for pure CRUD applications, but breaks down very badly over time for more workflow centric applications.

I despise this form of code organization in very large systems because:

  1. It scatters closely related code throughout the codebase
  2. You typically don’t spend a lot of time trying to reason about an entire layer at a time. Instead, you’re largely worried about the behavior of one single use case and the logical flow through the entire stack for that one use case
  3. The code layout tells you very little about what the application does as it’s primarily focused around technical concerns (hat tip to David Whitney for that insight)
  4. It’s high ceremony. Lots of layers, interfaces, and just a lot of stuff
  5. Abstractions around the low level persistence infrastructure can very easily lead you to poorly performing code and can make it much harder later to understand why code is performing poorly in production

Shifting to the Idiomatic Wolverine Approach

Let’s say that we’re sitting around a fire boasting of our victories in software development (that’s a lie, I’m telling horror stories about the worst systems I’ve ever seen) and you ask me “Jeremy, what is best in code?”

And I’d respond:

  • Low ceremony code that’s easy to read and write
  • Closely related code is close together
  • Unrelated code is separated
  • Code is organized around the “verbs” of the system, which in the case of Wolverine probably means the commands
  • The code structure by itself gives some insight into what the system actually does

Taking our LogIncident command, I’m going to put every drop of code related to that command in a single file called “LogIncident.cs”:

public record LogIncident(
    Guid CustomerId,
    Contact Contact,
    string Description
)
{
    public class LogIncidentValidator : AbstractValidator<LogIncident>
    {
        // I stole this idea of using inner classes to keep them
        // close to the actual model from *someone* online,
        // but don't remember who
        public LogIncidentValidator()
        {
            RuleFor(x => x.Description).NotEmpty().NotNull();
            RuleFor(x => x.Contact).NotNull();
        }
    }
};

public record NewIncidentResponse(Guid IncidentId) 
    : CreationResponse("/api/incidents/" + IncidentId);

public static class LogIncidentEndpoint
{
    [WolverineBefore]
    public static async Task<ProblemDetails> ValidateCustomer(
        LogIncident command, 
        
        // Method injection works just fine within middleware too
        IDocumentSession session)
    {
        var exists = await session.Query<Customer>().AnyAsync(x => x.Id == command.CustomerId);
        return exists
            ? WolverineContinue.NoProblems
            : new ProblemDetails { Detail = $"Unknown customer id {command.CustomerId}", Status = 400};
    }
    
    [WolverinePost("/api/incidents")]
    public static (NewIncidentResponse, IStartStream) Post(LogIncident command, User user)
    {
        var logged = new IncidentLogged(
            command.CustomerId, 
            command.Contact, 
            command.Description, 
            user.Id);

        var op = MartenOps.StartStream<Incident>(logged);
        
        return (new NewIncidentResponse(op.StreamId), op);
    }

}

Every single bit of code related to handling this operation in our system is in one file that we can read top to bottom. A few significant points about this code:

  • I think it’s working out well in other Wolverine systems to largely name the files based on command names or the request body models for HTTP endpoints. At least with systems being built with a CQRS approach. Using the command name allows the system to be more self descriptive when you’re just browsing the codebase for the first time
  • The behavioral logic is still isolated to the Post() method, and even though there is some direct data access in the same class in its ValidateCustomer() method, the Post() method is a pure function that can be unit tested without any mocks
  • There’s also no code unrelated to LogIncident anywhere in this file, so you bypass the problem you get in noun-centric code organizations where you have to train your brain to ignore a lot of unrelated code in an IncidentService that has nothing to do with the particular operation you’re working on at any one time
  • I’m not bothering to wrap any kind of repository abstraction around Marten’s IDocumentSession in this code sample. That’s not to say that I wouldn’t do so in the case of something more complicated, and especially if there’s some kind of complex set of data queries that would need to be reused in other commands
  • You can clearly see the cause and effect between the command input and any outcomes of that command. I think this is an important discussion all by itself because it can easily be hard to reason about that same kind of cause and effect in systems that split responsibilities within a single use case across different areas of the code and even across different projects or components. Codebases that are hard to reason about are very prone to regression errors down the line — and that’s the voice of painful experience talking.

I certainly wouldn’t use this “single file” approach on larger, more complex use cases, but it’s working out well for early Wolverine adopters so far. Since much of my criticism of Clean/Onion Architecture approaches is really about using prescriptive rules too literally, I would also say that I would deviate from this “single file” approach any time it was valuable to reuse code across commands or queries or just when the message handling for a single message gets complex enough to need or want other files to separate responsibilities just within that one use case.

Summary and What’s Next

Wolverine is optimized for a “Vertical Slice Architecture” code organization approach. Both Marten and Wolverine are meant to require as little code ceremony as they can, and that’s what makes the vertical slice architecture and even the single file approach I showed here feasible.


I’m not 100% sure what I’ll tackle next in this series, but roughly I’m still planning:

  • The “stateful resource” model in the Critter Stack for infrastructure resource setup and teardown we use to provide that “it just works” experience
  • External messaging with Rabbit MQ
  • Wolverine’s resiliency and error handling capabilities
  • Logging, observability, Open Telemetry, and metrics from Wolverine
  • Subscribing to Marten events

Building a Critter Stack Application: Easy Unit Testing with Pure Functions


Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions (this post)
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

Let’s start this post by making a bold statement that I’ll probably regret, but still spend the rest of this post trying to back up:

Remembering the basic flow of our incident tracking, help desk service in this series, we’ve got this workflow:

Starting in the middle with the “Categorize Incident”, our system’s workflow is something like:

  1. A technician will send a request to change the category of the incident
  2. If the system determines that the request will be changing the category, the system will append a new event to mark that state, and also publish a new command message to try to assign a priority to the incident automatically based on the customer data
  3. When the system handles that new “Try Assign Priority” command, it will look at the customer’s settings, and likewise append another event to record the change of priority for the incident. If the incident changes, it will also publish a message to an external “Notification Service” — but for this post, let’s just worry about whether we’re correctly publishing the right message

In an earlier post, I showed this version of a message handler for the CategoriseIncident command:

public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();
      
    [AggregateHandler]
    // The object? as return value will be interpreted
    // by Wolverine as appending one or zero events
    public static async Task<object?> Handle(
        CategoriseIncident command, 
        IncidentDetails existing,
        IMessageBus bus)
    {
        if (existing.Category != command.Category)
        {
            // Send the message to any and all subscribers to this message
            await bus.PublishAsync(
                new TryAssignPriority { IncidentId = existing.Id });
            return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }
  
        // Wolverine will interpret this as "do no work"
        return null;
    }
}

Notice that this handler is injecting the Wolverine IMessageBus service into the handler method. We could test this code as is with a “fake” for IMessageBus just to verify whether the expected outgoing message for TryAssignPriority goes out or not. Helpfully, Wolverine even supplies a “spy” version of IMessageBus called TestMessageContext that can be used in unit tests as a stand in just to record what the outgoing messages were.

My strong preference though is to use Wolverine’s concept of cascading messages to write a pure function such that the behavioral logic can be tested without any mocks, stubs, or other fakes. In the sample code above, we had been using Wolverine as “just” a “Mediator” within an MVC Core controller. This time around, let’s ditch the unnecessary “Mediator” ceremony and use a Wolverine HTTP endpoint for the same functionality. In this case we can write the same functionality as a pure function like so:

public static class CategoriseIncidentEndpoint
{
    [WolverinePost("/api/incidents/categorise"), AggregateHandler]
    public static (Events, OutgoingMessages) Post(
        CategoriseIncident command, 
        IncidentDetails existing, 
        User user)
    {
        var events = new Events();
        var messages = new OutgoingMessages();
        
        if (existing.Category != command.Category)
        {
            // Append a new event to the incident
            // stream
            events += new IncidentCategorised
            {
                Category = command.Category,
                UserId = user.Id
            };

            // Send a command message to try to assign the priority
            messages.Add(new TryAssignPriority
            {
                IncidentId = existing.Id,
                UserId = user.Id
            });
        }

        return (events, messages);
    }
}

In the endpoint above, we’re “pushing” all of the required inputs for our business logic into the Post() method, which makes a decision about what state changes should be captured and what additional actions should be done through outgoing, cascaded messages.

A couple notes about this code:

  • It’s using the aggregate handler workflow we introduced in an earlier post to “push” the IncidentDetails aggregate for the incident stream into the method. We’ll need this information to “decide” what to do next
  • The Events type is a Wolverine construct that tells Wolverine “hey, the objects in this collection are meant to be appended as events to the event stream for this aggregate.”
  • Likewise, the OutgoingMessages type is a Wolverine construct that — wait for it — tells Wolverine that the objects contained in that collection should be published as cascading messages after the database transaction succeeds
  • The Marten + Wolverine transactional middleware is calling Marten’s IDocumentSession.SaveChangesAsync() to commit the logical transaction, and also dealing with the transaction outbox mechanics for the cascading messages from the OutgoingMessages collection.

Alright, with all that said, let’s look at a unit test for a CategoriseIncident command message that results in the category being changed:

    [Fact]
    public void raise_categorized_event_if_changed()
    {
        var command = new CategoriseIncident
        {
            Category = IncidentCategory.Database
        };

        var details = new IncidentDetails(
            Guid.NewGuid(), 
            Guid.NewGuid(), 
            IncidentStatus.Closed, 
            Array.Empty<IncidentNote>(),
            IncidentCategory.Hardware);

        var user = new User(Guid.NewGuid());
        var (events, messages) = CategoriseIncidentEndpoint.Post(command, details, user);

        // There should be one appended event
        var categorised = events.Single()
            .ShouldBeOfType<IncidentCategorised>();
        
        categorised
            .Category.ShouldBe(IncidentCategory.Database);
        
        categorised.UserId.ShouldBe(user.Id);

        // And there should be a single outgoing message
        var message = messages.Single()
            .ShouldBeOfType<TryAssignPriority>();
        
        message.IncidentId.ShouldBe(details.Id);
        message.UserId.ShouldBe(user.Id);

    }

In real life, I’d probably opt to break that unit test into a BDD-like context and individual tests to assert the expected event(s) being appended and the expected outgoing messages, but this is conceptually easier and I didn’t sleep well last night, so this is what you get!
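For the curious, that BDD-ish split might look something like the following sketch, where the xUnit constructor plays the role of the shared “context.” The class and test names here are just my own invention:

```csharp
public class when_categorising_an_incident_with_a_new_category
{
    private readonly Events _events;
    private readonly OutgoingMessages _messages;
    private readonly User _user = new(Guid.NewGuid());
    private readonly IncidentDetails _details;

    public when_categorising_an_incident_with_a_new_category()
    {
        _details = new IncidentDetails(
            Guid.NewGuid(),
            Guid.NewGuid(),
            IncidentStatus.Closed,
            Array.Empty<IncidentNote>(),
            IncidentCategory.Hardware);

        var command = new CategoriseIncident { Category = IncidentCategory.Database };

        // The shared "context" is just executing the pure function one time
        (_events, _messages) = CategoriseIncidentEndpoint.Post(command, _details, _user);
    }

    [Fact]
    public void appends_an_incident_categorised_event() =>
        _events.Single().ShouldBeOfType<IncidentCategorised>()
            .Category.ShouldBe(IncidentCategory.Database);

    [Fact]
    public void publishes_a_try_assign_priority_message() =>
        _messages.Single().ShouldBeOfType<TryAssignPriority>()
            .IncidentId.ShouldBe(_details.Id);
}
```

Because Post() is a pure function, the “context” is just one method call with no container, no database, and no fakes.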

Let’s move on to the message handler for the TryAssignPriority message, and also make this a pure function so we can easily test the behavior:

public static class TryAssignPriorityHandler
{
    // Wolverine will call this method before the "real" Handler method,
    // and it can "magically" connect that the Customer object should be delivered
    // to the Handle() method at runtime
    public static Task<Customer?> LoadAsync(IncidentDetails details, IDocumentSession session)
    {
        return session.LoadAsync<Customer>(details.CustomerId);
    }

    // There's some database lookup at runtime, but I've isolated that above, so the
    // behavioral logic that "decides" what to do is a pure function below. 
    [AggregateHandler]
    public static (Events, OutgoingMessages) Handle(
        TryAssignPriority command, 
        IncidentDetails details,
        Customer customer)
    {
        var events = new Events();
        var messages = new OutgoingMessages();

        if (details.Category.HasValue && customer.Priorities.TryGetValue(details.Category.Value, out var priority))
        {
            if (details.Priority != priority)
            {
                events.Add(new IncidentPrioritised(priority, command.UserId));

                if (priority == IncidentPriority.Critical)
                {
                    messages.Add(new RingAllTheAlarms(command.IncidentId));
                }
            }
        }

        return (events, messages);
    }
}

I’d ask you to notice the LoadAsync() method above. It’s part of the logical handler workflow, but Wolverine is letting us keep that separate from the main “decider” Handle() method. We’d have to test the entire handler with an integration test eventually, but we can happily write fast running, fine grained unit tests on the expected behavior by just “pushing” inputs into the Handle() method and measuring the events and outgoing messages just by checking the return values.
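To make that concrete, a unit test against Handle() might look like the sketch below. I’m hand-waving the exact IncidentDetails and Customer construction here (the object setup is illustrative, assuming a Customer with a Priorities dictionary as the handler implies):

```csharp
[Fact]
public void prioritises_the_incident_from_the_customer_settings()
{
    var command = new TryAssignPriority
    {
        IncidentId = Guid.NewGuid(),
        UserId = Guid.NewGuid()
    };

    // An incident already categorised as Database, with no priority yet
    var details = new IncidentDetails(
        command.IncidentId,
        Guid.NewGuid(),
        IncidentStatus.Pending,
        Array.Empty<IncidentNote>(),
        IncidentCategory.Database);

    // A customer whose settings say Database incidents are Critical
    var customer = new Customer
    {
        Priorities = { [IncidentCategory.Database] = IncidentPriority.Critical }
    };

    // Push the inputs in, measure the outputs coming back out
    var (events, messages) = TryAssignPriorityHandler.Handle(command, details, customer);

    events.Single().ShouldBeOfType<IncidentPrioritised>()
        .Priority.ShouldBe(IncidentPriority.Critical);

    // A Critical priority should also "ring all the alarms"
    messages.Single().ShouldBeOfType<RingAllTheAlarms>()
        .IncidentId.ShouldBe(command.IncidentId);
}
```

Again, no mocks anywhere: the database lookup lives in LoadAsync(), so the decision logic is testable with plain objects.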

Summary and What’s Next

Wolverine’s approach has always been driven by the desire to make your application code as testable as possible. Originally that just meant keeping the framework (Wolverine itself) out of your application code as much as possible. Later on, the Wolverine community was influenced by more Functional Programming techniques and Jim Shore’s paper on Testing without Mocks.

Specifically, Wolverine embraced the idea of the “A-Frame Architecture,” with Wolverine itself in the role of the mediator/controller/conductor that coordinates between infrastructural concerns like Marten and your own business logic code in message handlers or HTTP endpoint methods, without creating a direct coupling between your behavioral logic code and your infrastructure:

If you take advantage of Wolverine features like cascading messages, side effects, and compound handlers to decompose your system in a more FP-esque way while letting Wolverine handle the coordination, you can arrive at much more testable code.

I said earlier that I’d get to Rabbit MQ messaging, and I’ll get around to that soon. To fit in with one of my CodeMash 2024 talks this Friday, I might take a little side trip into how the “Critter Stack” plays well inside of a low ceremony vertical slice architecture as I get ready to absolutely blast away at the “Clean/Onion Architecture” this week.

Building a Critter Stack Application: Wolverine HTTP Endpoints


Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints (this post)
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

Heretofore in this series, I’ve been using ASP.Net MVC Core controllers anytime we’ve had to build HTTP endpoints for our incident tracking, help desk system in order to introduce new concepts a little more slowly.

If you would, let’s refer back to an earlier incarnation of an HTTP endpoint to handle our LogIncident command from an earlier post in this series:

public class IncidentController : ControllerBase
{
    private readonly IDocumentSession _session;
 
    public IncidentController(IDocumentSession session)
    {
        _session = session;
    }
 
    [HttpPost("/api/incidents")]
    public async Task<IResult> Log(
        [FromBody] LogIncident command
        )
    {
        var userId = currentUserId();
        var logged = new IncidentLogged(command.CustomerId, command.Contact, command.Description, userId);
 
        var incidentId = _session.Events.StartStream(logged).Id;
        await _session.SaveChangesAsync(HttpContext.RequestAborted);
 
        return Results.Created("/incidents/" + incidentId, incidentId);
    }
 
    private Guid currentUserId()
    {
        // let's say that we do something here that "finds" the
        // user id as a Guid from the ClaimsPrincipal
        var userIdClaim = User.FindFirst("user-id");
        if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var id))
        {
            return id;
        }
 
        throw new UnauthorizedAccessException("No user");
    }
}

Just to be as clear as possible here, the Wolverine HTTP endpoints feature introduced in this post can be mixed and matched with MVC Core and/or Minimal API or even FastEndpoints within the same application and routing tree. I think the ASP.Net team deserves some serious credit for making that last sentence a fact.

Today though, let’s use Wolverine HTTP endpoints and rewrite that controller method above the “Wolverine way.” To get started, add a NuGet reference to the help desk service like so:

dotnet add package WolverineFx.Http

Next, let’s break into our Program file and add Wolverine endpoints to our routing tree near the bottom of the file like so:

app.MapWolverineEndpoints(opts =>
{
    // We'll add a little more in a bit...
});

// Just to show where the above code is within the context
// of the Program file...
return await app.RunOaktonCommands(args);

Now, let’s make our first cut at a Wolverine HTTP endpoint for the LogIncident command, but I’m purposely going to do it without introducing a lot of new concepts, so please bear with me a bit:

public record NewIncidentResponse(Guid IncidentId) 
    : CreationResponse("/api/incidents/" + IncidentId);

public static class LogIncidentEndpoint
{
    [WolverinePost("/api/incidents")]
    public static NewIncidentResponse Post(
        // No [FromBody] stuff necessary
        LogIncident command,
        
        // Service injection is automatic,
        // just like message handlers
        IDocumentSession session,
        
        // You can take in an argument for HttpContext
        // or immediate members of HttpContext
        // as method arguments
        ClaimsPrincipal principal)
    {
        // Some ugly code to find the user id
        // within a claim for the currently authenticated
        // user
        Guid userId = Guid.Empty;
        var userIdClaim = principal.FindFirst("user-id");
        if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var claimValue))
        {
            userId = claimValue;
        }
        
        var logged = new IncidentLogged(command.CustomerId, command.Contact, command.Description, userId);

        var id = session.Events.StartStream<Incident>(logged).Id;

        return new NewIncidentResponse(id);
    }
}

Here are a few salient facts about the code above to explain what it’s doing:

  • The [WolverinePost] attribute tells Wolverine that hey, this method is an HTTP handler, and Wolverine will discover this method and add it to the application’s endpoint routing tree at bootstrapping time.
  • Just like Wolverine message handlers, the endpoint methods are flexible and Wolverine generates code around your code to mediate between the raw HttpContext for the request and your code
  • We have already enabled Marten transactional middleware for our message handlers in an earlier post, and that happily applies to Wolverine HTTP endpoints as well. That helps make our endpoint method just a synchronous method, with the transactional middleware dealing with the ugly asynchronous stuff for us.
  • You can “inject” HttpContext and its immediate children into the method signatures as I did with the ClaimsPrincipal up above
  • Method injection is automatic without any silly [FromServices] attributes, and that’s what’s happening with the IDocumentSession argument
  • The LogIncident parameter is assumed to be the HTTP request body due to being the first argument, and it will be deserialized from the incoming JSON in the request body just like you’d probably expect
  • The NewIncidentResponse type is roughly the equivalent to using Results.Created() in Minimal API to create a response body with the URL of the newly created Incident stream and an HTTP status code of 201 for “Created.” What’s different about Wolverine.HTTP is that it can infer OpenAPI documentation from the signature of that type without requiring you to pollute your code by manually adding [ProducesResponseType] attributes on the method to get a “proper” OpenAPI document for the endpoint.

Moving on, that user id detection from the ClaimsPrincipal looks a little bit ugly to me, and it’s likely to be repetitive. Let’s ameliorate that by introducing Wolverine’s flavor of HTTP middleware and moving that code to this class:

// Using the custom type makes it easier
// for the Wolverine code generation to route
// things around. I'm not ashamed.
public record User(Guid Id);

public static class UserDetectionMiddleware
{
    public static (User, ProblemDetails) Load(ClaimsPrincipal principal)
    {
        var userIdClaim = principal.FindFirst("user-id");
        if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var id))
        {
            // Everything is good, keep on trucking with this request!
            return (new User(id), WolverineContinue.NoProblems);
        }
        
        // Nope, nope, nope. We got problems, so stop the presses and emit a ProblemDetails response
        // with a 400 status code telling the caller that there's no valid user for this request
        return (new User(Guid.Empty), new ProblemDetails { Detail = "No valid user", Status = 400});
    }
}

Do note the usage of ProblemDetails in that middleware. If there is no user-id claim on the ClaimsPrincipal, we’ll abort the request by writing out the ProblemDetails stating there’s no valid user. This pattern is baked into Wolverine.HTTP to help create one off request validations. We’ll utilize this quite a bit more later.
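Because the middleware method is a pure function, it's every bit as unit testable as the endpoint itself. A quick sketch of what that test might look like (I'm using plain xUnit assertions here; the Shouldly-style assertions used elsewhere in this post would work just as well):

```csharp
using System;
using System.Security.Claims;
using Wolverine.Http;
using Xunit;

public class UserDetectionMiddlewareTests
{
    [Fact]
    public void finds_a_valid_user_id_claim()
    {
        var id = Guid.NewGuid();
        var principal = new ClaimsPrincipal(
            new ClaimsIdentity(new[] { new Claim("user-id", id.ToString()) }));

        var (user, problems) = UserDetectionMiddleware.Load(principal);

        // Happy path: we get the User and the "keep on trucking" marker
        Assert.Equal(id, user.Id);
        Assert.Same(WolverineContinue.NoProblems, problems);
    }

    [Fact]
    public void missing_claim_yields_a_400_problem()
    {
        var principal = new ClaimsPrincipal(new ClaimsIdentity());

        var (_, problems) = UserDetectionMiddleware.Load(principal);

        // Sad path: the ProblemDetails carries the 400 status code
        Assert.Equal(400, problems.Status);
    }
}
```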

Next, I need to add that new bit of middleware to our application. As a shortcut, I’m going to just add it to every single Wolverine HTTP endpoint by breaking back into our Program file and adding this line of code:

app.MapWolverineEndpoints(opts =>
{
    // We'll add a little more in a bit...
    
    // Creates a User object in HTTP requests based on
    // the "user-id" claim
    opts.AddMiddleware(typeof(UserDetectionMiddleware));
});

Now, back to our endpoint code and I’ll take advantage of that middleware by changing the method to this:

    [WolverinePost("/api/incidents")]
    public static NewIncidentResponse Post(
        // No [FromBody] stuff necessary
        LogIncident command,
        
        // Service injection is automatic,
        // just like message handlers
        IDocumentSession session,
        
        // This will be created for us through the new user detection
        // middleware
        User user)
    {
        var logged = new IncidentLogged(
            command.CustomerId, 
            command.Contact, 
            command.Description, 
            user.Id);

        var id = session.Events.StartStream<Incident>(logged).Id;

        return new NewIncidentResponse(id);
    }

This is a little bit of a bonus, but let’s also get rid of the need to inject the Marten IDocumentSession service by using a Wolverine “side effect” with this equivalent code:

    [WolverinePost("/api/incidents")]
    public static (NewIncidentResponse, IStartStream) Post(LogIncident command, User user)
    {
        var logged = new IncidentLogged(
            command.CustomerId, 
            command.Contact, 
            command.Description, 
            user.Id);

        var op = MartenOps.StartStream<Incident>(logged);
        
        return (new NewIncidentResponse(op.StreamId), op);
    }

In the code above I’m using the MartenOps.StartStream() method to return a “side effect” that will create a new Marten stream as part of the request instead of directly interacting with the IDocumentSession from Marten. That’s a small thing you might not care for, but it can lead to the elimination of mock objects within your unit tests as you can now write a state-based test directly against the method above like so:

public class LogIncident_handling
{
    [Fact]
    public void handle_the_log_incident_command()
    {
        // This is trivial, but the point is that 
        // we now have a pure function that can be
        // unit tested by pushing inputs in and measuring
        // outputs without any pesky mock object setup
        var contact = new Contact(ContactChannel.Email);
        var theCommand = new LogIncident(BaselineData.Customer1Id, contact, "It's broken");

        var theUser = new User(Guid.NewGuid());

        var (_, stream) = LogIncidentEndpoint.Post(theCommand, theUser);

        // Test the *decision* to emit the correct
        // events and make sure all that pesky left/right
        // hand mapping is correct
        var logged = stream.Events.Single()
            .ShouldBeOfType<IncidentLogged>();
        
        logged.CustomerId.ShouldBe(theCommand.CustomerId);
        logged.Contact.ShouldBe(theCommand.Contact);
        logged.LoggedBy.ShouldBe(theUser.Id);
    }
}

Hey, let’s add some validation too!

We’ve already introduced middleware, so let’s just incorporate the popular Fluent Validation library into our project and let it do some basic validation on the incoming LogIncident command body. If any validation fails, we’ll pull the ripcord and parachute out of the request with a ProblemDetails body and a 400 status code that describes the validation errors.

Let’s start by pulling in some pre-packaged middleware for Wolverine.HTTP with:

dotnet add package WolverineFx.Http.FluentValidation

Next, I have to add the usage of that middleware through this new line of code:

app.MapWolverineEndpoints(opts =>
{
    // Direct Wolverine.HTTP to use Fluent Validation
    // middleware to validate any request bodies where
    // there's a known validator (or many validators)
    opts.UseFluentValidationProblemDetailMiddleware();
    
    // Creates a User object in HTTP requests based on
    // the "user-id" claim
    opts.AddMiddleware(typeof(UserDetectionMiddleware));
});

Next, we need an actual validator for LogIncident. In this case that model is just an internal concern of our service, so I’ll embed the new validator as an inner type of the command type like so:

public record LogIncident(
    Guid CustomerId,
    Contact Contact,
    string Description
)
{
    public class LogIncidentValidator : AbstractValidator<LogIncident>
    {
        // I stole this idea of using inner classes to keep them
        // close to the actual model from *someone* online,
        // but don't remember who
        public LogIncidentValidator()
        {
            RuleFor(x => x.Description).NotEmpty().NotNull();
            RuleFor(x => x.Contact).NotNull();
        }
    }
};

Now, Wolverine does have to “know” about these validators to use them within the endpoint handling, so I’ll need to have these types registered in the application’s IoC container against the right IValidator<T> interface. This is not required, but Wolverine has a (Lamar) helper to find and register these validators within your project and do so in a way that’s most efficient at runtime (i.e., there’s a micro optimization giving these validators a singleton lifetime in the container if Wolverine can see that the types are stateless). I’ll use that little helper in our Program file within the UseWolverine() configuration like so:

builder.Host.UseWolverine(opts =>
{
    // lots more stuff unfortunately, but focus on the line below
    // just for now:-)
    
    // Apply the validation middleware *and* discover and register
    // Fluent Validation validators
    opts.UseFluentValidation();

});

And that’s that. We’ve now got Fluent Validation wired into the request handling for the LogIncident command. In a later section, I’ll explain how Wolverine does this, and try to sell you all on the idea that Wolverine is able to do this more efficiently than other commonly used frameworks *cough* MediatR *cough* that depend on conditional runtime code.

One off validation with “Compound Handlers”

As you might have noticed, the LogIncident command has a CustomerId property that we’re using as is within our HTTP handler. We should never just trust the inputs of a random client, so let’s at least validate that the command refers to a real customer.

Now, typically I like to make Wolverine message handler or HTTP endpoint methods be the “happy path” and handle exception cases and one off validations with a Wolverine feature we inelegantly call “compound handlers.”

I’m going to add a new method to our LogIncidentEndpoint class like so:

    // Wolverine has some naming conventions for Before/Load
    // or After/AfterAsync, but you can use a more descriptive
    // method name and help Wolverine out with an attribute
    [WolverineBefore]
    public static async Task<ProblemDetails> ValidateCustomer(
        LogIncident command, 
        
        // Method injection works just fine within middleware too
        IDocumentSession session)
    {
        var exists = await session.Query<Customer>().AnyAsync(x => x.Id == command.CustomerId);
        return exists
            ? WolverineContinue.NoProblems
            : new ProblemDetails { Detail = $"Unknown customer id {command.CustomerId}", Status = 400};
    }

Integration Testing

While the individual methods and middleware can all be tested separately, you do want to put everything together with an integration test to prove out whether or not all this magic really works. As I described in an earlier post where we learned how to use Alba to create an integration testing harness for a “critter stack” application, we can write an end to end integration test against the HTTP endpoint like so (this sample doesn’t cover every permutation, but hopefully you get the point):

    [Fact]
    public async Task create_a_new_incident_happy_path()
    {
        // We'll need a user
        var user = new User(Guid.NewGuid());
        
        // Log a new incident first
        var initial = await Scenario(x =>
        {
            var contact = new Contact(ContactChannel.Email);
            x.Post.Json(new LogIncident(BaselineData.Customer1Id, contact, "It's broken")).ToUrl("/api/incidents");
            x.StatusCodeShouldBe(201);
            
            x.WithClaim(new Claim("user-id", user.Id.ToString()));
        });

        var incidentId = initial.ReadAsJson<NewIncidentResponse>().IncidentId;

        using var session = Store.LightweightSession();
        var events = await session.Events.FetchStreamAsync(incidentId);
        var logged = events.First().Data.ShouldBeOfType<IncidentLogged>();

        // This deserves more assertions, but you get the point...
        logged.CustomerId.ShouldBe(BaselineData.Customer1Id);
    }

    [Fact]
    public async Task log_incident_with_invalid_customer()
    {
        // We'll need a user
        var user = new User(Guid.NewGuid());
        
        // Reject the new incident because the Customer for 
        // the command cannot be found
        var initial = await Scenario(x =>
        {
            var contact = new Contact(ContactChannel.Email);
            var nonExistentCustomerId = Guid.NewGuid();
            x.Post.Json(new LogIncident(nonExistentCustomerId, contact, "It's broken")).ToUrl("/api/incidents");
            x.StatusCodeShouldBe(400);
            
            x.WithClaim(new Claim("user-id", user.Id.ToString()));
        });
    }
}

Um, how does this all work?

So far I’ve shown you some “magic” code, and that tends to really upset some folks. I also made some big time claims about how Wolverine is able to be more efficient at runtime (alas, there is a significant “cold start” problem you can easily work around, so don’t get upset if your first ever Wolverine request isn’t snappy).

Wolverine works by using code generation to wrap its handling code around your code. That includes the middleware, and the usage of any IoC services as well. Moreover, do you know what the fastest IoC container is in all the .NET land? I certainly think that Lamar is at least in the game for that one, but nope, the answer is no IoC container at runtime.

One of the advantages of this approach is that we can preview the generated code to unravel the “magic” and explain what Wolverine is doing at runtime. Moreover, we’ve tried to add descriptive comments to the generated code to further explain what the code is doing and why it’s in place.

See more about this in my post Unraveling the Magic in Wolverine.
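If you want to see the generated code for your own application, Wolverine exposes command line tooling for exactly that through its Oakton integration, assuming your Program file delegates to Oakton with `return await app.RunOaktonCommands(args);`:

```shell
# Preview the generated source code in the console
dotnet run -- codegen preview

# Or write the generated code to the Internal/Generated folder
# on disk where you can read it or even debug right into it
dotnet run -- codegen write
```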

Here’s the generated code for our LogIncident endpoint (warning, ugly generated code ahead):

// <auto-generated/>
#pragma warning disable
using FluentValidation;
using Microsoft.AspNetCore.Routing;
using System;
using System.Linq;
using Wolverine.Http;
using Wolverine.Http.FluentValidation;
using Wolverine.Marten.Publishing;
using Wolverine.Runtime;

namespace Internal.Generated.WolverineHandlers
{
    // START: POST_api_incidents
    public class POST_api_incidents : Wolverine.Http.HttpHandler
    {
        private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
        private readonly Wolverine.Runtime.IWolverineRuntime _wolverineRuntime;
        private readonly Wolverine.Marten.Publishing.OutboxedSessionFactory _outboxedSessionFactory;
        private readonly FluentValidation.IValidator<Helpdesk.Api.LogIncident> _validator;
        private readonly Wolverine.Http.FluentValidation.IProblemDetailSource<Helpdesk.Api.LogIncident> _problemDetailSource;

        public POST_api_incidents(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Runtime.IWolverineRuntime wolverineRuntime, Wolverine.Marten.Publishing.OutboxedSessionFactory outboxedSessionFactory, FluentValidation.IValidator<Helpdesk.Api.LogIncident> validator, Wolverine.Http.FluentValidation.IProblemDetailSource<Helpdesk.Api.LogIncident> problemDetailSource) : base(wolverineHttpOptions)
        {
            _wolverineHttpOptions = wolverineHttpOptions;
            _wolverineRuntime = wolverineRuntime;
            _outboxedSessionFactory = outboxedSessionFactory;
            _validator = validator;
            _problemDetailSource = problemDetailSource;
        }



        public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
        {
            var messageContext = new Wolverine.Runtime.MessageContext(_wolverineRuntime);
            // Building the Marten session
            await using var documentSession = _outboxedSessionFactory.OpenSession(messageContext);
            // Reading the request body via JSON deserialization
            var (command, jsonContinue) = await ReadJsonAsync<Helpdesk.Api.LogIncident>(httpContext);
            if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;
            
            // Execute FluentValidation validators
            var result1 = await Wolverine.Http.FluentValidation.Internals.FluentValidationHttpExecutor.ExecuteOne<Helpdesk.Api.LogIncident>(_validator, _problemDetailSource, command).ConfigureAwait(false);

            // Evaluate whether or not the execution should be stopped based on the IResult value
            if (!(result1 is Wolverine.Http.WolverineContinue))
            {
                await result1.ExecuteAsync(httpContext).ConfigureAwait(false);
                return;
            }


            (var user, var problemDetails2) = Helpdesk.Api.UserDetectionMiddleware.Load(httpContext.User);
            // Evaluate whether the processing should stop if there are any problems
            if (!(ReferenceEquals(problemDetails2, Wolverine.Http.WolverineContinue.NoProblems)))
            {
                await WriteProblems(problemDetails2, httpContext).ConfigureAwait(false);
                return;
            }


            var problemDetails3 = await Helpdesk.Api.LogIncidentEndpoint.ValidateCustomer(command, documentSession).ConfigureAwait(false);
            // Evaluate whether the processing should stop if there are any problems
            if (!(ReferenceEquals(problemDetails3, Wolverine.Http.WolverineContinue.NoProblems)))
            {
                await WriteProblems(problemDetails3, httpContext).ConfigureAwait(false);
                return;
            }


            
            // The actual HTTP request handler execution
            (var newIncidentResponse_response, var startStream) = Helpdesk.Api.LogIncidentEndpoint.Post(command, user);

            
            // Placed by Wolverine's ISideEffect policy
            startStream.Execute(documentSession);

            // This response type customizes the HTTP response
            ApplyHttpAware(newIncidentResponse_response, httpContext);
            
            // Commit any outstanding Marten changes
            await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);

            
            // Have to flush outgoing messages just in case Marten did nothing because of https://github.com/JasperFx/wolverine/issues/536
            await messageContext.FlushOutgoingMessagesAsync().ConfigureAwait(false);

            // Writing the response body to JSON because this was the first 'return variable' in the method signature
            await WriteJsonAsync(httpContext, newIncidentResponse_response);
        }

    }

    // END: POST_api_incidents
    
    
}


Summary and What’s Next

The Wolverine.HTTP library was originally built to be a supplement to MVC Core or Minimal API by allowing you to create endpoints that integrated well into Wolverine’s messaging, transactional outbox functionality, and existing transactional middleware. It has since grown into being more of a full-fledged alternative for building web services, but with potential for substantially less ceremony and far more testability than MVC Core.

In later posts I’ll talk more about the runtime architecture and how Wolverine squeezes out more performance by eliminating conditional runtime switching, reducing object allocations, and sidestepping the dictionary lookups that are endemic to other “flexible” .NET frameworks like MVC Core.

Wolverine.HTTP has not yet been used with Razor at all, and I’m not sure that will ever happen. Not to worry though, you can happily use Wolverine.HTTP in the same application with MVC Core controllers or even Minimal API endpoints.

OpenAPI support has been a constant challenge with Wolverine.HTTP as the OpenAPI generation in ASP.Net Core is very MVC-centric, but I think we’re in much better shape now.

In the next post, I think we’ll introduce asynchronous messaging with Rabbit MQ. At some point in this series I’m going to talk more about how the “Critter Stack” is well suited for a lower ceremony vertical slice architecture that (hopefully) creates a maintainable and testable codebase without all the typical Clean/Onion Architecture baggage that I could personally do without.

And just for fun…

My “History” with ASP.Net MVC

There’s no useful content in this section, just some navel-gazing. Even though I really haven’t had to use ASP.Net MVC too terribly much, I do have a long history with it:

  1. In the beginning, there was what we now call ASP Classic, and it was good. For that day and time anyway, when we would happily code directly in production, before TDD and SOLID and namby-pamby “source control.” (I started my development career in “Shadow IT” if that’s not obvious here.) And when we did use source control, it was VSS on the sly, because the official source control in the office was some far, far worse COBOL-centric thing that I don’t think even exists any longer.
  2. Next there was ASP.Net WebForms and it was dreadful. I hated it.
  3. We started collectively learning about Agile and wanted to practice Test Driven Development, and began to hate WebForms even more
  4. Ruby on Rails came out in the middle 00’s and made what later became the ALT.Net community absolutely loathe WebForms even more than we already did
  5. At an MVP Summit on the Microsoft campus, the one and only Scott Guthrie, the Gu himself, showed a very early prototype of ASP.Net MVC to a handful of us and I was intrigued. That continued onward through the official unveiling of MVC at the very first ALT.Net open spaces event in Austin in ’07.
  6. A few collaborators and I decided that early ASP.Net MVC was too high ceremony and went all “Captain Ahab” trying to make an alternative, open source framework called FubuMVC succeed, all while NancyFx, yet another Sinatra clone, became far more successful years before Microsoft finally got around to their own inevitable Sinatra clone (Minimal API)
  7. After .NET Core came along and made .NET a helluva lot better ecosystem, I decided that whatever, MVC Core is fine, it’s not going to be the biggest problem on our project, and if the client wants to use it, there’s no need to be upset about it. It’s fine, no really.
  8. MVC Core has gotten some incremental improvements over time that made it lower ceremony than earlier ASP.Net MVC, and that’s worth calling out as a positive
  9. People working with MVC Core started running into the problem of bloated controllers, and started using early MediatR as a way to kind of, sort of manage controller bloat by offloading it into focused command handlers. I mocked that approach mercilessly, but that was partially because of how awful a time I had helping folks do absurdly complicated middleware schemes with MediatR using StructureMap or Lamar (MVC Core + MediatR is probably worthwhile as a forcing function to avoid the controller bloat problems with MVC Core by itself)
  10. I worked on several long-running codebases built with MVC Core based on Clean Architecture templates that were ginormous piles of technical debt, and I absolutely blame MVC Core as a contributing factor for that
  11. I’m back to mildly disliking MVC Core (and I’m outright hostile to Clean/Onion templates). Not that you can’t write maintainable systems with MVC Core, but I think that its idiomatic usage can easily lead to unmaintainable systems. Let’s just say that I don’t think that MVC Core — and especially combined with some kind of Clean/Onion Architecture template as it very commonly is out in the wild — leads folks to the “pit of success” in the long run

See you at CodeMash 2024!

Hey, did you know that JasperFx Software is ready to provide formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change.

Hey folks, I’ll be making my first CodeMash appearance since before the pandemic. I’m happy to talk your ears off about the Critter Stack tools, but also just to connect with the technical community and learn more about what other folks are doing these days. See you there this week!

I’m giving a pair of talks this time out:

A Contrarian View of Software Architecture

In this talk, Jeremy will cast some aspersions on some of the industry best practices that teams adopt in order to create maintainable software, but can ironically be the very cause of debilitating technical debt. Jeremy will also attempt to explain a vision of how to sidestep these problems and other alternatives for codebase organization. And all with many pop culture references that are too old for his college age son to recognize.

CQRS with Event Sourcing using the “Critter Stack”

The “Critter Stack” tools (Marten and Wolverine) combine to form a very low ceremony approach to building software using a CQRS architectural approach combined with event sourcing for the persistence. In this talk, Jeremy will show how to use the Critter Stack to build a small web service. In particular, this talk tries to prove that the Critter Stack leads to simple code that is well suited for both fine grained unit testing of the business rules and efficient automated integration testing of the whole application. He’ll also show you how the Critter Stack fits very well into a “Vertical Slice Architecture” that sidesteps the technical complexity of the “Clean Architecture” approaches that Jeremy is going to ruthlessly mock in his first talk.