Plans for Marten V7 and Beyond

As you might know, the Marten core team is in the process of building a new business around the “Critter Stack” tools (Marten + Wolverine and a couple smaller things). I think we’ll shortly be able to offer formal paid support contracts through JasperFx Software, and we’re absolutely open for business for any kind of immediate consulting on your software initiatives.

Call this a follow up to the Critter Stack roadmap from back in March. Everything takes longer than you wish it would, but at least Wolverine went 1.0, and I’m happy with how that’s been received so far and its uptake.

In the immediate future, we’re kicking out a Marten 6.1 release with a new health check integration and some bug fixes. Shortly afterward, we hope to get a 6.1.1 release out with as many bug fixes as we can address quickly to clear the way for the next big release. With that out of the way, let’s get on to the big initiatives for Marten!
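If you want to kick the tires on the health check piece when 6.1 lands, the ASP.Net Core wiring might look something like this sketch. Treat AddMartenAsyncDaemonHealthCheck() as my assumption about the new extension’s name until the release notes confirm it:

// Register Marten's async daemon health check with the standard
// ASP.Net Core health check infrastructure (method name assumed)
builder.Services.AddHealthChecks()
    .AddMartenAsyncDaemonHealthCheck();

// Expose the health endpoint
app.MapHealthChecks("/health");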

Marten 7 Roadmap

Marten 6 was a lot of bug fixing and accommodating the latest version of Npgsql, our dependency for low level communication with PostgreSQL. Marten 7 is us getting on track for our strategic initiatives. Right now the goal is to move to an “open core” model for Marten where the current library and its capabilities stay open and free, but we will be building more advanced features for bigger workloads in commercial add on projects.

For the open core part of Marten, we’re aiming for:

  • Significant improvements to the LINQ provider support for better performing SQL and knocking down an uncomfortably long list of backlogged LINQ related issues and user requests. You can read more about that in Marten Linq Provider Improvements. Honestly, I think this is likely more work than the rest of the issues combined.
  • Maybe adding Strong Typed Identifiers as a logical follow on to the LINQ improvements. It’s been a frequent request, and I can see some significant opportunities for integration with Wolverine later.
  • First class subscriptions in the event sourcing. This is going to be a simple model for you to build your own persistent subscriptions to Marten events that’s a little more performant and robust than the current “IProjection for subscriptions” approach. More on this in the next section.
  • A way to parallelize the asynchronous projections in Marten for improved scalability based on Oskar’s strategy described in Scaling Marten.
  • .NET 8 integration. No idea what, if anything, that’s going to do to us.
  • Incorporating Npgsql 8. Also no idea yet whether that’s going to be full of unwanted surprises or a walk in the park.
  • A lightweight option for partial updates of Marten documents. We support this today through our PLv8 add on, but that’s likely being deprecated, and this request comes up quite a bit from people moving to Marten from MongoDb (see the sketch of the current Patch API after this list)
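For context, today’s PLv8-backed patching looks roughly like this sketch against a hypothetical Invoice document, using the Marten.PLv8 Patching API (enabled with opts.UseJavascriptTransformsAndPatching() in your Marten configuration):

// Update individual members of a document without loading,
// mutating, and re-serializing the whole thing
session.Patch<Invoice>(invoiceId).Set(x => x.PaidAtUtc, DateTimeOffset.UtcNow);
session.Patch<Invoice>(invoiceId).Increment(x => x.Version);
await session.SaveChangesAsync();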

Critter Stack Enterprise-y Edition

Now, for the fun part, the long planned, long dreamed of commercial add ons for true enterprise Critter Stack usage. We have initial plans to build an all new library using both Marten and Wolverine that will be available under a commercial subscription.

Projection Scalability

Most of the biggest issues with scaling Marten to bigger systems are related to the asynchronous projection support. The first goal is scalability through distributing projection work across all the running nodes within your application. Wolverine already has leader election and “agent assignment” functionality that was built with this specific need in mind. To make that a little more clear, let’s say that you’re using Marten with a database per tenant multi-tenancy strategy. With “Critter Stack Enterprise” (placeholder until we get a better name), the projection work might be distributed across nodes by tenant, something like this:

The “leader agent” would help redistribute work as nodes come online or offline.

Improving scalability by distributing load across nodes is a big step, but there are more tricks to play with projection throughput that would be part of this work.

Zero Downtime, Blue/Green Deployments

With the new projection daemon alternative, we will also be introducing a new “blue/green deployment” scheme where you will be able to change existing projections, introduce all new projections, or introduce new event signatures without the potentially long downtime for rebuilding projections that you might incur with Marten today. I feel like we have some solid ideas for how to finally pull this off.

More Robust Subscription Recipes

I don’t have many specifics here, but I think there’s an opportunity to also support more robust subscription offerings out of the Marten events using existing or new Wolverine capabilities. I also think we can offer stricter ordering and delivery guarantees with the Marten + Wolverine combination than we ever could with Marten alone. And frankly, I think we can do something more robust than what our obvious competitor tools do today.

Additional Event Store Throughput Improvements

Some other ideas we’re kicking around:

  • Introduce 2nd level caching into the aggregation projections
  • Elastic scalability for rebuilding projections
  • Hot/cold event store archiving that could improve both performance and scalability
  • Optional usage of higher performance serializers in the event store. That mostly knocks out LINQ querying for the event data

Other Stuff

We have many more ideas, but I think that the biggest theme is going to be ratcheting up the scalability of the event sourcing functionality and CQRS usage in general. There’s also a possibility of taking Marten’s event store functionality into cross-platform usage this year as well.

Thoughts? Requests? Wanna jump in line to hire JasperFx Software?

Wolverine’s Improved Azure Service Bus Support

Wolverine 1.6.0 came out today, and one of the main themes was a series of improvements to the Azure Service Bus integration. In addition to the basic support Wolverine already had for messaging with Azure Service Bus queues, topics, and subscriptions, you can now use native scheduled delivery, session identifiers for FIFO delivery, and expanded options for conventional routing topology.

First though, to get started with Azure Service Bus and Wolverine, install the WolverineFx.AzureServiceBus package with the NuGet mechanism of your choice:

dotnet add package WolverineFx.AzureServiceBus

Next, you’ll add just a little bit to your Wolverine bootstrapping like this:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine((context, opts) =>
    {
        // One way or another, you're probably pulling the Azure Service Bus
        // connection string out of configuration
        var azureServiceBusConnectionString = context
            .Configuration
            .GetConnectionString("azure-service-bus");

        // Connect to the broker in the simplest possible way
        opts.UseAzureServiceBus(azureServiceBusConnectionString)
            .AutoProvision()
            .UseConventionalRouting();
    }).StartAsync();

Native Message Scheduling

You can now use native Azure Service Bus scheduled delivery within Wolverine without any explicit configuration beyond what you already do to connect to Azure Service Bus. Putting that into perspective, if you have a message type named ValidateInvoiceIsNotLate that is routed to an Azure Service Bus queue or subscription, you can use this feature:

public async Task SendScheduledMessage(IMessageContext bus, Guid invoiceId)
{
    var message = new ValidateInvoiceIsNotLate
    {
        InvoiceId = invoiceId
    };

    // Schedule the message to be processed in a certain amount
    // of time
    await bus.ScheduleAsync(message, 30.Days());

    // Schedule the message to be processed at a certain time
    await bus.ScheduleAsync(message, DateTimeOffset.Now.AddDays(30));
}

That would also apply to scheduled retry error handling if the endpoint is also Inline:

using var host = Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Policies.OnException<TimeoutException>()
            // Just retry the message again on the
            // first failure
            .RetryOnce()

            // On the 2nd failure, put the message back into the
            // incoming queue to be retried later
            .Then.Requeue()

            // On the 3rd failure, retry the message again after a configurable
            // cool-off period. This schedules the message
            .Then.ScheduleRetry(15.Seconds())

            // On the next failure, move the message to the dead letter queue
            .Then.MoveToErrorQueue();

    }).StartAsync();

Topic & Subscription Conventions

The original conventional routing with Azure Service Bus just sent and listened to queues named after the message types within the application. Wolverine 1.6 adds an additional routing convention to publish outgoing messages to topics and to listen for known handled messages with topics and subscriptions. In all cases, you can customize the convention naming and any element of the Wolverine listening or sending endpoints, or any of the affected Azure Service Bus topics or subscriptions.

The syntax for this option is shown below:

opts.UseAzureServiceBusTesting()
    .UseTopicAndSubscriptionConventionalRouting(convention =>
    {
        // Optionally control every aspect of the convention and
        // its applicability to types
        // as well as overriding any listener, sender, topic, or subscription
        // options
    })

    .AutoProvision()
    .AutoPurgeOnStartup();

Session Identifiers and FIFO Delivery

You can now take advantage of sessions and first-in, first-out queues in Azure Service Bus with Wolverine. To tell Wolverine that an Azure Service Bus queue or subscription should require sessions, you have this syntax shown in an internal test:

_host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseAzureServiceBusTesting()
            .AutoProvision().AutoPurgeOnStartup();

        opts.ListenToAzureServiceBusQueue("send_and_receive");
        opts.PublishMessage<AsbMessage1>().ToAzureServiceBusQueue("send_and_receive");

        opts.ListenToAzureServiceBusQueue("fifo1")
            
            // Require session identifiers with this queue
            .RequireSessions()
            
            // This controls the Wolverine handling to force it to process
            // messages sequentially
            .Sequential();
        
        opts.PublishMessage<AsbMessage2>()
            .ToAzureServiceBusQueue("fifo1");

        opts.PublishMessage<AsbMessage3>().ToAzureServiceBusTopic("asb3");
        opts.ListenToAzureServiceBusSubscription("asb3")
            .FromTopic("asb3")
            
            // Require sessions on this subscription
            .RequireSessions(1)
            
            .ProcessInline();
    }).StartAsync();

Wolverine is using the “group-id” nomenclature from the AMQP standard, but for Azure Service Bus, this is directly mapped to the SessionId property on the Azure Service Bus client internally.

To publish messages to Azure Service Bus with a session id, you will, of course, need to supply the session id:

// bus is an IMessageBus
await bus.SendAsync(new AsbMessage3("Red"), new DeliveryOptions { GroupId = "2" });
await bus.SendAsync(new AsbMessage3("Green"), new DeliveryOptions { GroupId = "2" });
await bus.SendAsync(new AsbMessage3("Refactor"), new DeliveryOptions { GroupId = "2" });

You can also send messages with session identifiers through cascading messages as shown in a fake message handler below:

public static IEnumerable<object> Handle(IncomingMessage message)
{
    yield return new Message1().WithGroupId("one");
    yield return new Message2().WithGroupId("one");

    yield return new Message3().ScheduleToGroup("one", 5.Minutes());

    // Long hand
    yield return new Message4().WithDeliveryOptions(new DeliveryOptions
    {
        GroupId = "one"
    });
}

Using Sql Server as a Message Queue with Wolverine

Wolverine 1.4.0 was released last week (and a smaller 1.5.0, with a medium sized 1.6.0 coming Monday). The biggest new feature was a brand new option to use Microsoft Sql Server (or Azure Sql) as a durable message transport with Wolverine.

Let’s say your system is already using Sql Server for persistence, you need some durable, asynchronous messaging, and wouldn’t it be nice to not have to introduce any new infrastructure into the mix? Assuming you’ve decided to also use Wolverine, you can get started with this approach by adding the WolverineFx.SqlServer Nuget to your application:

dotnet add package WolverineFx.SqlServer

Here’s a sample application bootstrapping that shows the inclusion and configuration of Sql Server-backed queueing:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine((context, opts) =>
    {
        var connectionString = context
            .Configuration
            .GetConnectionString("sqlserver");
        
        // This adds both Sql Server backed
        // transactional inbox/outbox support
        // and the messaging transport support
        opts
           .UseSqlServerPersistenceAndTransport(connectionString, "myapp")
            
            // Tell Wolverine to build out all necessary queue or scheduled message
            // tables on demand as needed
            .AutoProvision()
            
            // Option that may be helpful in testing, but probably bad
            // in production!
            .AutoPurgeOnStartup();

        // Use this extension method to create subscriber rules
        opts.PublishAllMessages()
            .ToSqlServerQueue("outbound");

        // Use this to set up queue listeners
        opts.ListenToSqlServerQueue("inbound")

            // Optional circuit breaker usage
            .CircuitBreaker(cb =>
            {
                // fine tune the circuit breaker
                // policies here
            })
            
            // Optionally specify how many messages to 
            // fetch into the listener at any one time
            .MaximumMessagesToReceive(50);
    }).StartAsync();

The Sql Server transport is pretty simple; it basically just supports named queues right now. Here are a couple of useful properties of the transport that will hopefully make it more useful to you:

  • Scheduled message delivery is absolutely supported with the Sql Server Transport, and some care was taken to optimize the database load and throughput when using this feature
  • Sql Server backed queues can be either “buffered in memory” (Wolverine’s message batching) or “durable,” meaning that the queues are integrated into both the transactional inbox and outbox for durable systems (see the sketch after this list)
  • Wolverine can build database tables as necessary for the queue much like it does today for the transactional inbox and outbox. Moreover, the configured queue tables are also part of the stateful resource model in the Critter Stack world that provides quite a bit of command line management directly into your application.
  • The Sql Server backed queues support Wolverine’s circuit breaker functionality on listeners
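A quick sketch of that buffered versus durable choice, assuming the generic endpoint configuration extensions described later in this post apply to Sql Server queues as well:

// Buffered: higher throughput, but messages queued in memory
// can be lost if the process crashes
opts.ListenToSqlServerQueue("ephemeral")
    .BufferedInMemory();

// Durable: enrolled in the transactional inbox/outbox for
// guaranteed processing
opts.ListenToSqlServerQueue("critical")
    .UseDurableInbox();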

This feature is something that folks have asked about in the past, but I’ve always been reluctant to try because databases don’t make for great, first class queueing mechanisms. That being said, I’m working with a JasperFx Software client who wanted a more robust local queueing mechanism that could handle much more throughput for scheduled messaging, and thus, the new Sql Server Transport was born.

There will be a full fledged PostgreSQL backed queue at some point, and it might be a little more robust even based on some preliminary work from a Wolverine contributor, but that’s probably not an immediate priority.

Understanding Endpoints in Wolverine Messaging

In Wolverine terminology, an “Endpoint” is the configuration time model for any location or mechanism where Wolverine sends or receives messages, including local Wolverine queues within your application. Think of an external resource like a Rabbit MQ exchange or an Amazon SQS queue. The Async API specification refers to this as a channel, and Wolverine may very well change its nomenclature in the future to be consistent with Async API. While there are somewhat different configuration options for a Rabbit MQ exchange versus an Azure Service Bus queue, there are some common elements. For the sake of this post (which is mostly ripped out of the Wolverine documentation), endpoints in Wolverine are processed in one of three modes:

  1. Inline – messages are sent immediately, and processed sequentially. While you can parallelize the listeners for better throughput, this is your most likely choice if message delivery order matters to you
  2. Buffered – kind of a batched, in memory mode
  3. Durable – batched in memory, but also backed every step of the way by Wolverine’s transactional inbox/outbox support

Choosing between these three modes is a matter of balancing throughput and delivery guarantees. With that, here’s a deeper dive into the three modes. Do note, though, that not every transport type can support all three modes.

Inline Endpoints

Wolverine endpoints come in three basic flavors, with the first being Inline endpoints:

// Configuring a Wolverine application to listen to
// an Azure Service Bus queue with the "Inline" mode
opts.ListenToAzureServiceBusQueue(queueName, q => q.Options.AutoDeleteOnIdle = 5.Minutes()).ProcessInline();

With inline endpoints, as the name implies, calling IMessageBus.SendAsync() immediately sends the message to the external message broker. Likewise, messages received from an external message queue are processed inline before Wolverine acknowledges to the message broker that the message is received.

(Diagram: Inline Endpoints)

In the absence of a durable inbox/outbox, using inline endpoints is “safer” in terms of guaranteed delivery. As you might think, using inline agents can bottleneck the message processing, but that can be alleviated by opting into parallel listeners, as sketched below.
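A sketch of that combination, assuming the ListenerCount() option is available for your transport of choice:

// Keep strict Inline processing for stronger delivery guarantees,
// but run several parallel listeners against the same queue
opts.ListenToAzureServiceBusQueue("incoming")
    .ProcessInline()
    .ListenerCount(5);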

Buffered Endpoints

In the second Buffered option, messages are queued locally between the actual external broker and the Wolverine handlers or senders.

To opt into buffering, you use this syntax:

// I overrode the buffering limits just to show
// that they exist for "back pressure"
opts.ListenToAzureServiceBusQueue("incoming")
    .BufferedInMemory(new BufferingLimits(1000, 200));

At runtime, you have a local TPL Dataflow queue between the Wolverine callers and the broker:

(Diagram: Buffered Endpoints)

On the listening side, buffered endpoints do support back pressure (of sorts) where Wolverine will stop the actual message listener if too many messages are queued in memory to avoid chewing up your application memory. In transports like Amazon SQS that only support batched message sending or receiving, Buffered is the default mode as that facilitates message batching.

Buffered message sending and receiving can lead to higher throughput, and should be considered for cases where messages are ephemeral or expire and throughput is more important than delivery guarantees. The downside is that messages in the in memory queues can be lost in the case of the application shutting down unexpectedly — but Wolverine tries to “drain” the in memory queues on normal application shutdown.

Durable Endpoints

Durable endpoints behave like buffered endpoints, but also use the durable inbox/outbox message storage to create much stronger guarantees about message delivery and processing. You will need to use Durable endpoints in order to truly take advantage of the persistent outbox mechanism in Wolverine. To opt into making an endpoint durable, use this syntax:

// I overrode the buffering limits just to show
// that they exist for "back pressure"
opts.ListenToAzureServiceBusQueue("incoming")
    .UseDurableInbox(new BufferingLimits(1000, 200));

opts.PublishAllMessages().ToAzureServiceBusQueue("outgoing")
    .UseDurableOutbox();

Or use policies to do this in one fell swoop (which may not be what you actually want, but you could do this!):

opts.Policies.UseDurableOutboxOnAllSendingEndpoints();

As shown below, the Durable endpoint option adds an extra step to the Buffered behavior to add database storage of the incoming and outgoing messages:

(Diagram: Durable Endpoints)

Outgoing messages are deleted in the durable outbox upon successful sending acknowledgements from the external broker. Likewise, incoming messages are also deleted from the durable inbox upon successful message execution.

The Durable endpoint option makes Wolverine’s local queueing robust enough to use for cases where you need guaranteed processing of messages, but don’t want to use an external broker.
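A sketch of that usage with Wolverine’s local queues (the policy method name reflects current Wolverine, to the best of my knowledge):

// Opt a single local, in process queue into durable,
// inbox-backed processing
opts.LocalQueue("important-work")
    .UseDurableInbox();

// Or make every local queue durable in one fell swoop
opts.Policies.UseDurableLocalQueues();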

Explicitly Route Messages with Wolverine

TL;DR: Wolverine message handler signatures can lead to easier unit testing code than comparable “IHandler of T” frameworks.

Most of the time, I think you can just allow Wolverine to handle message routing for you with some simple configured rules or conventions. However, once in a while you’ll need to override those rules and tell Wolverine exactly where messages should go.

I’ve been working (playing) with Midjourney quite a bit lately trying to make images for the JasperFx Software website. You can try it out to generate images for free, but the image generation gets a lower priority than their paying customers. Using that as an example, let’s say that we were using Wolverine to build our own Midjourney clone. At some point, there’s maybe an asynchronous message handler like this one that takes a request to generate a new image based on the user’s prompt, but routes the actual work to either a higher or lower priority queue based on whether the user is a premium customer:

    public record GenerateImage(string Prompt, Guid ImageId);

    public record ImageRequest(string Prompt, string CustomerId);

    public record ImageGenerated(Guid Id, byte[] Image);

    public class Customer
    {
        public string Id { get; set; }
        public bool PremiumMembership { get; set; }
    }

    public class ImageSaga : Saga
    {
        public Guid Id { get; set; }
        
        public string CustomerId { get; set; }

        public Task Handle(ImageGenerated generated)
        {
            // look up the customer, figure out how to send the
            // image to their client.
            throw new NotImplementedException("Not done yet:)");
            
            MarkCompleted();
        }
    }
    
    public static class GenerateImageHandler
    {
        // I'm assuming the usage of Marten middleware here
        // to handle transactions and the outbox mechanics
        public static async Task HandleAsync(
            ImageRequest request, 
            IDocumentSession session, 
            IMessageBus messageBus,
            CancellationToken cancellationToken)
        {
            var customer = await session
                .LoadAsync<Customer>(request.CustomerId, cancellationToken);

            // I'm starting a new saga to track the state of the 
            // image when we get the callback from the downstream
            // image generation service
            var imageSaga = new ImageSaga();
            session.Insert(imageSaga);

            var outgoing = new GenerateImage(request.Prompt, imageSaga.Id);
            if (customer.PremiumMembership)
            {
                // Send the message to a named endpoint we've configured for the faster
                // processing
                await messageBus.EndpointFor("premium-processing")
                    .SendAsync(outgoing);
            }
            else
            {
                // Send the message to a named endpoint we've configured for slower
                // processing
                await messageBus.EndpointFor("basic-processing")
                    .SendAsync(outgoing);
            }
        }
    }

A couple notes on the code above:

  • I’m assuming the usage of Marten for persistence (of course), with the auto transactional middleware policy applied
  • I’ve configured a PostgreSQL backed outbox for Wolverine
  • It’s likely a slow process, so I’m assuming there’s going to be an asynchronous callback from the actual image generator later. I’m leveraging Wolverine’s stateful saga support to track the customer of the original image for processing later

Wolverine V1.3 dropped today with a little improvement for exactly this scenario (based on some usage by a JasperFx client) so you can use cascading messages instead of having to deal directly with the IMessageBus service. Let’s rewrite the explicit code up above, but this time try to turn the actual routing logic into a pure function that could be easy to unit test:

    public static class GenerateImageHandler
    {
        // Using Wolverine's compound handlers to remove all the asynchronous
        // junk from the main Handle() method
        public static Task<Customer> LoadAsync(
            ImageRequest request, 
            IDocumentSession session,
            CancellationToken cancellationToken)
        {
            return session.LoadAsync<Customer>(request.CustomerId, cancellationToken);
        }
        
        
        public static (RoutedToEndpointMessage<GenerateImage>, ImageSaga) Handle(
            ImageRequest request, 
            Customer customer)
        {

            // I'm starting a new saga to track the state of the 
            // image when we get the callback from the downstream
            // image generation service
            var imageSaga = new ImageSaga
            {
                // I need to assign the image id in memory
                // to make this all work
                Id = CombGuidIdGeneration.NewGuid()
            };

            var outgoing = new GenerateImage(request.Prompt, imageSaga.Id);
            var destination = customer.PremiumMembership ? "premium-processing" : "basic-processing";
            
            return (outgoing.ToEndpoint(destination), imageSaga);
        }
    }

The handler above is equivalent in functionality to the earlier version. It’s not really that much less code, but I think it’s a bit more declarative. What’s most important to me is the potential for unit testing the decision about where the customer requests go, as shown in this fake test:

    [Fact]
    public void should_send_the_request_to_premium_processing_for_premium_customers()
    {
        var request = new ImageRequest("a wolverine ice skating in the country side", "alice");
        var customer = new Customer
        {
            Id = "alice",
            PremiumMembership = true
        };

        var (command, image) = GenerateImageHandler.Handle(request, customer);
        
        command.EndpointName.ShouldBe("premium-processing");
        command.Message.Prompt.ShouldBe(request.Prompt);
        command.Message.ImageId.ShouldBe(image.Id);
        
        image.CustomerId.ShouldBe(request.CustomerId);
    }

What I’m hoping you take away from that code sample is that testing the logic part of the ImageRequest message processing turns into a simple state-based test — meaning that you’re just pushing in the known inputs and measuring the values returned by the method. You’d still need to pair this unit test with a full integration test, but at least you’d know that the routing logic is correct before you wrestle with potential integration issues.

A-Frame Architecture with Wolverine

I’m weaseling into making a second blog post about a code sample that I mostly stole, just to meet my unofficial goal of 2-3 posts a week promoting the Critter Stack.

Last week I wrote a blog post ostensibly about Marten’s compiled query feature that also included this code sample that I adapted from Oskar’s excellent post on vertical slices:

using DailyAvailability = System.Collections.Generic.IReadOnlyList<Booking.RoomReservations.GettingRoomTypeAvailability.DailyRoomTypeAvailability>;
 
namespace Booking.RoomReservations.ReservingRoom;
 
public record ReserveRoomRequest(
    RoomType RoomType,
    DateOnly From,
    DateOnly To,
    string GuestId,
    int NumberOfPeople
);
 
public static class ReserveRoomEndpoint
{
    // More on this in a second...
    public static async Task<DailyAvailability> LoadAsync(
        ReserveRoomRequest request,
        IDocumentSession session)
    {
        // Look up the availability of this room type during the requested period
        return (await session.QueryAsync(new GetRoomTypeAvailabilityForPeriod(request))).ToList();
    }
 
    [WolverinePost("/api/reservations")]
    public static (CreationResponse, StartStream<RoomReservation>) Post(
        ReserveRoomRequest command,
        DailyAvailability dailyAvailability)
    {
        // Make sure there is availability for every day
        if (dailyAvailability.Any(x => x.AvailableRooms == 0))
        {
            throw new InvalidOperationException("Not enough available rooms!");
        }
 
        var reservationId = CombGuidIdGeneration.NewGuid().ToString();
 
        // I copied this, but I'd probably eliminate the record usage in favor
        // of init only properties so you can make the potentially error prone
        // mapping easier to troubleshoot in the future
        // That folks is the voice of experience talking
        var reserved = new RoomReserved(
            reservationId,
            null,
            command.RoomType,
            command.From,
            command.To,
            command.GuestId,
            command.NumberOfPeople,
            ReservationSource.Api,
            DateTimeOffset.UtcNow
        );
 
        return (
            // This would be the response body, and this also helps Wolverine
            // to create OpenAPI metadata for the endpoint
            new CreationResponse($"/api/reservations/{reservationId}"),
             
            // This return value is recognized by Wolverine as a "side effect"
            // that will be processed as part of a Marten transaction
            new StartStream<RoomReservation>(reservationId, reserved)
        );
    }
}

The original intent of that code sample was to show off how the full “critter stack” (Marten & Wolverine together) enables relatively low ceremony code that also promotes a high degree of testability, and does all of that without requiring developers to invest a lot of time in complicated, prescriptive architectures like a typical Clean Architecture structure.

Specifically today though, I want to zoom in on “testability” and talk about how Wolverine explicitly encourages code that exhibits what Jim Shore famously called the “A-Frame Architecture” in its message handlers, but does so with functional decomposition rather than oodles of abstractions and layers.

Using the “A-Frame Architecture”, you roughly want to divide your code into three sets of functionality:

  1. The domain logic for your system, which I would say includes “deciding” what actions to take next.
  2. Infrastructural service providers
  3. Conductor or mediator code that invokes both the infrastructure and domain logic code to decouple the domain logic from infrastructure code

In the message handler above for the `ReserveRoomRequest` command, Wolverine itself is acting as the “glue” around the methods of the HTTP handler code, keeping the domain logic (the ReserveRoomEndpoint.Post() method that “decides” what event should be captured) separate from the raw Marten infrastructure that loads existing data and persists changes back to the database.

To illustrate that in action, here’s the full generated code that Wolverine compiles to actually handle the full HTTP request (with some explanatory annotations I made by hand):

    public class POST_api_reservations : Wolverine.Http.HttpHandler
    {
        private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
        private readonly Marten.ISessionFactory _sessionFactory;

        public POST_api_reservations(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Marten.ISessionFactory sessionFactory) : base(wolverineHttpOptions)
        {
            _wolverineHttpOptions = wolverineHttpOptions;
            _sessionFactory = sessionFactory;
        }



        public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
        {
            await using var documentSession = _sessionFactory.OpenSession();
            var (command, jsonContinue) = await ReadJsonAsync<Booking.RoomReservations.ReservingRoom.ReserveRoomRequest>(httpContext);
            if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;

            // Wolverine has a convention to call methods named
            // "LoadAsync()" before the main endpoint method, and
            // to pipe data returned from this "Before" method
            // to the parameter inputs of the main method
            // as that actually makes sense
            var dailyRoomTypeAvailabilityIReadOnlyList = await Booking.RoomReservations.ReservingRoom.ReserveRoomEndpoint.LoadAsync(command, documentSession).ConfigureAwait(false);

            // Call the "real" HTTP handler method. 
            // The first value is the HTTP response body
            // The second value is a "side effect" that
            // will be part of the transaction around this
            (var creationResponse, var startStream) = Booking.RoomReservations.ReservingRoom.ReserveRoomEndpoint.Post(command, dailyRoomTypeAvailabilityIReadOnlyList);
            
            // Placed by Wolverine's ISideEffect policy
            startStream.Execute(documentSession);

            // This little ugly code helps get the correct
            // status code for creation for those of you
            // who can't be satisfied by using 200 for everything
            ((Wolverine.Http.IHttpAware)creationResponse).Apply(httpContext);
            
            // Commit any outstanding Marten changes
            await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);

            // Write the response body as JSON
            await WriteJsonAsync(httpContext, creationResponse);
        }

    }

Wolverine by itself is acting as the mediator between the infrastructure concerns (loading & persisting data) and the business logic, which in Wolverine world becomes a pure function that is typically much easier to unit test than code that has direct coupling to infrastructure concerns, even if that coupling is through abstractions.

Testing wise, if I were actually building a real endpoint like that shown above, I would choose to:

  1. Unit test the Post() method itself by “pushing” inputs to it through the room availability and command data, then assert the expected outcome on the event published through the StartStream<RoomReservation> value returned by that method. That’s pure state-based testing for the easiest possible unit testing. As an aside, I would claim that this method is an example of the Decider pattern for testable event sourcing business logic code.
  2. I don’t think I’d bother testing the LoadAsync() method by itself, but instead I’d opt to use something like Alba to write an end to end test at the HTTP layer to prove out the entire workflow (as sketched below), but only after the unit tests for the Post() method are all passing.
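To make item 2 concrete, here’s a rough sketch of what that Alba end to end test could look like. The RoomType.Single value and the 201 status code expectation are assumptions about this sample application, not code from the real project:

[Fact]
public async Task post_a_room_reservation_end_to_end()
{
    // Bootstrap the real application in memory with Alba
    await using var host = await AlbaHost.For<Program>();

    var request = new ReserveRoomRequest(
        RoomType.Single,           // assumed enum value
        new DateOnly(2024, 5, 1),
        new DateOnly(2024, 5, 3),
        "guest-1",
        1);

    // Execute the HTTP request and assert on the response
    await host.Scenario(x =>
    {
        x.Post.Json(request).ToUrl("/api/reservations");
        x.StatusCodeShouldBe(201);   // CreationResponse returns 201
    });
}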

Responsibility Driven Design

While the “A-Frame Architecture” metaphor is a relatively recent influence upon my design thinking, I’ve long been a proponent of Responsibility Driven Design (RDD) as explained by Rebecca Wirfs-Brock’s excellent A Brief Tour of Responsibility Driven Design. Don’t dismiss that paper because of its age, because the basic concepts and strategies for identifying different responsibilities in your system as a prerequisite for designing or structuring code put forth in that paper are absolutely useful even today.

Applying Responsibility Driven Design to the sample HTTP endpoint code above, I would say that:

  • The Marten IDocumentSession is a “service provider”
  • The Wolverine generated code acts as a “coordinator”
  • The Post() method is responsible for “deciding” what events should be emitted and persisted. One of the most helpful pieces of advice in RDD is to sometimes treat “deciding” to do an action as a separate responsibility from actually carrying out the action. That can lead to better isolating the decision making logic away from infrastructural concerns for easier testing

It’s also old as hell for software, but one of my personal favorite articles I ever wrote was Object Role Stereotypes for MSDN Magazine way back in 2008.

Wolverine has some new tricks to reduce boilerplate code in HTTP endpoints

Wolverine 1.2.0 rolled out this morning with some enhancements for HTTP endpoints. In the realm of HTTP endpoints, Wolverine’s raison d’être is to finally deliver a development experience to .NET developers that requires very low code ceremony, maximizes testability, and does all of that with good performance. Between some feedback from early adopters and some repetitive boilerplate code I saw doing a code review for a client last week (woot, I’ve actually got paying clients now!), Wolverine.Http got a couple new tricks to speed you up.

First off, here’s a common pattern in HTTP service development: based on a route argument, you first load some kind of entity from persistence. If the data is not found, return a 404 status code meaning the resource was not found; otherwise, continue working against the entity data you just loaded. Here’s a shorthand way of doing that now with Wolverine “compound handlers“:

public record UpdateRequest(string Name, bool IsComplete);

public static class UpdateEndpoint
{
    // Find required Todo entity for the route handler below
    public static Task<Todo?> LoadAsync(int id, IDocumentSession session) 
        => session.LoadAsync<Todo>(id);
    
    [WolverinePut("/todos/{id:int}")]
    public static StoreDoc<Todo> Put(
        // Route argument
        int id,
        
        // The request body
        UpdateRequest request,
        
        // Entity loaded by the method above, 
        // but note the [Required] attribute
        [Required] Todo? todo)
    {
        todo.Name = request.Name;
        todo.IsComplete = request.IsComplete;

        return MartenOps.Store(todo);
    }
}

You’ll notice that the LoadAsync() method is looking up the Todo entity for the route parameter, where Wolverine would normally be passing that value to the matching Todo parameter of the main Put method. In this case though, because of the [Required] attribute, Wolverine.Http will stop processing with a 404 status code if the Todo cannot be found.

By contrast, the feature was spawned in the first place by a higher ceremony alternative I saw in a code review (shown as a screenshot in the original post). The author had to pollute that code with attributes strictly for OpenAPI (Swagger) metadata, because the valid response types cannot be inferred when an endpoint returns an IResult value that could frankly be just about anything in the world.

In the Wolverine 1.2 version above, Wolverine.Http is able to infer the exact same OpenAPI metadata as that busier Put() method. Also, and I think this is potentially valuable, the Wolverine 1.2 version turns the behavior into a purely synchronous method that is going to be mechanically easier to unit test.
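To back up that last claim, here’s a sketch of what that purely synchronous unit test could look like, assuming StoreDoc<Todo> exposes the stored entity through a Document property (an assumption on my part):

[Fact]
public void put_updates_the_existing_todo()
{
    var todo = new Todo { Id = 1, Name = "old name", IsComplete = false };

    // Pure, synchronous call with no infrastructure in sight
    var op = UpdateEndpoint.Put(1, new UpdateRequest("new name", true), todo);

    todo.Name.ShouldBe("new name");
    todo.IsComplete.ShouldBeTrue();

    // The Marten side effect should wrap the same entity
    // (the Document property name is an assumption)
    op.Document.ShouldBeSameAs(todo);
}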

So that’s required data, now let’s turn our attention to Wolverine’s new ProblemDetails support. While there is a Fluent Validation middleware package for Wolverine.Http that supports ProblemDetails in a generic way, I’m seeing usages where you just need to do some explicit validation for an HTTP endpoint. Wolverine 1.2 added this usage:

public class ProblemDetailsUsageEndpoint
{
    public ProblemDetails Before(NumberMessage message)
    {
        // If the number is greater than 5, fail with a
        // validation message
        if (message.Number > 5)
            return new ProblemDetails
            {
                Detail = "Number is bigger than 5",
                Status = 400
            };

        // All good, keep on going!
        return WolverineContinue.NoProblems;
    }

    [WolverinePost("/problems")]
    public static string Post(NumberMessage message)
    {
        return "Ok";
    }
}

public record NumberMessage(int Number);

Wolverine.Http now (as of 1.2.0) has a convention that sees a return value of ProblemDetails and treats it as a “continuation” telling the HTTP handler code what to do next. One of two things will happen:

1. If the ProblemDetails return value is the same instance as WolverineContinue.NoProblems, just keep going
2. Otherwise, write the ProblemDetails out to the HTTP response and exit the HTTP request handling
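That convention keeps the validation logic itself trivially testable as a pure method call. A quick sketch, using Shouldly assertions as elsewhere in these posts:

[Fact]
public void continues_when_the_number_is_small_enough()
{
    var endpoint = new ProblemDetailsUsageEndpoint();

    // Returning the exact WolverineContinue.NoProblems instance
    // tells Wolverine to keep processing
    endpoint.Before(new NumberMessage(3))
        .ShouldBeSameAs(WolverineContinue.NoProblems);
}

[Fact]
public void returns_a_400_problem_for_big_numbers()
{
    var endpoint = new ProblemDetailsUsageEndpoint();

    var problems = endpoint.Before(new NumberMessage(10));
    problems.Status.ShouldBe(400);
    problems.Detail.ShouldBe("Number is bigger than 5");
}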

Just as in the first [Required] usage, Wolverine is able to infer OpenAPI metadata about your endpoint to add a “produces application/problem+json with a 400 status code” item. And for those of you who like to get fancier or more specific with your HTTP status code usage, you can happily override that behavior with your own metadata attributes like so:

    // Use 418 as the status code instead
    [ProducesResponseType(typeof(ProblemDetails), 418)]

Compiled Queries with Marten

I had tentatively promised to do a full “critter stack” version of Oskar’s sample application in his Vertical Slices in Practice post last week that used Marten‘s event sourcing support. I started doing that this morning, but quit because it was just coming out too similar to my earlier post this week on Low Ceremony Vertical Slice Architecture with Wolverine.

In Oskar’s sample reservation booking application, there was an HTTP endpoint that handled a ReserveRoomRequest command and emitted a new RoomReserved event for a new RoomReservation event stream. Part of that processing was validating the availability of rooms of the requested type during the time period of the reservation request. Just for reference, here’s my version of Oskar’s ReserveRoomEndpoint:

using DailyAvailability = System.Collections.Generic.IReadOnlyList<Booking.RoomReservations.GettingRoomTypeAvailability.DailyRoomTypeAvailability>;

namespace Booking.RoomReservations.ReservingRoom;

public record ReserveRoomRequest(
    RoomType RoomType,
    DateOnly From,
    DateOnly To,
    string GuestId,
    int NumberOfPeople
);

public static class ReserveRoomEndpoint
{
    // More on this in a second...
    public static async Task<DailyAvailability> LoadAsync(
        ReserveRoomRequest request,
        IDocumentSession session)
    {
        // Look up the availability of this room type during the requested period
        return (await session.QueryAsync(new GetRoomTypeAvailabilityForPeriod(request))).ToList();
    }

    [WolverinePost("/api/reservations")]
    public static (CreationResponse, StartStream<RoomReservation>) Post(
        ReserveRoomRequest command,
        DailyAvailability dailyAvailability)
    {
        // Make sure there is availability for every day
        if (dailyAvailability.Any(x => x.AvailableRooms == 0))
        {
            throw new InvalidOperationException("Not enough available rooms!");
        }

        var reservationId = CombGuidIdGeneration.NewGuid().ToString();

        // I copied this, but I'd probably eliminate the record usage in favor
        // of init only properties so you can make the potentially error prone
        // mapping easier to troubleshoot in the future
        // That folks is the voice of experience talking
        var reserved = new RoomReserved(
            reservationId,
            null,
            command.RoomType,
            command.From,
            command.To,
            command.GuestId,
            command.NumberOfPeople,
            ReservationSource.Api,
            DateTimeOffset.UtcNow
        );

        return (
            // This would be the response body, and this also helps Wolverine
            // to create OpenAPI metadata for the endpoint
            new CreationResponse($"/api/reservations/{reservationId}"),
            
            // This return value is recognized by Wolverine as a "side effect"
            // that will be processed as part of a Marten transaction
            new StartStream<RoomReservation>(reservationId, reserved)
        );
    }
}

For this post, I’d like you to focus on the LoadAsync() method above. That’s utilizing Wolverine’s compound handler technique to split out the data loading so that the actual endpoint Post() method can be a pure function that’s easily unit tested by just “pushing” in the inputs and asserting on either the values returned or the presence of an exception in the validation logic.

Back to that LoadAsync() method. Let’s assume that this HTTP service is going to be under quite a bit of load and it wouldn’t hurt to apply some performance optimization. Or also imagine that the data querying to find the room availability of a certain room type and a time period will be fairly common within the system at large. I’m saying all that to justify the usage of Marten’s compiled query feature as shown below:

public class GetRoomTypeAvailabilityForPeriod : ICompiledListQuery<DailyRoomTypeAvailability>
{
    // Sorry, but this signature is necessary for the Marten mechanics
    public GetRoomTypeAvailabilityForPeriod()
    {
    }

    public GetRoomTypeAvailabilityForPeriod(ReserveRoomRequest request)
    {
        RoomType = request.RoomType;
        From = request.From;
        To = request.To;
    }

    public RoomType RoomType { get; set; }
    public DateOnly From { get; set; }
    public DateOnly To { get; set; }

    public Expression<Func<IMartenQueryable<DailyRoomTypeAvailability>, IEnumerable<DailyRoomTypeAvailability>>>
        QueryIs()
    {
        return q => q.Where(day => day.RoomType == RoomType && day.Date >= From && day.Date <= To);
    }
}

First of all, this is Marten’s version of the Query Object pattern, which enables you to share the query definition in declarative ways throughout the codebase. (I’ve heard other folks call this a “Specification,” but that name is overloaded a bit too much in the software development world.) Removing duplication is certainly a good thing all by itself. Doing so in a way that eliminates the need for extra repository abstractions is also a win in my book.

Secondly, by using the “compiled query”, Marten is able to cache the whole execution plan in memory (technically it’s generating code at runtime) for faster runtime execution. The dirty, barely recognized fact in .NET development today is that the act of parsing LINQ statements and converting the intermediate query model into actionable SQL and glue code is not cheap. Marten compiled queries sidestep all that preliminary parsing junk and let you skip right to the execution part.

It’s a possibly underused and under-appreciated feature within Marten, but compiled queries are a great way to optimize your system’s performance and possibly clean up code duplication in simple ways.

Low Ceremony Vertical Slice Architecture with Wolverine

TL;DR: Wolverine can enable you to write testable code and achieve separation of concerns in your server side code with far less code ceremony than typical Clean Architecture type approaches.

I’m part of the mini-backlash against heavyweight, prescriptively layered architectural patterns like the various flavors of Hexagonal Architecture. I even did a whole talk on that subject at NDC Oslo this year.

Instead, I’m a big fan of keeping closely related code closer together with something like what Jimmy Bogard coined as Vertical Slices. Conveniently enough, I happen to think that Wolverine is a good fit for that style.

From a conference talk I did early last year, I started to build out a sample “TeleHealth Portal” system using the full “critter stack” with both Marten for persistence and event sourcing and Wolverine for everything else. Inside of this fictional TeleHealth system there will be a web service that adds a healthcare provider to an active board of related appointment requests (as an example, you might have a board for pediatric appointments in the state of Texas). When this web service executes, it needs to:

  1. Find the related information about the requested, active Board and the Provider
  2. Validate that the provider in question is able to join the active board based on various business rules like “is this provider licensed in this particular state and for some specialty?”. If the validation fails, the web service should return the validation message per the ProblemDetails specification
  3. Assuming the validation is good, start a new event stream with Marten for a ProviderShift that will track what the provider does during their active shift on that board for that specific day

I’ll need to add a little more context afterward for some application configuration, but here’s that functionality in one single Wolverine.Http endpoint class — with the assumption that the heavy duty business logic for validating the provider & board assignment is in the business domain model:

public record StartProviderShift(Guid BoardId, Guid ProviderId);
public record ShiftStartingResponse(Guid ShiftId) : CreationResponse("/shift/" + ShiftId);

public static class StartProviderShiftEndpoint
{
    // This would be called before the method below
    public static async Task<(Board, Provider, IResult)> LoadAsync(StartProviderShift command, IQuerySession session)
    {
        // You could get clever here and batch the queries to Marten
        // here, but let that be a later optimization step
        var board = await session.LoadAsync<Board>(command.BoardId);
        var provider = await session.LoadAsync<Provider>(command.ProviderId);

        if (board == null || provider == null) return (board, provider, Results.BadRequest());

        // This just means "full speed ahead"
        return (board, provider, WolverineContinue.Result());
    }

    [WolverineBefore]
    public static IResult Validate(Provider provider, Board board)
    {
        // Check if you can proceed to add the provider to the board
        // This logic is out of the scope of this sample:)
        if (provider.CanJoin(board))
        {
            // Again, this value tells Wolverine to keep processing
            // the HTTP request
            return WolverineContinue.Result();
        }
        
        // No soup for you!
        var problems = new ProblemDetails
        {
            Detail = "Provider is ineligible to join this Board",
            Status = 400,
            Extensions =
            {
                [nameof(StartProviderShift.ProviderId)] = provider.Id,
                [nameof(StartProviderShift.BoardId)] = board.Id
            }
        };

        // Wolverine will execute this IResult
        // and stop all other HTTP processing
        return Results.Problem(problems);
    }
    
    [WolverinePost("/shift/start")]
    // In the tuple that's returned below,
    // The first value of ShiftStartingResponse is assumed by Wolverine to be the 
    // HTTP response body
    // The subsequent IStartStream value is executed as a side effect by Wolverine
    public static (ShiftStartingResponse, IStartStream) Create(StartProviderShift command, Board board, Provider provider)
    {
        var started = new ProviderJoined(board.Id, provider.Id);
        var op = MartenOps.StartStream<ProviderShift>(started);

        return (new ShiftStartingResponse(op.StreamId), op);
    }
}

And there are a few things I’d ask you to notice in the code above:

  1. It’s just one class in one file that’s largely using functional decomposition to establish separation of concerns
  2. Wolverine.Http is able to call the various methods in order from top to bottom, passing the loaded data from LoadAsync() to Validate() and finally on to the Create() method
  3. I didn’t bother with any kind of repository abstraction around the data loading in the first step
  4. The Validate() method is a pure function that’s suitable for easy unit testing of the validation logic
  5. The Create() method is also a pure, synchronous function that’s going to be easy to unit test, as you can do assertions on the events contained in the IStartStream object (see the sketch after this list)
  6. Wolverine’s Marten integration is able to do the actual persistence of the new event stream for ProviderShift for you and deal with all the icky asynchronous junk
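As a sketch of point 5, a state based unit test might look like the code below. I’m assuming here that IStartStream exposes the new StreamId and its initial Events, and that ProviderJoined carries BoardId/ProviderId properties; treat those member names as assumptions:

[Fact]
public void create_starts_a_new_provider_shift_stream()
{
    var board = new Board { Id = Guid.NewGuid() };
    var provider = new Provider { Id = Guid.NewGuid() };
    var command = new StartProviderShift(board.Id, provider.Id);

    var (response, startStream) = StartProviderShiftEndpoint.Create(command, board, provider);

    // The response body should point at the newly started stream
    response.ShiftId.ShouldBe(startStream.StreamId);

    // Assert on the single event that starts the stream
    // (the Events property and the names below are assumptions)
    var joined = startStream.Events.OfType<ProviderJoined>().Single();
    joined.BoardId.ShouldBe(board.Id);
    joined.ProviderId.ShouldBe(provider.Id);
}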

For more context, here’s the (butt ugly) code that Wolverine generates for the HTTP endpoint:

    public class POST_shift_start : Wolverine.Http.HttpHandler
    {
        private readonly Wolverine.Http.WolverineHttpOptions _options;
        private readonly Marten.ISessionFactory _sessionFactory;

        public POST_shift_start(Wolverine.Http.WolverineHttpOptions options, Marten.ISessionFactory sessionFactory) : base(options)
        {
            _options = options;
            _sessionFactory = sessionFactory;
        }



        public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
        {
            await using var documentSession = _sessionFactory.OpenSession();
            await using var querySession = _sessionFactory.QuerySession();
            var (command, jsonContinue) = await ReadJsonAsync<TeleHealth.WebApi.StartProviderShift>(httpContext);
            if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;
            (var board, var provider, var result) = await TeleHealth.WebApi.StartProviderShiftEndpoint.LoadAsync(command, querySession).ConfigureAwait(false);
            if (!(result is Wolverine.Http.WolverineContinue))
            {
                await result.ExecuteAsync(httpContext).ConfigureAwait(false);
                return;
            }

            var validationResult = TeleHealth.WebApi.StartProviderShiftEndpoint.Validate(provider, board);
            if (!(validationResult is Wolverine.Http.WolverineContinue))
            {
                await validationResult.ExecuteAsync(httpContext).ConfigureAwait(false);
                return;
            }

            (var shiftStartingResponse, var startStream) = TeleHealth.WebApi.StartProviderShiftEndpoint.Create(command, board, provider);
            
            // Placed by Wolverine's ISideEffect policy
            startStream.Execute(documentSession);

            ((Wolverine.Http.IHttpAware)shiftStartingResponse).Apply(httpContext);
            
            // Commit any outstanding Marten changes
            await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);

            await WriteJsonAsync(httpContext, shiftStartingResponse);
        }

    }

In the application bootstrapping, I have Wolverine applying transactional middleware automatically:

builder.Host.UseWolverine(opts =>
{
    // more config...
    
    // Automatic usage of transactional middleware as 
    // Wolverine recognizes that an HTTP endpoint or message handler
    // persists data
    opts.Policies.AutoApplyTransactions();
});

And the Wolverine/Marten integration configured as well:

builder.Services.AddMarten(opts =>
    {
        var connString = builder
            .Configuration
            .GetConnectionString("marten");

        opts.Connection(connString);

        // There will be more here later...
    })

    // I added this to enroll Marten in the Wolverine outbox
    .IntegrateWithWolverine()

    // I also added this to opt into events being forward to
    // the Wolverine outbox during SaveChangesAsync()
    .EventForwardingToWolverine();

I’ll even go farther and say that in many cases Wolverine will allow you to establish decent separation of concerns and testability with far less ceremony than is required today with high overhead approaches like the popular Clean Architecture style.

Integration Testing an HTTP Service that Publishes a Wolverine Message

As long term Agile practitioners, the folks behind the whole JasperFx / “Critter Stack” ecosystem explicitly design our tools around the quality of “testability.” Case in point, Wolverine has quite a bit of integration test helpers for testing through message handler execution.

However, while helping a Wolverine user last week, I learned that they were bypassing those built in tools because they wanted to do an integration test of an HTTP service call that publishes a message to Wolverine. That’s certainly going to be a common scenario, so let’s talk about a strategy for reliably writing integration tests that both invoke an HTTP request and can observe the ongoing Wolverine activity to “know” when the “act” part of a typical “arrange, act, assert” test is complete.

In the Wolverine codebase itself, there are a couple of projects that we use to test the Wolverine.Http library:

  1. WolverineWebApi — a web API project with a lot of fake endpoints that try to cover the whole gamut of usage scenarios for Wolverine.Http, including a couple of use cases that publish messages directly from HTTP endpoint handlers to asynchronous message handling inside of Wolverine core
  2. Wolverine.Http.Tests — an xUnit.Net project that contains a mix of unit tests and integration tests through WolverineWebApi and Wolverine.Http itself

Back to the need for integration tests that span work from HTTP service invocations through to Wolverine message processing: Wolverine.Http uses the Alba library (another JasperFx project!) to execute and run assertions against HTTP services. At least at the moment, xUnit.Net is my go-to test runner library, so Wolverine.Http.Tests has this fixture that is shared between test classes:

public class AppFixture : IAsyncLifetime
{
    public IAlbaHost Host { get; private set; }

    public async Task InitializeAsync()
    {
        // Sorry folks, but this is absolutely necessary if you 
        // use Oakton for command line processing and want to 
        // use WebApplicationFactory and/or Alba for integration testing
        OaktonEnvironment.AutoStartHost = true;

        // This is bootstrapping the actual application using
        // its implied Program.Main() set up
        Host = await AlbaHost.For<Program>(x => { });
    }

    public async Task DisposeAsync()
    {
        // Tear down the application host after the test collection completes
        await Host.DisposeAsync();
    }
}

A couple of notes on this approach:

  • I think it’s very important to use the actual application bootstrapping for integration testing rather than maintaining a parallel IoC container configuration just for test automation, as I frequently see out in the wild. That doesn’t preclude customizing the bootstrapping a little bit to substitute fake, stand-in services for problematic external infrastructure.
  • The approach I’m showing here with xUnit.Net does have the effect of making the tests execute serially, which might not be what you want in very large test suites.
  • I think the xUnit.Net shared fixture approach is somewhat confusing, and I always have to review the documentation on it when I try to use it.

There’s also a shared base class for integration HTTP tests called IntegrationContext, with a little bit of it shown below:

[CollectionDefinition("integration")]
public class IntegrationCollection : ICollectionFixture<AppFixture>
{
}

[Collection("integration")]
public abstract class IntegrationContext : IAsyncLifetime
{
    private readonly AppFixture _fixture;

    protected IntegrationContext(AppFixture fixture)
    {
        _fixture = fixture;
    }
    
    // more....

More germane to this particular post, here’s a helper method inside of IntegrationContext I wrote specifically to do integration testing that has to span an HTTP request through to asynchronous Wolverine message handling:

    // This method allows us to make HTTP calls into our system
    // in memory with Alba, but do so within Wolverine's test support
    // for message tracking to both record outgoing messages and to ensure
    // that any cascaded work spawned by the initial command is completed
    // before passing control back to the calling test
    protected async Task<(ITrackedSession, IScenarioResult)> TrackedHttpCall(Action<Scenario> configuration)
    {
        IScenarioResult result = null;

        // The outer part is tying into Wolverine's test support
        // to "wait" for all detected message activity to complete
        var tracked = await Host.ExecuteAndWaitAsync(async () =>
        {
            // The inner part here is actually making an HTTP request
            // to the system under test with Alba
            result = await Host.Scenario(configuration);
        });

        return (tracked, result);
    }

Now, for a sample usage of that test helper, here’s a fake endpoint from WolverineWebApi that I used to prove that Wolverine.Http endpoints can publish messages through Wolverine’s cascading message approach:

    // This would have a string response and a 200 status code
    [WolverinePost("/spawn")]
    public static (string, OutgoingMessages) Post(SpawnInput input)
    {
        var messages = new OutgoingMessages
        {
            new HttpMessage1(input.Name),
            new HttpMessage2(input.Name),
            new HttpMessage3(input.Name),
            new HttpMessage4(input.Name)
        };

        return ("got it", messages);
    }

Psst. Notice how the endpoint method’s signature up above is a synchronous pure function, which is cleaner and easier to unit test than the equivalent functionality would be in other .NET frameworks that would require you to call asynchronous methods on some kind of IMessageBus interface.
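
As a quick illustration of that point, here’s a sketch of a pure unit test against that endpoint method, assuming the containing class is named SpawnEndpoint (the class name isn’t shown above). No host, no HTTP, no Wolverine runtime involved:

[Fact]
public void spawn_endpoint_returns_body_and_cascading_messages()
{
    // Call the endpoint method directly as a plain function
    var (body, messages) = SpawnEndpoint.Post(new SpawnInput("Chris Jones"));

    body.ShouldBe("got it");

    // OutgoingMessages is just a collection of message objects,
    // so plain LINQ is enough for assertions
    messages.OfType<HttpMessage1>().Single().Name.ShouldBe("Chris Jones");
}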

To test this thing end to end, I want to run an HTTP POST to the “/spawn” URL in our application, then prove that four matching messages were published through Wolverine. Here’s the test for that functionality using our earlier TrackedHttpCall() testing helper:

    [Fact]
    public async Task send_cascaded_messages_from_tuple_response()
    {
        // This would fail if the status code != 200 btw
        // This method waits until *all* detectable Wolverine message
        // processing has completed
        var (tracked, result) = await TrackedHttpCall(x =>
        {
            x.Post.Json(new SpawnInput("Chris Jones")).ToUrl("/spawn");
        });

        result.ReadAsText().ShouldBe("got it");

        // "tracked" is a Wolverine ITrackedSession object that lets us interrogate
        // what messages were published, sent, and handled during the testing period
        tracked.Sent.SingleMessage<HttpMessage1>().Name.ShouldBe("Chris Jones");
        tracked.Sent.SingleMessage<HttpMessage2>().Name.ShouldBe("Chris Jones");
        tracked.Sent.SingleMessage<HttpMessage3>().Name.ShouldBe("Chris Jones");
        tracked.Sent.SingleMessage<HttpMessage4>().Name.ShouldBe("Chris Jones");
    }

There you go. In one fell swoop, we’ve got a reliable way to do integration testing against asynchronous behavior in our system that’s triggered by an HTTP service call — including any and all configured ASP.Net Core or Wolverine.Http middleware that’s part of the execution pipeline.

By “reliable” here with regard to integration testing, I want you to think about any reasonably complicated Selenium test suite and how infuriatingly often you get “blinking” tests caused by race conditions between some kind of asynchronous behavior and the test harness trying to make assertions against the browser state. Wolverine’s built in integration test support can eliminate that kind of inconsistent test behavior by removing the race condition entirely, as it tracks all ongoing work and waits for its completion.

Oh, and here’s Chris Jones sacking Joe Burrow in the AFC Championship game to seal the Chiefs win that was fresh in my mind when I originally wrote that code above: