Wolverine’s Improved Azure Service Bus Support

Wolverine 1.6.0 came out today, and one of its main themes was a series of improvements to the Azure Service Bus integration. In addition to the basic support Wolverine already had for messaging with Azure Service Bus queues, topics, and subscriptions, you can now use native scheduled delivery, session identifiers for FIFO delivery, and expanded options for conventional routing topology.

First though, to get started with Azure Service Bus and Wolverine, install the WolverineFx.AzureServiceBus package with the NuGet mechanism of your choice:

dotnet add package WolverineFx.AzureServiceBus

Next, you’ll add just a little bit to your Wolverine bootstrapping like this:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine((context, opts) =>
    {
        // One way or another, you're probably pulling the Azure Service Bus
        // connection string out of configuration
        var azureServiceBusConnectionString = context
            .Configuration
            .GetConnectionString("azure-service-bus");

        // Connect to the broker in the simplest possible way
        opts.UseAzureServiceBus(azureServiceBusConnectionString)
            .AutoProvision()
            .UseConventionalRouting();
    }).StartAsync();
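To make the convention concrete, here's a minimal sketch of a message and matching handler. The types are my own illustration rather than anything from the Wolverine docs, but with UseConventionalRouting() in play, the mere existence of the handler is what tells Wolverine to listen to the queue named after the message type:

public record ValidateInvoiceIsNotLate
{
    public Guid InvoiceId { get; init; }
}

public static class ValidateInvoiceIsNotLateHandler
{
    // Wolverine discovers this handler, and the conventional routing
    // sets up a listener on the queue named after the message type
    public static void Handle(ValidateInvoiceIsNotLate message)
    {
        // Domain logic elided; this sketch is only about the routing
    }
}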

Native Message Scheduling

You can now use native Azure Service Bus scheduled delivery within Wolverine without any explicit configuration beyond what you already do to connect to Azure Service Bus. Putting that into perspective, if you have a message type named ValidateInvoiceIsNotLate that is routed to an Azure Service Bus queue or subscription, you can use this feature:

public async Task SendScheduledMessage(IMessageContext bus, Guid invoiceId)
{
    var message = new ValidateInvoiceIsNotLate
    {
        InvoiceId = invoiceId
    };

    // Schedule the message to be processed in a certain amount
    // of time
    await bus.ScheduleAsync(message, 30.Days());

    // Schedule the message to be processed at a certain time
    await bus.ScheduleAsync(message, DateTimeOffset.Now.AddDays(30));
}

That also applies to scheduled retry error handling if the endpoint is Inline:

using var host = Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Policies.OnException<TimeoutException>()
            // Just retry the message again on the
            // first failure
            .RetryOnce()

            // On the 2nd failure, put the message back into the
            // incoming queue to be retried later
            .Then.Requeue()

            // On the 3rd failure, retry the message again after a configurable
            // cool-off period. This schedules the message
            .Then.ScheduleRetry(15.Seconds())

            // On the next failure, move the message to the dead letter queue
            .Then.MoveToErrorQueue();

    }).StartAsync();

Topic & Subscription Conventions

The original conventional routing with Azure Service Bus just sent to and listened to queues named after the message types within the application. Wolverine 1.6 adds an additional routing convention to publish outgoing messages to topics and to listen for known handled messages with topics and subscriptions. In all cases, you can customize the convention naming as well as any element of the Wolverine listeners or senders, or any of the affected Azure Service Bus topics or subscriptions.

The syntax for this option is shown below (this sample comes from Wolverine's test code, hence the UseAzureServiceBusTesting() helper):

opts.UseAzureServiceBusTesting()
    .UseTopicAndSubscriptionConventionalRouting(convention =>
    {
        // Optionally control every aspect of the convention and
        // its applicability to types
        // as well as overriding any listener, sender, topic, or subscription
        // options
    })

    .AutoProvision()
    .AutoPurgeOnStartup();

Session Identifiers and FIFO Delivery

You can now take advantage of sessions and first-in, first-out (FIFO) queues in Azure Service Bus with Wolverine. To tell Wolverine that an Azure Service Bus queue or subscription should require sessions, use the syntax shown in this internal test:

_host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseAzureServiceBusTesting()
            .AutoProvision().AutoPurgeOnStartup();

        opts.ListenToAzureServiceBusQueue("send_and_receive");
        opts.PublishMessage<AsbMessage1>().ToAzureServiceBusQueue("send_and_receive");

        opts.ListenToAzureServiceBusQueue("fifo1")
            
            // Require session identifiers with this queue
            .RequireSessions()
            
            // This controls the Wolverine handling to force it to process
            // messages sequentially
            .Sequential();
        
        opts.PublishMessage<AsbMessage2>()
            .ToAzureServiceBusQueue("fifo1");

        opts.PublishMessage<AsbMessage3>().ToAzureServiceBusTopic("asb3");
        opts.ListenToAzureServiceBusSubscription("asb3")
            .FromTopic("asb3")
            
            // Require sessions on this subscription
            .RequireSessions(1)
            
            .ProcessInline();
    }).StartAsync();

Wolverine is using the “group-id” nomenclature from the AMQP standard, but for Azure Service Bus this is mapped directly to the SessionId property on the Azure Service Bus client internally.

To publish messages to Azure Service Bus with a session id, you will of course need to supply the session id:

// bus is an IMessageBus
await bus.SendAsync(new AsbMessage3("Red"), new DeliveryOptions { GroupId = "2" });
await bus.SendAsync(new AsbMessage3("Green"), new DeliveryOptions { GroupId = "2" });
await bus.SendAsync(new AsbMessage3("Refactor"), new DeliveryOptions { GroupId = "2" });

You can also send messages with session identifiers through cascading messages as shown in a fake message handler below:

public static IEnumerable<object> Handle(IncomingMessage message)
{
    yield return new Message1().WithGroupId("one");
    yield return new Message2().WithGroupId("one");

    yield return new Message3().ScheduleToGroup("one", 5.Minutes());

    // Long hand
    yield return new Message4().WithDeliveryOptions(new DeliveryOptions
    {
        GroupId = "one"
    });
}

Using Sql Server as a Message Queue with Wolverine

Wolverine 1.4.0 was released last week (along with a smaller 1.5.0, and a medium-sized 1.6.0 coming Monday). The biggest new feature was a brand new option to use Microsoft Sql Server (or Azure Sql) as a durable message transport with Wolverine.

Let’s say your system is already using Sql Server for persistence, you need some durable, asynchronous messaging, and wouldn’t it be nice to not have to introduce any new infrastructure into the mix? Assuming you’ve decided to also use Wolverine, you can get started with this approach by adding the WolverineFx.SqlServer Nuget to your application:

dotnet add package WolverineFx.SqlServer

Here’s a sample application bootstrapping that shows the inclusion and configuration of Sql Server-backed queueing:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine((context, opts) =>
    {
        var connectionString = context
            .Configuration
            .GetConnectionString("sqlserver");
        
        // This adds both Sql Server backed
        // transactional inbox/outbox support
        // and the messaging transport support
        opts
           .UseSqlServerPersistenceAndTransport(connectionString, "myapp")
            
            // Tell Wolverine to build out all necessary queue or scheduled message
            // tables on demand as needed
            .AutoProvision()
            
            // Option that may be helpful in testing, but probably bad
            // in production!
            .AutoPurgeOnStartup();

        // Use this extension method to create subscriber rules
        opts.PublishAllMessages()
            .ToSqlServerQueue("outbound");

        // Use this to set up queue listeners
        opts.ListenToSqlServerQueue("inbound")

            // Optional circuit breaker usage
            .CircuitBreaker(cb =>
            {
                // fine tune the circuit breaker
                // policies here
            })
            
            // Optionally specify how many messages to 
            // fetch into the listener at any one time
            .MaximumMessagesToReceive(50);
    }).StartAsync();

The Sql Server transport is pretty simple; it basically just supports named queues right now. Here are a couple of useful properties of the transport that will hopefully make it more useful to you:

  • Scheduled message delivery is absolutely supported with the Sql Server Transport, and some care was taken to optimize the database load and throughput when using this feature (see the sketch after this list)
  • Sql Server backed queues can be either “buffered in memory” (Wolverine’s message batching) or “durable,” meaning that the queues are integrated into both the transactional inbox and outbox for durable systems
  • Wolverine can build database tables as necessary for the queues, much like it does today for the transactional inbox and outbox. Moreover, the configured queue tables are also part of the stateful resource model in the Critter Stack world, which provides quite a bit of command line management directly within your application.
  • The Sql Server backed queues support Wolverine’s circuit breaker functionality on listeners
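To illustrate the first bullet above, scheduled delivery through a Sql Server backed queue uses the exact same ScheduleAsync() API shown earlier for Azure Service Bus. A minimal sketch, assuming the InvoiceReminder message type is routed to one of the Sql Server queues configured above (the message type itself is hypothetical):

public record InvoiceReminder(Guid InvoiceId);

public static class InvoiceReminderScheduler
{
    public static async Task ScheduleReminder(IMessageBus bus, Guid invoiceId)
    {
        // The scheduled message is persisted in the Sql Server queue tables
        // and only delivered when its time comes due
        await bus.ScheduleAsync(new InvoiceReminder(invoiceId), 1.Hours());
    }
}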

This feature is something that folks have asked about in the past, but I’ve always been reluctant to try it because databases don’t make for great, first class queueing mechanisms. That being said, I’m working with a JasperFx Software client who wanted a more robust local queueing mechanism that could handle much more throughput for scheduled messaging, and thus the new Sql Server transport was born.

There will be a full fledged PostgreSQL backed queue at some point, and it might even be a little more robust based on some preliminary work from a Wolverine contributor, but that’s probably not an immediate priority.

Thoughts on Code Organization in a Post-Hexagonal World

JasperFx Software is up and running, and we’d love to work with you to help make your software development efforts more successful.

I’m one of a number of folks who are actively questioning the conventional wisdom of Hexagonal Architecture approaches. If you’re interested in all the things I think are wrong with the way that enterprise software is built today, you can check out my talk from NDC Oslo this summer (as I suffer through an Austin summer and blissfully remember the feeling of actually wanting a jacket on outside):

That talk was admittedly heavy on all the things I don’t like about long running systems built by aficionados of prescriptive Clean/Onion/Ports & Adapters/iDesign Architecture approaches, and maybe lighter than it should be on “well, what would you do instead?”

To be honest, it’s much easier to be against something than for something, and figuring out exactly what I’m for is a work in progress. Before I get into any specifics, I want to say that the only consistent way to arrive at high quality, well performing code that’s easy to maintain is iteration and adaptation. I’m not saying that upfront planning or design can’t help, but it can also be very wrong in the absence of true feedback. Dating back to my old writings on the now defunct CodeBetter website, I thought long ago that there were a couple of “first causes” for successful software development, of which the only two I remember now are:

  • The paramount importance of rapid and effective feedback mechanisms. That mostly means testing of all sorts, but also having your assumptions about system usage, business logic behavior, and performance qualities confirmed or blown up by feedback from users or real life deployments.
  • Reversibility. Granted, hardly anybody uses this term and you won’t find much about it, but let’s call it roughly your ability to change technical directions on a software project. Some choices are hard to reverse and have to be made early, and other decisions, not so much.

I did a talk on Reversibility at Agile Vancouver in 2013 if you’re interested in ancient history.

Back to the idea that iteration and adaptation over time are the most effective way to arrive at good technical results. Your ability to safely iterate is largely tied to the quality and quantity of your feedback cycles. Your ability to actually adapt as you learn more about how your system should behave, or to adapt to emergent patterns in the codebase that weren’t obvious at first, can be enabled by high reversibility in your system, or hindered by low reversibility.

To break this apart a bit, let’s say we’re all sitting around talking about how to organize and create a codebase that is easy to maintain, pleasant to work in, and generally successful over time. If I were Conan the Barbarian, and you asked me “Conan, what is best in life?”, I would answer with these overarching themes that mostly connect back to the earlier pillars of feedback and reversibility:

  • Keeping closely related code together – this is really simple in theory, but harder in practice. Reusable code might play by different rules, but by and large, I want closely related code for a single feature or use case to live close together in the file system, maybe even in the same file. Code that has to change together or be understood together should live together. Hexagonal architectures encourage folks to organize code by technical stereotypes and to think in terms of horizontal layers, which leads to closely related code being scattered around the codebase. The fallacy of any kind of layered architecture to me is that I very rarely need to reason about a whole technical layer of the system at one time, but very frequently need to reason about all the code for a single feature at one time. All the chatter the last couple of years about Vertical Slice Architecture is, in my opinion, a course correction to the previous decade’s focus on layered architectures.
  • Effective test automation coverage – I think this should be almost self-explanatory. If my code coverage is good, meaning that it’s relatively comprehensive, runs quickly (enough, and that’s subjective), and reliably tells us whether the system is in a shape where it can be safely shipped, then most technical problems that arise can be solved with our testing safety net. Describing what is and is not a desirable test automation strategy is a long discussion by itself, but let’s oversimplify that to “basically not the typical over-reliance on slow, buggy, brittle Selenium tests.” And no, even though Playwright or Cypress may be better tools in the end, it’s the focus on black box end to end testing through the user interface that I think is the problem, more than anything wrong with Selenium itself.
  • Low ceremony code – If iteration and adaptation are really as valuable as I’m claiming, then it really behooves us to have relatively low ceremony approaches to code so we can easily break features apart, introduce new features, or even just understand the code we’ve already written without it being obfuscated by lots of boilerplate. High code ceremony means having a lot of repetitive or manual coding steps that discourage you from changing code after the fact. As a first example, a document database approach like Marten‘s requires a lot less ceremony to introduce new persistent entities or change the structure of existing entities compared to an Object Relational Mapper approach.
  • Modularity between features – The cruel betrayal of hexagonal architectures is that their promise of making infrastructure easy to upgrade through layering is actually a trap. By organizing code primarily by layer first, you can easily arrive at a place where an entire layer may be tightly coupled to a particular set of tooling or approach — and it’s often just too damn expensive to change an entire layer of a large system at one time. Whether you ultimately choose some sort of micro-service architecture or modular monolith, it’s valuable to have loose coupling between features so that you could upgrade the technology in a system one vertical feature at a time. That’s much more feasible than trying to swap out the entire database at one time. In practice, I would describe this as the “vertical slice architecture”, but also trying to minimize shared infrastructure code and shared abstractions between features as that tends to impede modernization efforts in my experience.
  • Keeping infrastructure out of business or workflow logic – I think that at least in the .NET community I live within (and I suspect in the Java & TypeScript worlds as well), folks assume that decoupling business logic from infrastructure means cutting “mockable” abstractions between the business logic and its calls into infrastructure. Instead, I’d push developers to concentrate on isolating business and workflow logic completely away from any calls to infrastructure. That’s a long conversation all by itself, but my recent post on the A-Frame Architecture with Wolverine hopefully explains some of what I mean here.
  • Technologies that are friendly with integration testing – Hey, some technologies are simply easier for developers to work with than others. Given a choice between technology “A” and technology “B”, I’m going to lean toward whichever is easiest to run locally and whichever is easiest to utilize within integration test harnesses — which generally means, how easy or hard is it really to get the infrastructure into the exact right state for the next test? The journey Corey Kaylor and I took with Marten originally came about because of our strong opinions that document databases had much less friction for local development or integration testing than using a relational database and ORM combination — and 8 years later I feel even more strongly about that advantage.
  • Optimized “Time to Login Screen” – Consider this: a new developer just started with your team, or maybe you’re picking up work on a codebase that you haven’t worked with in quite a while. How long does it take for you to go from cloning a fresh copy of the code to successfully running all the tests and the system itself on your local development box? This is also a much longer conversation, but this optimization absolutely impacts how I choose technical infrastructure on projects. It also leads me to prioritize project automation to improve the development experience, because I think that project friction in development and testing absolutely impacts how successful software projects can be.

And now let’s leave the arena of technical choices and dip our toes into just a little bit of mushy people-oriented stuff.

Learning Environment

This has purposely been written from a technical first point of view, but company culture inevitably plays a large part as well. I won’t budge off the idea that adaptation and iteration are crucial, but that’s often impossible if development teams are too tightly micromanaged by product owners, management, or the nebulous “the business.”

For the sake of this post, let’s all pretend that we’re all empowered within our workplaces and we can collectively assert ourselves to improve the technical health of our codebases and basically exert some ownership over our world.

Given my previous point, we should all work on the assumption that you can and will learn new things after a system is started, or even mature, that can later be applied to that system. Moreover, encourage constant learning throughout your teams, and even encourage folks to challenge the current technical direction or development processes. Don’t assume that the way things are at this moment is the way things have to be in perpetuity.

Like I said before, I’m obviously discussing this outside the context of how empowered or how micro-managed the development team is in real life, so let’s also throw in that a team that is empowered with real ownership over their system will outperform a team that is closely micro-managed inside a rigid management structure of some sort.

Understanding Endpoints in Wolverine Messaging

In Wolverine terminology, an “Endpoint” is the configuration time model for any location or mechanism where Wolverine sends or receives messages, including local Wolverine queues within your application. Think of an external resource like a Rabbit MQ exchange or an Amazon SQS queue. The AsyncAPI specification refers to this as a channel, and Wolverine may very well change its nomenclature in the future to be consistent with AsyncAPI. While there are somewhat different configuration options for a Rabbit MQ exchange versus an Azure Service Bus queue, there are some common elements. For the sake of this post (which is mostly ripped out of the Wolverine documentation), endpoints in Wolverine are processed in one of three modes:

  1. Inline – messages are sent immediately, and processed sequentially. While you can parallelize the listeners for better throughput, this is your most likely choice if message delivery order matters to you
  2. Buffered – kind of a batched, in memory mode
  3. Durable – batched in memory, but also backed every step of the way by Wolverine’s transactional inbox/outbox support

Choosing between these three modes is a matter of balancing throughput and delivery guarantees. With that, here’s a deeper dive into the three modes. Do note, though, that not every transport type can support all three modes.
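For quick orientation, here's how you would explicitly opt listening endpoints into each of the three modes, reusing the same options that show up individually throughout this post (a sketch against Azure Service Bus queues; the queue names are arbitrary):

opts.ListenToAzureServiceBusQueue("one").ProcessInline();
opts.ListenToAzureServiceBusQueue("two").BufferedInMemory(new BufferingLimits(1000, 200));
opts.ListenToAzureServiceBusQueue("three").UseDurableInbox(new BufferingLimits(1000, 200));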

Inline Endpoints

Wolverine endpoints come in three basic flavors, with the first being Inline endpoints:

// Configuring a Wolverine application to listen to
// an Azure Service Bus queue with the "Inline" mode
opts.ListenToAzureServiceBusQueue(queueName, q => q.Options.AutoDeleteOnIdle = 5.Minutes()).ProcessInline();

With inline endpoints, as the name implies, calling IMessageBus.SendAsync() immediately sends the message to the external message broker. Likewise, messages received from an external message queue are processed inline before Wolverine acknowledges to the message broker that the message is received.

[Diagram: Inline Endpoints]

In the absence of a durable inbox/outbox, using inline endpoints is “safer” in terms of guaranteed delivery. As you might think, using inline agents can bottleneck the message processing, but that can be alleviated by opting into parallel listeners.
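As a sketch of that parallelization, I believe the relevant option is ListenerCount(), but do check the documentation for your specific transport:

// Keep the stronger inline guarantees, but run five competing
// listeners on the same queue to relieve the bottleneck
opts.ListenToAzureServiceBusQueue("incoming")
    .ProcessInline()
    .ListenerCount(5);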

Buffered Endpoints

In the second Buffered option, messages are queued locally between the actual external broker and the Wolverine handlers or senders.

To opt into buffering, you use this syntax:

// I overrode the buffering limits just to show
// that they exist for "back pressure"
opts.ListenToAzureServiceBusQueue("incoming")
    .BufferedInMemory(new BufferingLimits(1000, 200));

At runtime, you have a local TPL Dataflow queue between the Wolverine callers and the broker:

[Diagram: Buffered Endpoints]

On the listening side, buffered endpoints do support back pressure (of sorts) where Wolverine will stop the actual message listener if too many messages are queued in memory to avoid chewing up your application memory. In transports like Amazon SQS that only support batched message sending or receiving, Buffered is the default mode as that facilitates message batching.

Buffered message sending and receiving can lead to higher throughput, and should be considered for cases where messages are ephemeral or expire and throughput is more important than delivery guarantees. The downside is that messages in the in memory queues can be lost in the case of the application shutting down unexpectedly — but Wolverine tries to “drain” the in memory queues on normal application shutdown.

Durable Endpoints

Durable endpoints behave like buffered endpoints, but also use the durable inbox/outbox message storage to create much stronger guarantees about message delivery and processing. You will need to use Durable endpoints in order to truly take advantage of the persistent outbox mechanism in Wolverine. To opt into making an endpoint durable, use this syntax:

// I overrode the buffering limits just to show
// that they exist for "back pressure"
opts.ListenToAzureServiceBusQueue("incoming")
    .UseDurableInbox(new BufferingLimits(1000, 200));

opts.PublishAllMessages().ToAzureServiceBusQueue("outgoing")
    .UseDurableOutbox();

Or use policies to do this in one fell swoop (which may not be what you actually want, but you could do this!):

opts.Policies.UseDurableOutboxOnAllSendingEndpoints();

As shown below, the Durable endpoint option adds an extra step to the Buffered behavior to add database storage of the incoming and outgoing messages:

[Diagram: Durable Endpoints]

Outgoing messages are deleted from the durable outbox upon successful sending acknowledgements from the external broker. Likewise, incoming messages are deleted from the durable inbox upon successful message execution.

The Durable endpoint option makes Wolverine’s local queueing robust enough to use for cases where you need guaranteed processing of messages, but don’t want to use an external broker.
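As a quick sketch of that last point, a local queue can opt into the same durability, assuming your application has message storage (like the Sql Server transport described above) configured:

// No external broker at all, just a durable local queue
opts.LocalQueue("important")
    .UseDurableInbox();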

Explicitly Route Messages with Wolverine

TL;DR: Wolverine message handler signatures can lead to easier unit testing code than comparable “IHandler of T” frameworks.

Most of the time, I think you can just allow Wolverine to handle message routing for you with some simple configured rules or conventions. However, once in a while you’ll need to override those rules and tell Wolverine exactly where messages should go.

I’ve been working (playing) with Midjourney quite a bit lately trying to make images for the JasperFx Software website. You can try it out to generate images for free, but the image generation gets a lower priority than their paying customers. Using that as an example, let’s say that we were using Wolverine to build our own Midjourney clone. At some point, there’s maybe an asynchronous message handler like this one that takes a request to generate a new image based on the user’s prompt, but routes the actual work to either a higher or lower priority queue based on whether the user is a premium customer:

    public record GenerateImage(string Prompt, Guid ImageId);

    public record ImageRequest(string Prompt, string CustomerId);

    public record ImageGenerated(Guid Id, byte[] Image);

    public class Customer
    {
        public string Id { get; set; }
        public bool PremiumMembership { get; set; }
    }

    public class ImageSaga : Saga
    {
        public Guid Id { get; set; }
        
        public string CustomerId { get; set; }

        public Task Handle(ImageGenerated generated)
        {
            // Look up the customer, figure out how to send the
            // image to their client, then call MarkCompleted()
            // to finish off this saga
            throw new NotImplementedException("Not done yet:)");
        }
    }
    
    public static class GenerateImageHandler
    {
        // I'm assuming the usage of Marten middleware here
        // to handle transactions and the outbox mechanics
        public static async Task HandleAsync(
            ImageRequest request, 
            IDocumentSession session, 
            IMessageBus messageBus,
            CancellationToken cancellationToken)
        {
            var customer = await session
                .LoadAsync<Customer>(request.CustomerId, cancellationToken);

            // I'm starting a new saga to track the state of the 
            // image when we get the callback from the downstream
            // image generation service
            var imageSaga = new ImageSaga();
            session.Insert(imageSaga);

            var outgoing = new GenerateImage(request.Prompt, imageSaga.Id);
            if (customer.PremiumMembership)
            {
                // Send the message to a named endpoint we've configured for the faster
                // processing
                await messageBus.EndpointFor("premium-processing")
                    .SendAsync(outgoing);
            }
            else
            {
                // Send the message to a named endpoint we've configured for slower
                // processing
                await messageBus.EndpointFor("basic-processing")
                    .SendAsync(outgoing);
            }
        }
    }

A couple of notes on the code above:

  • I’m assuming the usage of Marten for persistence (of course), with the auto transactional middleware policy applied
  • I’ve configured a PostgreSQL backed outbox for Wolverine
  • It’s likely a slow process, so I’m assuming there’s going to be an asynchronous callback from the actual image generator later. I’m leveraging Wolverine’s stateful saga support to track the customer of the original image for processing later

Wolverine v1.3 dropped today with a little improvement for exactly this scenario (based on some usage by a JasperFx client), so you can use cascading messages instead of having to deal directly with the IMessageBus service. Let’s rewrite the explicit code up above, but this time try to turn the actual routing logic into a pure function that could be easy to unit test:

    public static class GenerateImageHandler
    {
        // Using Wolverine's compound handlers to remove all the asynchronous
        // junk from the main Handle() method
        public static Task<Customer> LoadAsync(
            ImageRequest request, 
            IDocumentSession session,
            CancellationToken cancellationToken)
        {
            return session.LoadAsync<Customer>(request.CustomerId, cancellationToken);
        }
        
        
        public static (RoutedToEndpointMessage<GenerateImage>, ImageSaga) Handle(
            ImageRequest request, 
            Customer customer)
        {

            // I'm starting a new saga to track the state of the 
            // image when we get the callback from the downstream
            // image generation service
            var imageSaga = new ImageSaga
            {
                // I need to assign the image id in memory
                // to make this all work
                Id = CombGuidIdGeneration.NewGuid()
            };

            var outgoing = new GenerateImage(request.Prompt, imageSaga.Id);
            var destination = customer.PremiumMembership ? "premium-processing" : "basic-processing";
            
            return (outgoing.ToEndpoint(destination), imageSaga);
        }
    }

The handler above is functionally equivalent to the earlier version. It’s not really that much less code, but I think it’s a bit more declarative. What’s most important to me is the potential for unit testing the decision about where the customer requests go, as shown in this fake test:

    [Fact]
    public void should_send_the_request_to_premium_processing_for_premium_customers()
    {
        var request = new ImageRequest("a wolverine ice skating in the country side", "alice");
        var customer = new Customer
        {
            Id = "alice",
            PremiumMembership = true
        };

        var (command, image) = GenerateImageHandler.Handle(request, customer);
        
        command.EndpointName.ShouldBe("premium-processing");
        command.Message.Prompt.ShouldBe(request.Prompt);
        command.Message.ImageId.ShouldBe(image.Id);
        
        image.CustomerId.ShouldBe(request.CustomerId);
    }

What I’m hoping you take away from that code sample is that testing the logic part of the ImageRequest message processing turns into a simple state-based test — meaning that you’re just pushing in the known inputs and measuring the values returned by the method. You’d still need to pair this unit test with a full integration test, but at least you’d know that the routing logic is correct before you wrestle with potential integration issues.

JasperFx Software is Open for Business!

As I’d first announced almost *gulp* two months ago, JasperFx Software LLC is officially open for business. In collaboration with the other core Marten team members (Oskar & Babu), we are planning to productize and provide professional services around the “Critter Stack” tools (Marten and Wolverine) that provide a high level of productivity and robustness for server side .NET applications. We’ll soon be able to offer formal support contract agreements for Marten and Wolverine, and we’re already working with early customers on improving these tools.

We’re also happy to help you as consultants for your own software initiatives on whatever technical stack you happen to be using. Maybe you have a legacy system that could use some modernization love, wish your automated testing infrastructure was more successful, want to adopt Test Driven Development but don’t know where to start, or you’re starting a new system and just want help getting it right. If any of that describes where you’re at, JasperFx can help.

Check out more about JasperFx Software on our About Us and Services pages. To contact us, email sales@jasperfx.net or drop in on the Critter Stack Discord community.

A-Frame Architecture with Wolverine

I’m weaseling into making a second blog post about a code sample that I mostly stole from just to meet my unofficial goal of 2-3 posts a week promoting the Critter Stack.

Last week I wrote a blog post ostensibly about Marten’s compiled query feature that also included this code sample that I adapted from Oskar’s excellent post on vertical slices:

using DailyAvailability = System.Collections.Generic.IReadOnlyList<Booking.RoomReservations.GettingRoomTypeAvailability.DailyRoomTypeAvailability>;
 
namespace Booking.RoomReservations.ReservingRoom;
 
public record ReserveRoomRequest(
    RoomType RoomType,
    DateOnly From,
    DateOnly To,
    string GuestId,
    int NumberOfPeople
);
 
public static class ReserveRoomEndpoint
{
    // More on this in a second...
    public static async Task<DailyAvailability> LoadAsync(
        ReserveRoomRequest request,
        IDocumentSession session)
    {
        // Look up the availability of this room type during the requested period
        return (await session.QueryAsync(new GetRoomTypeAvailabilityForPeriod(request))).ToList();
    }
 
    [WolverinePost("/api/reservations")]
    public static (CreationResponse, StartStream<RoomReservation>) Post(
        ReserveRoomRequest command,
        DailyAvailability dailyAvailability)
    {
        // Make sure there is availability for every day
        if (dailyAvailability.Any(x => x.AvailableRooms == 0))
        {
            throw new InvalidOperationException("Not enough available rooms!");
        }
 
        var reservationId = CombGuidIdGeneration.NewGuid().ToString();
 
        // I copied this, but I'd probably eliminate the record usage in favor
        // of init only properties so you can make the potentially error prone
        // mapping easier to troubleshoot in the future
        // That folks is the voice of experience talking
        var reserved = new RoomReserved(
            reservationId,
            null,
            command.RoomType,
            command.From,
            command.To,
            command.GuestId,
            command.NumberOfPeople,
            ReservationSource.Api,
            DateTimeOffset.UtcNow
        );
 
        return (
            // This would be the response body, and this also helps Wolverine
            // to create OpenAPI metadata for the endpoint
            new CreationResponse($"/api/reservations/{reservationId}"),
             
            // This return value is recognized by Wolverine as a "side effect"
            // that will be processed as part of a Marten transaction
            new StartStream<RoomReservation>(reservationId, reserved)
        );
    }
}

The original intent of that code sample was to show off how the full “critter stack” (Marten & Wolverine together) enables relatively low ceremony code that also promotes a high degree of testability. And it does all of that without requiring developers to invest a lot of time in complicated, prescriptive architectures like a typical Clean Architecture structure.

Specifically today though, I want to zoom in on “testability” and talk about how Wolverine explicitly encourages code that exhibits what Jim Shore famously called the “A Frame Architecture” in its message handlers, but does so with functional decomposition rather than oodles of abstractions and layers.

Using the “A-Frame Architecture”, you roughly want to divide your code into three sets of functionality:

  1. The domain logic for your system, which I would say includes “deciding” what actions to take next.
  2. Infrastructural service providers
  3. Conductor or mediator code that invokes both the infrastructure and domain logic code to decouple the domain logic from infrastructure code

In the message handler above for the `ReserveRoomRequest` command, Wolverine itself is acting as the “glue” around the methods of the HTTP handler code, keeping the domain logic (the ReserveRoomEndpoint.Post() method that “decides” what event should be captured) separate from the raw Marten infrastructure that loads existing data and persists changes back to the database.

To illustrate that in action, here’s the full generated code that Wolverine compiles to actually handle the full HTTP request (with some explanatory annotations I made by hand):

    public class POST_api_reservations : Wolverine.Http.HttpHandler
    {
        private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
        private readonly Marten.ISessionFactory _sessionFactory;

        public POST_api_reservations(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Marten.ISessionFactory sessionFactory) : base(wolverineHttpOptions)
        {
            _wolverineHttpOptions = wolverineHttpOptions;
            _sessionFactory = sessionFactory;
        }



        public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
        {
            await using var documentSession = _sessionFactory.OpenSession();
            var (command, jsonContinue) = await ReadJsonAsync<Booking.RoomReservations.ReservingRoom.ReserveRoomRequest>(httpContext);
            if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;

            // Wolverine has a convention to call methods named
            // "LoadAsync()" before the main endpoint method, and
            // to pipe data returned from this "Before" method
            // to the parameter inputs of the main method
            // as that actually makes sense
            var dailyRoomTypeAvailabilityIReadOnlyList = await Booking.RoomReservations.ReservingRoom.ReserveRoomEndpoint.LoadAsync(command, documentSession).ConfigureAwait(false);

            // Call the "real" HTTP handler method. 
            // The first value is the HTTP response body
            // The second value is a "side effect" that
            // will be part of the transaction around this
            (var creationResponse, var startStream) = Booking.RoomReservations.ReservingRoom.ReserveRoomEndpoint.Post(command, dailyRoomTypeAvailabilityIReadOnlyList);
            
            // Placed by Wolverine's ISideEffect policy
            startStream.Execute(documentSession);

            // This little ugly code helps get the correct
            // status code for creation for those of you 
            // who can't be satisfied by using 200 for everything
            ((Wolverine.Http.IHttpAware)creationResponse).Apply(httpContext);
            
            // Commit any outstanding Marten changes
            await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);

            // Write the response body as JSON
            await WriteJsonAsync(httpContext, creationResponse);
        }

    }

Wolverine itself is acting as the mediator between the infrastructure concerns (loading & persisting data) and the business logic, which in the Wolverine world becomes a pure function that is typically much easier to unit test than code that has direct coupling to infrastructure concerns — even if that coupling is through abstractions.

Testing wise, if I were actually building a real endpoint like that shown above, I would choose to:

  1. Unit test the Post() method itself by “pushing” inputs to it through the room availability and command data, then assert the expected outcome on the event published through the StartStream<Reservation> value returned by that method. That’s pure state-based testing for the easiest possible unit testing (see the sketch after this list). As an aside, I would claim that this method is an example of the Decider pattern for testable event sourcing business logic code.
  2. I don’t think I’d bother testing the LoadAsync() method by itself, but instead I’d opt to use something like Alba to write an end to end test at the HTTP layer to prove out the entire workflow, but only after the unit tests for the Post() method are all passing.
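Here's a rough sketch of that first style of test. I'm assuming that DailyRoomTypeAvailability is a record carrying RoomType, Date, and AvailableRooms (matching the compiled query from the related post), that a RoomType.Single value exists, and that IStartStream exposes its pending events; adjust for the real type definitions:

[Fact]
public void should_start_a_reservation_stream_when_rooms_are_available()
{
    var command = new ReserveRoomRequest(
        RoomType.Single, new DateOnly(2023, 9, 1), new DateOnly(2023, 9, 3), "guest-1", 2);

    // Push in availability data with open rooms on every requested day
    var availability = new List<DailyRoomTypeAvailability>
    {
        new(RoomType.Single, new DateOnly(2023, 9, 1), 3),
        new(RoomType.Single, new DateOnly(2023, 9, 2), 3),
        new(RoomType.Single, new DateOnly(2023, 9, 3), 3)
    };

    var (response, startStream) = ReserveRoomEndpoint.Post(command, availability);

    // Pure state-based assertions on the event that would be appended
    var reserved = startStream.Events.Single().ShouldBeOfType<RoomReserved>();
    reserved.GuestId.ShouldBe(command.GuestId);
}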

Responsibility Driven Design

While the “A-Frame Architecture” metaphor is a relatively recent influence upon my design thinking, I’ve long been a proponent of Responsibility Driven Design (RDD) as explained by Rebecca Wirfs-Brock’s excellent A Brief Tour of Responsibility Driven Design. Don’t dismiss that paper because of its age, because the basic concepts and strategies for identifying different responsibilities in your system as a prerequisite for designing or structuring code put forth in that paper are absolutely useful even today.

Applying Responsibility Driven Design to the sample HTTP endpoint code above, I would say that:

  • The Marten IDocumentSession is a “service provider”
  • The Wolverine generated code acts as a “coordinator”
  • The Post() method is responsible for “deciding” what events should be emitted and persisted. One of the most helpful pieces of advice in RDD is to sometimes treat “deciding” to do an action as a separate responsibility from actually carrying out the action. That can lead to better isolating the decision making logic away from infrastructural concerns for easier testing

It’s also old as hell for software, but one of my personal favorite articles I ever wrote was Object Role Stereotypes for MSDN Magazine way back in 2008.

Wolverine has some new tricks to reduce boilerplate code in HTTP endpoints

Wolverine 1.2.0 rolled out this morning with some enhancements for HTTP endpoints. In the realm of HTTP endpoints, Wolverine’s raison d’être is to finally deliver a development experience to .NET developers that requires very low code ceremony, maximizes testability, and does all of that with good performance. Between some feedback from early adopters and some repetitive boilerplate code I saw doing a code review for a client last week (woot, I’ve actually got paying clients now!), Wolverine.Http got a couple new tricks to speed you up.

First off, here’s a common pattern in HTTP service development: based on a route argument, you first load some kind of entity from persistence. If the entity is not found, you return a 404 status code meaning the resource was not found; otherwise, you continue working against the entity data you just loaded. Here’s a shorthand way of doing that now with Wolverine “compound handlers“:

public record UpdateRequest(string Name, bool IsComplete);

public static class UpdateEndpoint
{
    // Find required Todo entity for the route handler below
    public static Task<Todo?> LoadAsync(int id, IDocumentSession session) 
        => session.LoadAsync<Todo>(id);
    
    [WolverinePut("/todos/{id:int}")]
    public static StoreDoc<Todo> Put(
        // Route argument
        int id,
        
        // The request body
        UpdateRequest request,
        
        // Entity loaded by the method above, 
        // but note the [Required] attribute
        [Required] Todo? todo)
    {
        todo.Name = request.Name;
        todo.IsComplete = request.IsComplete;

        return MartenOps.Store(todo);
    }
}

You’ll notice that the LoadAsync() method looks up the Todo entity for the route parameter, and Wolverine passes its return value on to the matching Todo parameter of the main Put() method. Because of the [Required] attribute, Wolverine.Http will stop processing with a 404 status code if the Todo cannot be found.
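A nice side effect of this shape is that Put() is a synchronous pure function, so a unit test is dead simple. A sketch, assuming a Todo type with settable Id, Name, and IsComplete members:

[Fact]
public void put_updates_the_todo()
{
    var todo = new Todo { Id = 1, Name = "Old name", IsComplete = false };
    var request = new UpdateRequest("New name", true);

    var op = UpdateEndpoint.Put(1, request, todo);

    // State-based assertions on the mutated entity
    todo.Name.ShouldBe("New name");
    todo.IsComplete.ShouldBeTrue();

    // The StoreDoc<Todo> return value is the Marten "side effect"
    // that Wolverine will apply within the transaction
    op.ShouldNotBeNull();
}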

By contrast, here’s some sample code of a higher ceremony alternative that helped spawn this feature in the first place:

Note how in that higher ceremony version the author had to pollute his code with attributes strictly for OpenAPI (Swagger) metadata, because the valid response types cannot be inferred when you’re returning an IResult value that could frankly be just about anything in the world.

In the Wolverine 1.2 version above, Wolverine.Http is able to infer the exact same OpenAPI metadata as the busier Put() method. Also, and I think this is potentially valuable, the Wolverine 1.2 version turns the behavior into a purely synchronous method that is going to be mechanically easier to unit test.

So that’s required data; now let’s turn our attention to Wolverine’s new ProblemDetails support. While there is a Fluent Validation middleware package for Wolverine.Http that supports ProblemDetails in a generic way, I’m seeing usages where you just need to do some explicit validation for an HTTP endpoint. Wolverine 1.2 added this usage:

public class ProblemDetailsUsageEndpoint
{
    public ProblemDetails Before(NumberMessage message)
    {
        // If the number is greater than 5, fail with a
        // validation message
        if (message.Number > 5)
            return new ProblemDetails
            {
                Detail = "Number is bigger than 5",
                Status = 400
            };

        // All good, keep on going!
        return WolverineContinue.NoProblems;
    }

    [WolverinePost("/problems")]
    public static string Post(NumberMessage message)
    {
        return "Ok";
    }
}

public record NumberMessage(int Number);

Wolverine.Http now (as of 1.2.0) has a convention that sees a ProblemDetails return value and treats it as a “continuation” that tells the HTTP handler code what to do next. One of two things will happen:

1. If the ProblemDetails return value is the same instance as WolverineContinue.NoProblems, just keep going
2. Otherwise, write the ProblemDetails out to the HTTP response and exit the HTTP request handling

Just as in the first [Required] usage, Wolverine is able to infer OpenAPI metadata about your endpoint to add a “produces ‘application/problem+json’ with a 400 status code” item. And for those of you who like to get fancier or more specific with your HTTP status code usage, you can happily override that behavior with your own metadata attributes like so:

    // Use 418 as the status code instead
    [ProducesResponseType(typeof(ProblemDetails), 418)]
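And since Before() is a pure method, the continuation logic itself is trivially unit testable. A sketch with Shouldly-style assertions:

[Fact]
public void before_rejects_numbers_bigger_than_five()
{
    var endpoint = new ProblemDetailsUsageEndpoint();

    // Happy path: the sentinel "no problems" value lets the request continue
    endpoint.Before(new NumberMessage(3))
        .ShouldBeSameAs(WolverineContinue.NoProblems);

    // Failure path: a real ProblemDetails stops the request with a 400
    endpoint.Before(new NumberMessage(6))
        .Status.ShouldBe(400);
}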

Compiled Queries with Marten

I had tentatively promised to do a full “critter stack” version of Oskar’s sample application in his Vertical Slices in Practice post last week that used Marten‘s event sourcing support. I started doing that this morning, but quit because it was just coming out too similar to my earlier post this week on Low Ceremony Vertical Slice Architecture with Wolverine.

In Oskar’s sample reservation booking application, there was an HTTP endpoint that handled a ReserveRoomRequest command and emitted a new RoomReserved event for a new RoomReservation event stream. Part of that processing was validating the availability of rooms of the requested type during the time period of the reservation request. Just for reference, here’s my version of Oskar’s ReserveRoomEndpoint:

using DailyAvailability = System.Collections.Generic.IReadOnlyList<Booking.RoomReservations.GettingRoomTypeAvailability.DailyRoomTypeAvailability>;

namespace Booking.RoomReservations.ReservingRoom;

public record ReserveRoomRequest(
    RoomType RoomType,
    DateOnly From,
    DateOnly To,
    string GuestId,
    int NumberOfPeople
);

public static class ReserveRoomEndpoint
{
    // More on this in a second...
    public static async Task<DailyAvailability> LoadAsync(
        ReserveRoomRequest request,
        IDocumentSession session)
    {
        // Look up the availability of this room type during the requested period
        return (await session.QueryAsync(new GetRoomTypeAvailabilityForPeriod(request))).ToList();
    }

    [WolverinePost("/api/reservations")]
    public static (CreationResponse, StartStream<RoomReservation>) Post(
        ReserveRoomRequest command,
        DailyAvailability dailyAvailability)
    {
        // Make sure there is availability for every day
        if (dailyAvailability.Any(x => x.AvailableRooms == 0))
        {
            throw new InvalidOperationException("Not enough available rooms!");
        }

        var reservationId = CombGuidIdGeneration.NewGuid().ToString();

        // I copied this, but I'd probably eliminate the record usage in favor
        // of init only properties so you can make the potentially error prone
        // mapping easier to troubleshoot in the future
    // That folks is the voice of experience talking
        var reserved = new RoomReserved(
            reservationId,
            null,
            command.RoomType,
            command.From,
            command.To,
            command.GuestId,
            command.NumberOfPeople,
            ReservationSource.Api,
            DateTimeOffset.UtcNow
        );

        return (
            // This would be the response body, and this also helps Wolverine
            // to create OpenAPI metadata for the endpoint
            new CreationResponse($"/api/reservations/{reservationId}"),
            
            // This return value is recognized by Wolverine as a "side effect"
            // that will be processed as part of a Marten transaction
            new StartStream<RoomReservation>(reservationId, reserved)
        );
    }
}

For this post, I’d like you to focus on the LoadAsync() method above. That’s utilizing Wolverine’s compound handler technique to split out the data loading so that the actual endpoint Post() method can be a pure function that’s easily unit tested by just “pushing” in the inputs and asserting on either the values returned or the presence of an exception in the validation logic.

Back to that LoadAsync() method. Let’s assume that this HTTP service is going to be under quite a bit of load and it wouldn’t hurt to apply some performance optimization. Or also imagine that the data querying to find the room availability of a certain room type and a time period will be fairly common within the system at large. I’m saying all that to justify the usage of Marten’s compiled query feature as shown below:

public class GetRoomTypeAvailabilityForPeriod : ICompiledListQuery<DailyRoomTypeAvailability>
{
    // Sorry, but this signature is necessary for the Marten mechanics
    public GetRoomTypeAvailabilityForPeriod()
    {
    }

    public GetRoomTypeAvailabilityForPeriod(ReserveRoomRequest request)
    {
        RoomType = request.RoomType;
        From = request.From;
        To = request.To;
    }

    public RoomType RoomType { get; set; }
    public DateOnly From { get; set; }
    public DateOnly To { get; set; }

    public Expression<Func<IMartenQueryable<DailyRoomTypeAvailability>, IEnumerable<DailyRoomTypeAvailability>>>
        QueryIs()
    {
        return q => q.Where(day => day.RoomType == RoomType && day.Date >= From && day.Date <= To);
    }
}

First of all, this is Marten’s version of the Query Object pattern, which enables you to share the query definition in declarative ways throughout the codebase. (I’ve heard other folks call this a “Specification,” but that name is overloaded a bit too much in the software development world.) Removing duplication is certainly a good thing all by itself. Doing so in a way that eliminates the need for extra repository abstractions is also a win in my book.

Secondly, by using the “compiled query”, Marten is able to cache the whole execution plan in memory (technically it’s generating code at runtime) for faster runtime execution. The dirty, barely recognized fact in .NET development today is that the act of parsing Linq statements and converting the intermediate query model into actionable SQL and glue code is not cheap. Marten compiled queries sidestep all that preliminary parsing junk and let you skip right to the execution part.

It’s a possibly underused and under-appreciated feature within Marten, but compiled queries are a great way to optimize your system’s performance and possibly clean up code duplication in simple ways.
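Because the query definition is just an object, reusing it from any other feature is a one-liner (a sketch; session here is any Marten IQuerySession):

// Somewhere else in the codebase that needs the same question answered
var availability = await session.QueryAsync(
    new GetRoomTypeAvailabilityForPeriod(request));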

Low Ceremony Vertical Slice Architecture with Wolverine

TL;DR: Wolverine can enable you to write testable code and achieve separation of concerns in your server side code with far less code ceremony than typical Clean Architecture type approaches.

I’m part of the mini-backlash against heavyweight, prescriptively layered architectural patterns like the various flavors of Hexagonal Architecture. I even did a whole talk on that subject at NDC Oslo this year:

Instead, I’m a big fan of keeping closely related code closer together with something like what Jimmy Bogard coined as Vertical Slices. Conveniently enough, I happen to think that Wolverine is a good fit for that style.

From a conference talk I did early last year, I started to build out a sample “TeleHealth Portal” system using the full “critter stack” with both Marten for persistence and event sourcing and Wolverine for everything else. Inside of this fictional TeleHealth system there will be a web service that adds a healthcare provider to an active board of related appointment requests (as an example, you might have a board for pediatric appointments in the state of Texas). When this web service executes, it needs to:

  1. Find the related information about the requested, active Board and the Provider
  2. Validate that the provider in question is able to join the active board based on various business rules like “is this provider licensed in this particular state and for some specialty?”. If the validation fails, the web service should return the validation message per the ProblemDetails specification
  3. Assuming the validation is good, start a new event stream with Marten for a ProviderShift that will track what the provider does during their active shift on that board for that specific day

I’ll need to add a little more context afterward for some application configuration, but here’s that functionality in one single Wolverine.Http endpoint class — with the assumption that the heavy duty business logic for validating the provider & board assignment is in the business domain model:

public record StartProviderShift(Guid BoardId, Guid ProviderId);
public record ShiftStartingResponse(Guid ShiftId) : CreationResponse("/shift/" + ShiftId);

public static class StartProviderShiftEndpoint
{
    // This would be called before the method below
    public static async Task<(Board, Provider, IResult)> LoadAsync(StartProviderShift command, IQuerySession session)
    {
        // You could get clever here and batch the queries to Marten
        // here, but let that be a later optimization step
        var board = await session.LoadAsync<Board>(command.BoardId);
        var provider = await session.LoadAsync<Provider>(command.ProviderId);

        if (board == null || provider == null) return (board, provider, Results.BadRequest());

        // This just means "full speed ahead"
        return (board, provider, WolverineContinue.Result());
    }

    [WolverineBefore]
    public static IResult Validate(Provider provider, Board board)
    {
        // Check if you can proceed to add the provider to the board
        // This logic is out of the scope of this sample:)
        if (provider.CanJoin(board))
        {
            // Again, this value tells Wolverine to keep processing
            // the HTTP request
            return WolverineContinue.Result();
        }
        
        // No soup for you!
        var problems = new ProblemDetails
        {
            Detail = "Provider is ineligible to join this Board",
            Status = 400,
            Extensions =
            {
                [nameof(StartProviderShift.ProviderId)] = provider.Id,
                [nameof(StartProviderShift.BoardId)] = board.Id
            }
        };

        // Wolverine will execute this IResult
        // and stop all other HTTP processing
        return Results.Problem(problems);
    }
    
    [WolverinePost("/shift/start")]
    // In the tuple that's returned below,
    // The first value of ShiftStartingResponse is assumed by Wolverine to be the 
    // HTTP response body
    // The subsequent IStartStream value is executed as a side effect by Wolverine
    public static (ShiftStartingResponse, IStartStream) Create(StartProviderShift command, Board board, Provider provider)
    {
        var started = new ProviderJoined(board.Id, provider.Id);
        var op = MartenOps.StartStream<ProviderShift>(started);

        return (new ShiftStartingResponse(op.StreamId), op);
    }
}

And there’s a few things I’d ask you to notice in the code above:

  1. It’s just one class in one file that’s largely using functional decomposition to establish separation of concerns
  2. Wolverine.Http is able to call the various methods in order from top to bottom, passing the loaded data from LoadAsync() to Validate() and finally on to the Create() method
  3. I didn’t bother with any kind of repository abstraction around the data loading in the first step
  4. The Validate() method is a pure function that’s suitable for easy unit testing of the validation logic
  5. The Create() method is also a pure, synchronous function that’s going to be easy to unit test, as you can do assertions on the events contained in the IStartStream object (see the sketch after this list)
  6. Wolverine’s Marten integration is able to do the actual persistence of the new event stream for ProviderShift for you and deal with all the icky asynchronous junk
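To make point #5 concrete, here's a rough sketch of a state-based test against Create(). The Provider and Board construction is hypothetical since those domain types aren't shown here, and I'm assuming that IStartStream exposes its pending events and that ProviderJoined carries BoardId and ProviderId members:

[Fact]
public void create_starts_a_provider_shift_stream()
{
    // Hypothetical construction; the real types live in the TeleHealth domain model
    var board = new Board { Id = Guid.NewGuid() };
    var provider = new Provider { Id = Guid.NewGuid() };
    var command = new StartProviderShift(board.Id, provider.Id);

    var (response, op) = StartProviderShiftEndpoint.Create(command, board, provider);

    // Assert on the single ProviderJoined event in the new stream
    var joined = op.Events.Single().ShouldBeOfType<ProviderJoined>();
    joined.BoardId.ShouldBe(board.Id);
    joined.ProviderId.ShouldBe(provider.Id);
    response.ShouldNotBeNull();
}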

For more context, here’s the (butt ugly) code that Wolverine generates for the HTTP endpoint:

    public class POST_shift_start : Wolverine.Http.HttpHandler
    {
        private readonly Wolverine.Http.WolverineHttpOptions _options;
        private readonly Marten.ISessionFactory _sessionFactory;

        public POST_shift_start(Wolverine.Http.WolverineHttpOptions options, Marten.ISessionFactory sessionFactory) : base(options)
        {
            _options = options;
            _sessionFactory = sessionFactory;
        }



        public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
        {
            await using var documentSession = _sessionFactory.OpenSession();
            await using var querySession = _sessionFactory.QuerySession();
            var (command, jsonContinue) = await ReadJsonAsync<TeleHealth.WebApi.StartProviderShift>(httpContext);
            if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;
            (var board, var provider, var loadResult) = await TeleHealth.WebApi.StartProviderShiftEndpoint.LoadAsync(command, querySession).ConfigureAwait(false);
            if (!(loadResult is Wolverine.Http.WolverineContinue))
            {
                await loadResult.ExecuteAsync(httpContext).ConfigureAwait(false);
                return;
            }

            var validationResult = TeleHealth.WebApi.StartProviderShiftEndpoint.Validate(provider, board);
            if (!(validationResult is Wolverine.Http.WolverineContinue))
            {
                await validationResult.ExecuteAsync(httpContext).ConfigureAwait(false);
                return;
            }

            (var shiftStartingResponse, var startStream) = TeleHealth.WebApi.StartProviderShiftEndpoint.Create(command, board, provider);
            
            // Placed by Wolverine's ISideEffect policy
            startStream.Execute(documentSession);

            ((Wolverine.Http.IHttpAware)shiftStartingResponse).Apply(httpContext);
            
            // Commit any outstanding Marten changes
            await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);

            await WriteJsonAsync(httpContext, shiftStartingResponse);
        }

    }

In the application bootstrapping, I have Wolverine applying transactional middleware automatically:

builder.Host.UseWolverine(opts =>
{
    // more config...
    
    // Automatic usage of transactional middleware as 
    // Wolverine recognizes that an HTTP endpoint or message handler
    // persists data
    opts.Policies.AutoApplyTransactions();
});

And the Wolverine/Marten integration configured as well:

builder.Services.AddMarten(opts =>
    {
        var connString = builder
            .Configuration
            .GetConnectionString("marten");

        opts.Connection(connString);

        // There will be more here later...
    })

    // I added this to enroll Marten in the Wolverine outbox
    .IntegrateWithWolverine()

    // I also added this to opt into events being forward to
    // the Wolverine outbox during SaveChangesAsync()
    .EventForwardingToWolverine();

I’ll even go farther and say that in many cases Wolverine will allow you to establish decent separation of concerns and testability with far less ceremony than is required today with high overhead approaches like the popular Clean Architecture style.