Integration Testing an HTTP Service that Publishes a Wolverine Message

As long-time Agile practitioners, we on the JasperFx / “Critter Stack” team explicitly design our tools around the quality of “testability.” Case in point: Wolverine ships with quite a few integration test helpers for testing through message handler execution.

However, while I was helping a Wolverine user last week, they told me that they were bypassing those built-in tools because they wanted to write an integration test of an HTTP service call that publishes a message to Wolverine. That’s certainly going to be a common scenario, so let’s talk about a strategy for reliably writing integration tests that both invoke an HTTP request and observe the ongoing Wolverine activity to “know” when the “act” part of a typical “arrange, act, assert” test is complete.

In the Wolverine codebase itself, there are a couple of projects that we use to test the Wolverine.Http library:

  1. WolverineWebApi — a web API project with a lot of fake endpoints that try to cover the whole gamut of usage scenarios for Wolverine.Http, including a couple of use cases of publishing messages directly from HTTP endpoint handlers to asynchronous message handling inside of Wolverine core
  2. Wolverine.Http.Tests — an xUnit.Net project that contains a mix of unit tests and integration tests through WolverineWebApi and Wolverine.Http itself

Back to the need to write integration tests that span work from HTTP service invocations through to Wolverine message processing: Wolverine.Http uses the Alba library (another JasperFx project!) to execute and run assertions against HTTP services. At least at the moment, xUnit.Net is my go-to test runner, so Wolverine.Http.Tests has this fixture that is shared between test classes:

public class AppFixture : IAsyncLifetime
{
    public IAlbaHost Host { get; private set; }

    public async Task InitializeAsync()
    {
        // Sorry folks, but this is absolutely necessary if you 
        // use Oakton for command line processing and want to 
        // use WebApplicationFactory and/or Alba for integration testing
        OaktonEnvironment.AutoStartHost = true;

        // This is bootstrapping the actual application using
        // its implied Program.Main() set up
        Host = await AlbaHost.For<Program>(x => { });
    }

    public async Task DisposeAsync()
    {
        // Shut the application host down cleanly when the
        // test collection is finished
        await Host.DisposeAsync();
    }
}

A couple notes on this approach:

  • I think it’s very important to use the actual application bootstrapping for integration testing rather than maintaining a parallel IoC container configuration for test automation, as I frequently see out in the wild. That doesn’t preclude customizing that bootstrapping a little bit to substitute fake, stand-in services for problematic external infrastructure.
  • The approach I’m showing here with xUnit.Net does have the effect of making the tests execute serially, which might not be what you want in very large test suites.
  • I think the xUnit.Net shared fixture approach is somewhat confusing, and I always have to review the documentation on it when I try to use it.

There’s also a shared base class for HTTP integration tests called IntegrationContext, with a little bit of it shown below:

[CollectionDefinition("integration")]
public class IntegrationCollection : ICollectionFixture<AppFixture>
{
}

[Collection("integration")]
public abstract class IntegrationContext : IAsyncLifetime
{
    private readonly AppFixture _fixture;

    protected IntegrationContext(AppFixture fixture)
    {
        _fixture = fixture;
    }
    
    // more....

More germane to this particular post, here’s a helper method inside of IntegrationContext that I wrote specifically for integration tests that have to span an HTTP request through to asynchronous Wolverine message handling:

    // This method allows us to make HTTP calls into our system
    // in memory with Alba, but do so within Wolverine's test support
    // for message tracking to both record outgoing messages and to ensure
    // that any cascaded work spawned by the initial command is completed
    // before passing control back to the calling test
    protected async Task<(ITrackedSession, IScenarioResult)> TrackedHttpCall(Action<Scenario> configuration)
    {
        IScenarioResult result = null;

        // The outer part is tying into Wolverine's test support
        // to "wait" for all detected message activity to complete
        var tracked = await Host.ExecuteAndWaitAsync(async () =>
        {
            // The inner part here is actually making an HTTP request
            // to the system under test with Alba
            result = await Host.Scenario(configuration);
        });

        return (tracked, result);
    }

Now, for a sample usage of that test helper, here’s a fake endpoint from WolverineWebApi that I used to prove that Wolverine.Http endpoints can publish messages through Wolverine’s cascading message approach:

    // This would have a string response and a 200 status code
    [WolverinePost("/spawn")]
    public static (string, OutgoingMessages) Post(SpawnInput input)
    {
        var messages = new OutgoingMessages
        {
            new HttpMessage1(input.Name),
            new HttpMessage2(input.Name),
            new HttpMessage3(input.Name),
            new HttpMessage4(input.Name)
        };

        return ("got it", messages);
    }

Psst. Notice that the endpoint method’s signature above is a synchronous pure function, which is cleaner and easier to unit test than the equivalent functionality in other .NET frameworks that would have required you to call asynchronous methods on some kind of IMessageBus interface.
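That pure function claim is easy to demonstrate: a plain unit test can call the endpoint method directly and assert on the returned tuple with no host, no Alba scenario, and no mocks. A minimal sketch, where the SpawnEndpoint class name is hypothetical (the original code only shows the static Post method), and remembering that Wolverine’s OutgoingMessages is just a collection of the cascaded message objects:

```csharp
[Fact]
public void spawn_endpoint_returns_body_and_cascaded_messages()
{
    // Call the static endpoint method directly; no HTTP infrastructure needed
    var (body, messages) = SpawnEndpoint.Post(new SpawnInput("Chris Jones"));

    body.ShouldBe("got it");

    // One cascaded message per HttpMessage* type
    messages.Count.ShouldBe(4);
    messages.OfType<HttpMessage1>().Single().Name.ShouldBe("Chris Jones");
}
```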

To test this thing, I want to run an HTTP POST to the “/spawn” URL in our application, then prove that four matching messages were published through Wolverine. Here’s the test for that functionality using our earlier TrackedHttpCall() testing helper:

    [Fact]
    public async Task send_cascaded_messages_from_tuple_response()
    {
        // This would fail if the status code != 200 btw
        // This method waits until *all* detectable Wolverine message
        // processing has completed
        var (tracked, result) = await TrackedHttpCall(x =>
        {
            x.Post.Json(new SpawnInput("Chris Jones")).ToUrl("/spawn");
        });

        result.ReadAsText().ShouldBe("got it");

        // "tracked" is a Wolverine ITrackedSession object that lets us interrogate
        // what messages were published, sent, and handled during the testing period
        tracked.Sent.SingleMessage<HttpMessage1>().Name.ShouldBe("Chris Jones");
        tracked.Sent.SingleMessage<HttpMessage2>().Name.ShouldBe("Chris Jones");
        tracked.Sent.SingleMessage<HttpMessage3>().Name.ShouldBe("Chris Jones");
        tracked.Sent.SingleMessage<HttpMessage4>().Name.ShouldBe("Chris Jones");
    }

There you go. In one fell swoop, we’ve got a reliable way to do integration testing against asynchronous behavior in our system that’s triggered by an HTTP service call — including any and all configured ASP.Net Core or Wolverine.Http middleware that’s part of the execution pipeline.

By “reliable” here in regard to integration testing, I want you to think about any reasonably complicated Selenium test suite and how infuriatingly often you get “blinking” tests caused by race conditions between some kind of asynchronous behavior and the test harness trying to make assertions against the browser state. Wolverine’s built-in integration test support can eliminate that kind of inconsistent test behavior by removing the race condition, as it tracks all ongoing work for completion.

Oh, and here’s Chris Jones sacking Joe Burrow in the AFC Championship game to seal the Chiefs win that was fresh in my mind when I originally wrote that code above:

Customizing Return Value Behavior in Wolverine for Profit and Fun

I’m frequently and logically asked how Wolverine differs from the plethora of existing tooling in the .NET space for asynchronous messaging or in-memory mediator tools. I’d argue that the biggest difference is how you, as a user of Wolverine, go about writing the message handler (or HTTP endpoint) code that will be called by Wolverine at runtime.

All of the existing frameworks that I’m currently aware of are what I call “IHandler of T” frameworks, meaning that one way or another, you have to constrain your message/event/command handling code behind some kind of mandatory framework interface like this:

public interface IHandler<T>
{
    Task HandleAsync(T message, IMessageContext context, CancellationToken cancellation);
}

Wolverine takes a very different approach to your message handler code by allowing you to write the simplest possible handler message signature while Wolverine dynamically creates its “IHandler of T” behind the scenes. By and large, Wolverine is trying to allow you to write your message handler code as pure functions that can be much easier and more effective to unit test than traditional .NET message handler code.

Jumping to an example that came up in the Wolverine Discord room last week, let’s say that you’re building a dashboard kind of application where the server side will be constantly broadcasting update messages to the client over web sockets using SignalR, and you’re using Wolverine on the back end. Wolverine’s built-in cascading messages feature would be nice for this exact kind of system, but Wolverine doesn’t yet have a SignalR transport (it will at some point this year). Instead, let’s customize Wolverine’s execution pipeline a little bit so we can return web socket bound messages directly from our Wolverine message handlers without having to inject any kind of SignalR service (or a gateway around it).

First though, let’s say that all client bound messages from the server to the client will implement this little interface:

// Setting this up for usage with Redux style
// state management on the client side
public interface IClientMessage
{
    [JsonPropertyName("type")]
    public string TypeName => GetType().Name;
}

// This is just a "nullo" message that might
// be useful to mean "don't send anything in this case"
public record NoClientMessage : IClientMessage;

In the end, what I want to do is create a policy in Wolverine such that any return value from a Wolverine message handler or HTTP endpoint method that implements IClientMessage or IEnumerable<IClientMessage> will be sent via WebSockets instead of Wolverine trying to route these values through messaging. That leads us to handler messages like this:

public record CountUpdated(int Value) : IClientMessage;

public record IncrementCount;

public static class SomeUpdateHandler
{
    public static int Count = 0;

    // We're trying to teach Wolverine to send CountUpdated
    // return values via WebSockets instead of async
    // message routing
    public static CountUpdated Handle(IncrementCount command)
    {
        Count++;
        return new CountUpdated(Count);
    }
}
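And because Handle() is just a static method over in-memory state, unit testing it is about as simple as it gets. A quick sketch of what that test might look like:

```csharp
[Fact]
public void incrementing_the_count_returns_the_updated_value()
{
    // Reset the static state for a deterministic test
    SomeUpdateHandler.Count = 0;

    var updated = SomeUpdateHandler.Handle(new IncrementCount());

    // The IClientMessage return value carries the new count
    updated.Value.ShouldBe(1);
}
```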

So now onto the actual SignalR integration. I’ll add this simplistic SignalR Hub type:

public class BroadcastHub : Hub
{
    public Task SendBatchAsync(IClientMessage[] messages)
    {
        return Clients.All.SendAsync("Updates", JsonSerializer.Serialize(messages));
    }
}

Having built one of these applications before and helped troubleshoot problems in several others, I know that it’s frequently useful to debounce or throttle updates to keep the JavaScript client responsive. To that end, I’m going to add this little custom class that will be registered in our system as a singleton:

public class Broadcaster : IDisposable
{
    private readonly BroadcastHub _hub;
    private readonly ActionBlock<IClientMessage[]> _publishing;
    private readonly BatchingBlock<IClientMessage> _batching;

    public Broadcaster(BroadcastHub hub)
    {
        _hub = hub;
        _publishing = new ActionBlock<IClientMessage[]>(messages => _hub.SendBatchAsync(messages),
            new ExecutionDataflowBlockOptions
            {
                EnsureOrdered = true,
                MaxDegreeOfParallelism = 1
            });

        // BatchingBlock is a Wolverine internal building block that's
        // purposely public for this kind of usage.
        // This will do the "debounce" for us
        _batching = new BatchingBlock<IClientMessage>(250, _publishing);
    }


    public void Dispose()
    {
        _hub.Dispose();
        _batching.Dispose();
    }

    public Task Post(IClientMessage? message)
    {
        return message is null or NoClientMessage 
            ? Task.CompletedTask 
            : _batching.SendAsync(message);
    }

    public async Task PostMany(IEnumerable<IClientMessage> messages)
    {
        foreach (var message in messages.Where(x => x != null))
        {
            if (message is NoClientMessage) continue;
            
            await _batching.SendAsync(message);
        }
    }
}

Switching to the application bootstrapping in the Program.Main() method of this application, I’m going to register a couple services:

builder.Services.AddSignalR();
builder.Services.AddSingleton<Broadcaster>();

And add a SignalR route against the WebApplication for the system:

app.MapHub<BroadcastHub>("/updates");
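For completeness, here’s roughly what a .NET client subscribing to those batched updates might look like using the standard Microsoft.AspNetCore.SignalR.Client package. The URL here is a placeholder for whatever your application actually listens on, and a browser client would do the equivalent with the @microsoft/signalr package:

```csharp
// Hypothetical .NET client listening for the batched updates
var connection = new HubConnectionBuilder()
    .WithUrl("https://localhost:5001/updates")
    .Build();

// BroadcastHub serializes each batch of IClientMessage objects
// to JSON before sending it on the "Updates" method
connection.On<string>("Updates", json =>
{
    Console.WriteLine(json);
});

await connection.StartAsync();
```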

Now, we need to craft a policy for Wolverine that will teach it how to generate code for our desired behavior for IClientMessage return values:

public class BroadcastClientMessages : IChainPolicy
{
    public void Apply(IReadOnlyList<IChain> chains, GenerationRules rules, IContainer container)
    {
        // We're going to look through all known message handler and HTTP endpoint chains
        // and see where there's any return values of IClientMessage or IEnumerable<IClientMessage>
        // and apply our custom return value handling
        foreach (var chain in chains)
        {
            foreach (var message in chain.ReturnVariablesOfType<IClientMessage>())
            {
                message.UseReturnAction(v =>
                {
                    var call = MethodCall.For<Broadcaster>(x => x.Post(null!));
                    call.Arguments[0] = message;

                    return call;
                });
            }

            foreach (var messages in chain.ReturnVariablesOfType<IEnumerable<IClientMessage>>())
            {
                messages.UseReturnAction(v =>
                {
                    var call = MethodCall.For<Broadcaster>(x => x.PostMany(null!));
                    call.Arguments[0] = messages;

                    return call;
                });
            }
        }
    }
}

And add that new policy to our Wolverine application like so:

builder.Host.UseWolverine(opts =>
{
    // Other configuration...
    opts.Policies.Add<BroadcastClientMessages>();
});

Finally, let’s see the results. For the SomeUpdateHandler type that handles the `IncrementCount` message, Wolverine is now generating this code:

    public class IncrementCountHandler1900628703 : Wolverine.Runtime.Handlers.MessageHandler
    {
        private readonly WolverineWebApi.WebSockets.Broadcaster _broadcaster;

        public IncrementCountHandler1900628703(WolverineWebApi.WebSockets.Broadcaster broadcaster)
        {
            _broadcaster = broadcaster;
        }



        public override System.Threading.Tasks.Task HandleAsync(Wolverine.Runtime.MessageContext context, System.Threading.CancellationToken cancellation)
        {
            var incrementCount = (WolverineWebApi.WebSockets.IncrementCount)context.Envelope.Message;
            var outgoing1 = WolverineWebApi.WebSockets.SomeUpdateHandler.Handle(incrementCount);

            // Notice that the return value from the message handler
            // is being broadcast to the outgoing SignalR
            // Hub
            return _broadcaster.Post(outgoing1);
        }

    }

And there it is: message handlers that need to send messages via WebSockets can now be coded as pure functions that are generally much easier to test and carry less code noise than the equivalent functionality in basically every other .NET messaging framework.

Custom Error Handling Middleware for Wolverine.Http

Just a short one for today, mostly to answer a question that came in earlier this week.

When using Wolverine.Http to expose HTTP endpoint services that end up capturing Marten events, you might have an endpoint coded like this one from the Wolverine tests that takes in a command message and tries to start a new Marten event stream for the Order aggregate:

    [Transactional] // This can be omitted if you use auto-transactions
    [WolverinePost("/orders/create4")]
    public static (OrderStatus, IStartStream) StartOrder4(StartOrderWithId command)
    {
        var items = command.Items.Select(x => new Item { Name = x }).ToArray();

        // This is unique to Wolverine (we think)
        var startStream = MartenOps
            .StartStream<Order>(command.Id,new OrderCreated(items));

        return (
            new OrderStatus(startStream.StreamId, false),
            startStream
        );
    }

Where the command looks like this:

public record StartOrderWithId(Guid Id, string[] Items);

In the HTTP endpoint above, we’re:

  1. Creating a new event stream for Order that uses the stream/order id sent in the command
  2. Returning a response body of type OrderStatus to the caller
  3. Using Wolverine’s Marten integration to also return an IStartStream object that integrated middleware will apply to Marten’s IDocumentSession (more on this in my next post because we think this is a big deal by itself).
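And again, because StartOrder4() is a pure function, the happy path is trivially unit testable before we ever touch HTTP or PostgreSQL. A quick sketch, assuming that OrderStatus exposes the stream id through a first member named OrderId (that property name is an assumption; only the constructor usage appears above):

```csharp
[Fact]
public void start_order_creates_a_stream_with_the_command_id()
{
    var command = new StartOrderWithId(Guid.NewGuid(), new[] { "Socks", "Shoes" });

    // Call the endpoint method directly, no web host required
    var (status, startStream) = MarkItemEndpoint.StartOrder4(command);

    // Both the response body and the Marten side effect
    // should use the id supplied by the command
    status.OrderId.ShouldBe(command.Id);    // OrderId name is assumed
    startStream.StreamId.ShouldBe(command.Id);
}
```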

Great, easy enough, right? Just to add some complexity: if the caller happens to send up the same order id additional times, then Marten will throw an `ExistingStreamIdCollisionException`, noting that no, you can’t create a new stream with that id because one already exists.

Marten’s behavior helps protect the data from duplication, but what about trying to make the HTTP response a little nicer by catching that exception automatically, and returning a ProblemDetails body with a 400 Bad Request status code to denote exactly what happened?

While you actually could do that globally with a bit of ASP.Net Core middleware, that middleware applies everywhere at runtime and not just on the specific routes that could throw the exception. I’m not sure how big a deal this is to many of you, but ASP.Net Core middleware would also be unable to have any impact on the OpenAPI descriptions of your endpoints, and it would be up to you to explicitly add attributes on your endpoints to document the error handling response.

Fortunately, Wolverine’s middleware strategy will allow you to specifically target only the relevant routes and also add OpenAPI descriptions to your API’s generated documentation. And do so in a way that is arguably more efficient than the ASP.Net Core middleware approach at runtime anyway.

Jumping right into the deep end of the pool (I’m helping take my little ones swimming this afternoon and maybe thinking ahead), I’m going to build that policy like so:

public class StreamCollisionExceptionPolicy : IHttpPolicy
{
    private bool shouldApply(HttpChain chain)
    {
        // TODO -- and Wolverine needs a utility method on IChain to make this declarative
        // for future middleware construction
        return chain
            .HandlerCalls()
            .SelectMany(x => x.Creates)
            .Any(x => x.VariableType.CanBeCastTo<IStartStream>());
    }
    
    public void Apply(IReadOnlyList<HttpChain> chains, GenerationRules rules, IContainer container)
    {
        // Find *only* the HTTP routes where the route tries to create new Marten event streams
        foreach (var chain in chains.Where(shouldApply))
        {
            // Add the middleware on the outside
            chain.Middleware.Insert(0, new CatchStreamCollisionFrame());
            
            // Alter the OpenAPI metadata to register the ProblemDetails
            // path
            chain.Metadata.ProducesProblem(400);
        }
    }

    // Make the codegen easier by doing most of the work in this one method
    public static Task RespondWithProblemDetails(ExistingStreamIdCollisionException e, HttpContext context)
    {
        var problems = new ProblemDetails
        {
            Detail = $"Duplicated id '{e.Id}'",
            Extensions =
            {
                ["Id"] = e.Id
            },
            Status = 400 // The default is 500, so watch this
        };

        return Results.Problem(problems).ExecuteAsync(context);
    }
}

// This is the actual middleware that's injecting some code
// into the runtime code generation
internal class CatchStreamCollisionFrame : AsyncFrame
{
    public override void GenerateCode(GeneratedMethod method, ISourceWriter writer)
    {
        writer.Write("BLOCK:try");
        
        // Write the inner code here
        Next?.GenerateCode(method, writer);
        
        writer.FinishBlock();
        writer.Write($@"
BLOCK:catch({typeof(ExistingStreamIdCollisionException).FullNameInCode()} e)
await {typeof(StreamCollisionExceptionPolicy).FullNameInCode()}.{nameof(StreamCollisionExceptionPolicy.RespondWithProblemDetails)}(e, httpContext);
return;
END

");
    }
}

And apply the middleware to the application like so:

app.MapWolverineEndpoints(opts =>
{
    // more configuration for HTTP...
    opts.AddPolicy<StreamCollisionExceptionPolicy>();
});

And lastly, here’s a test using Alba that just verifies the behavior end to end by trying to create a new event stream with the same id multiple times:

    [Fact]
    public async Task use_stream_collision_policy()
    {
        var id = Guid.NewGuid();
        
        // First time should be fine
        await Scenario(x =>
        {
            x.Post.Json(new StartOrderWithId(id, new[] { "Socks", "Shoes", "Shirt" })).ToUrl("/orders/create4");
        });
        
        // Second time hits an exception from stream id collision
        var result2 = await Scenario(x =>
        {
            x.Post.Json(new StartOrderWithId(id, new[] { "Socks", "Shoes", "Shirt" })).ToUrl("/orders/create4");
            x.StatusCodeShouldBe(400);
        });

        // And let's verify that we got what we expected for the ProblemDetails
        // in the HTTP response body of the 2nd request
        var details = result2.ReadAsJson<ProblemDetails>();
        Guid.Parse(details.Extensions["Id"].ToString()).ShouldBe(id);
        details.Detail.ShouldBe($"Duplicated id '{id}'");
    }

To make it a little clearer what’s going on, Wolverine can always show you the generated code it uses for your HTTP endpoints, like this (I reformatted the code for legibility with Rider):

public class POST_orders_create4 : HttpHandler
{
    private readonly WolverineHttpOptions _options;
    private readonly ISessionFactory _sessionFactory;

    public POST_orders_create4(WolverineHttpOptions options, ISessionFactory sessionFactory) : base(options)
    {
        _options = options;
        _sessionFactory = sessionFactory;
    }

    public override async Task Handle(HttpContext httpContext)
    {
        await using var documentSession = _sessionFactory.OpenSession();
        try
        {
            var (command, jsonContinue) = await ReadJsonAsync<StartOrderWithId>(httpContext);
            if (jsonContinue == HandlerContinuation.Stop)
            {
                return;
            }

            var (orderStatus, startStream) = MarkItemEndpoint.StartOrder4(command);

            // Placed by Wolverine's ISideEffect policy
            startStream.Execute(documentSession);

            // Commit any outstanding Marten changes
            await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);

            await WriteJsonAsync(httpContext, orderStatus);
        }
        catch (ExistingStreamIdCollisionException e)
        {
            await StreamCollisionExceptionPolicy.RespondWithProblemDetails(e, httpContext);
        }
    }
}

Critter Stack Futures

Starting this month, I think I’m going to blog openly about the ideas and directions that the “Critter Stack” team (Marten and Wolverine) and community are considering for where things go next. We’d love to hear any feedback or further suggestions about where this goes (and here’s the link to the Critter Stack Discord channel). I think we’re being mostly reactive to “what are our users struggling with?” and “what are folks telling us about what’s stopping them from adopting Marten or Wolverine?”

For the immediate future, I’m trying to get my act together to actually have a real business structure and be ready to start offering support or consulting contracts. I’m personally catching up on open Marten bugs, as I’ve been mostly busy with Wolverine lately and not helping the rest of the team much.

Strategic Things

Here are the big themes we’ve identified that we need to push Marten and/or Wolverine into contention for world domination (or at least entice enough paying users to make a living off of our passion projects here). The first four bullets are happening this year, and the rest is just a fanciful vision board.

  1. 1st class subscriptions in Marten — This is the ability to register to listen to feeds of event data from Marten. Maybe you want to stream event data through to Kafka for additional processing. Maybe you want to update an ElasticSearch index for your data. Definitely you want this to work with all the reliability, monitoring, and error handling capabilities that you’d expect.
  2. Linq improvements in Marten — Try to utilize the JSONPath operators available in recent versions of PostgreSQL that would re-enable the usage of GIN/GIST indexes that was somewhat lost in V4. Try to greatly improve Marten’s Linq querying within document sub-collections. I hate working on the Linq parser in Marten, but I hate seeing Linq-related bugs filter in even more as folks try more and more things, so it’s time.
  3. Massive scalability of event projections — This is likely to be a new alternative to the current Marten async daemon that is able to load balance asynchronous projection processing across multiple application nodes. This improved daemon will be built with a combination of Marten and Wolverine as an add on product, likely with some kind of dual usage commercial license and official support license.
  4. Zero downtime, blue/green deployment for event sourcing — Closely related to the previous bullet. Everything you need to be able to blue/green deploy your application using Marten event sourcing without any down time. So, you’ll have support for versioned projections and zero downtime projection rebuilds as well. This will most likely be part of the commercial add on package for the Critter Stack
  5. User interface for monitoring or managing the Critter Stack — Just a place holder. Not sure what the exact functionality would be here. And this will absolutely be a dual license commercial product of some sort.
  6. Sql Server backed event store — While the document database feature set in Marten is unlikely to ever be completely ported to Sql Server, it may well be possible to support Marten’s event sourcing on a Sql Server foundation
  7. Marten for the JVM??? — Just stay tuned on this one down the line. In all likelihood this would mean running Marten’s event store in a separate process, then using some kind of language neutral proxy mechanism (gRPC?) to capture events. Tentatively the idea for projections is to let users use TypeScript/JavaScript to define projections that will run in WebAssembly.
  8. AOT compilation / Serverless optimized recipe — There’s no chance in the world that any combination of Marten or Wolverine can work with AOT compilation without some significant changes on our side. I think it’s going to end up requiring some level of code generation to get there. I’m not clear about whether or not enough folks care about this right now to justify the effort.

Tactical Things

And also a list of hopefully quick wins that could help spur Critter Stack adoption:

  1. Open Telemetry support in Marten. We have this in Wolverine already, but not yet for Marten activity
  2. Ability to raise events from projections in Marten, or issue commands as aggregates are updated, or I don’t know what else yet. All I know is that right now this seems to be coming up a lot in user questions in Discord
  3. Document versioning in Marten
  4. Kafka transport in Wolverine
  5. Amazon SNS support in Wolverine
  6. Strong-typed identifiers. Folks have been asking for this periodically in Marten. When it exists in Marten, I’d also like to pursue being able to exploit strong typed identifiers in Wolverine middleware to “know” when to load entities from identifiers automatically
  7. Expanding multi-tenancy support in Wolverine. Today Wolverine has a robust model for Marten-backed multi-tenancy in the message handling, but I’d like to see this extended to detecting tenant identifiers automatically in HTTP requests. I’d also like to extend the multi-tenancy support to EF Core backed persistence and SQL Server backed storage.
  8. Lightweight partial updates in Marten. This is the ability to issue updates to part of a Marten document without first loading the entire document. We’ve had this functionality from the very beginning, but it depends on Javascript support in PostgreSQL through the PLv8 extension that’s in a tenuous state. The new model would use native PostgreSQL features in place of the older JavaScript model.

Waddaya think?

Anything above sound compelling to you? Have questions about how some of that would work? Wanna make suggestions about how it should be done? Have *gasp* completely different suggestions for what we should improve instead in Marten/Wolverine to make it more attractive to your shop? Fire away in comments here or the Critter Stack Discord channel.

Critter Stack Multi-Tenancy

To go along with the Wolverine 1.0 release, I should probably be blogging some introductory, getting started type content. To hell with that though, let’s jump right into the deep end of the pool today! To the best of my knowledge, no other messaging tooling in .NET can span inbox/outbox integration across this kind of multi-tenanted database storage.

Let’s say that you want to build a system using the full Critter Stack (Marten + Wolverine) and you need to support multi-tenancy through a database per tenant strategy for some mix of scalability and data segregation. You’d also like to use some of the goodies that come from the Critter Stack that are going to make your development life a whole lot easier and more productive.

Taking everything from a sample application from the Wolverine documentation, we get all of that with this setup:

using Marten;
using MultiTenantedTodoWebService;
using Oakton;
using Oakton.Resources;
using Wolverine;
using Wolverine.Http;
using Wolverine.Marten;

var builder = WebApplication.CreateBuilder(args);

// You do need a "master" database for operations that are
// independent of a specific tenant. Wolverine needs this
// for some of its state tracking
var connectionString = "Host=localhost;Port=5433;Database=postgres;Username=postgres;password=postgres";

// Adding Marten for persistence
builder.Services.AddMarten(m =>
    {
        // With multi-tenancy through a database per tenant
        m.MultiTenantedDatabases(tenancy =>
        {
            // You would probably be pulling the connection strings out of configuration,
            // but it's late in the afternoon and I'm being lazy building out this sample!
            tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant1;Username=postgres;password=postgres", "tenant1");
            tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant2;Username=postgres;password=postgres", "tenant2");
            tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant3;Username=postgres;password=postgres", "tenant3");
        });
        
        m.DatabaseSchemaName = "mttodo";
    })
    .IntegrateWithWolverine(masterDatabaseConnectionString:connectionString);

// This tells both Wolverine & Marten to make
// sure all necessary database schema objects across
// all the tenant databases are up and ready to go
// on application startup
builder.Services.AddResourceSetupOnStartup();

// Wolverine usage is required for WolverineFx.Http
builder.Host.UseWolverine(opts =>
{
    // This middleware will apply to the HTTP
    // endpoints as well
    opts.Policies.AutoApplyTransactions();
    
    // Setting up the outbox on all locally handled
    // background tasks
    opts.Policies.UseDurableLocalQueues();
});

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

// Let's add in Wolverine HTTP endpoints to the routing tree
app.MapWolverineEndpoints();

return await app.RunOaktonCommands(args);

In the code above, we’ve configured Marten to use a database per tenant for its own storage. We also call IntegrateWithWolverine() to add Wolverine inbox/outbox support — which Wolverine 1.0 is able to set up for each and every known tenant database.

And that’s a lot of setup, but now check out this sample usage of some message handlers in a fake Todo service:

public static class TodoCreatedHandler
{
    public static void Handle(DeleteTodo command, IDocumentSession session)
    {
        session.Delete<Todo>(command.Id);
    }
    
    public static TodoCreated Handle(CreateTodo command, IDocumentSession session)
    {
        var todo = new Todo { Name = command.Name };
        session.Store(todo);

        return new TodoCreated(todo.Id);
    }
    
    // Do something in the background, like assign it to someone,
    // send out emails or texts, alerts, whatever
    public static void Handle(TodoCreated created, ILogger logger)
    {
        // Structured logging parameters instead of string concatenation
        logger.LogInformation("Got a new TodoCreated event for {Id}", created.Id);
    }    
}

You’ll note that at no point in any of that code do you see anything to do with a tenant, or opening a Marten session to the right tenant, or propagating the tenant id from the initial CreateTodo command to the cascaded TodoCreated event. That’s because Wolverine is happily able to do that for you behind the scenes in Envelope metadata as long as the original command was tagged to a tenant id.

To do that, see these examples of invoking Wolverine from HTTP endpoints:

public static class TodoEndpoints
{
    [WolverineGet("/todoitems/{tenant}")]
    public static async Task<IReadOnlyList<Todo>> Get(string tenant, IDocumentStore store)
    {
        // Await the query so the session isn't disposed
        // before the query actually executes
        await using var session = store.QuerySession(tenant);
        return await session.Query<Todo>().ToListAsync();
    }

    [WolverineGet("/todoitems/{tenant}/complete")]
    public static async Task<IReadOnlyList<Todo>> GetComplete(string tenant, IDocumentStore store)
    {
        await using var session = store.QuerySession(tenant);
        return await session.Query<Todo>().Where(x => x.IsComplete).ToListAsync();
    }

    // Wolverine can infer the 200/404 status codes for you here
    // so there's no code noise just to satisfy OpenAPI tooling
    [WolverineGet("/todoitems/{tenant}/{id}")]
    public static async Task<Todo?> GetTodo(int id, string tenant, IDocumentStore store, CancellationToken cancellation)
    {
        using var session = store.QuerySession(tenant);
        var todo = await session.LoadAsync<Todo>(id, cancellation);

        return todo;
    }

    [WolverinePost("/todoitems/{tenant}")]
    public static async Task<IResult> Create(string tenant, CreateTodo command, IMessageBus bus)
    {
        // At the 1.0 release, you would have to use Wolverine as a mediator
        // to get the full multi-tenancy feature set.
        
        // That hopefully changes in 1.1
        var created = await bus.InvokeForTenantAsync<TodoCreated>(tenant, command);

        return Results.Created($"/todoitems/{tenant}/{created.Id}", created);
    }

    [WolverineDelete("/todoitems/{tenant}")]
    public static async Task Delete(
        string tenant, 
        DeleteTodo command, 
        IMessageBus bus)
    {
        // Invoke inline for the specified tenant
        await bus.InvokeForTenantAsync(tenant, command);
    }
}

I think in Wolverine 1.1, or at least a future incremental release, there will be a way to register automatic tenant id detection from an HTTP request, but for 1.0 developers need to explicitly detect the tenant id and pass it along to Wolverine themselves. Once Wolverine knows the tenant id though, it will happily propagate it automatically to downstream messages. And in all cases, Wolverine is able to use the correct inbox/outbox storage in the current tenant database during message processing, so you still have a single, native transaction spanning the application functionality and the inbox/outbox storage.

Wolverine’s Middleware Strategy is a Different Animal

I saw someone on Twitter today asking to hear how Wolverine differs from MediatR. First off, Wolverine is a much bigger, more ambitious tool than MediatR is trying to be and covers far more use cases. Secondly though, Wolverine’s middleware strategy has some significant advantages over the equivalent strategies in other .NET tools.

Wolverine got its 1.0 release yesterday and I’m hoping that gives the tool a lot more visibility and earns it some usage over the next year. Today I wanted to show how the internal runtime model in general and specifically Wolverine’s approach to middleware is quite different than other .NET tools — and certainly try to sell you on why I think Wolverine’s approach is valuable.

For the “too long, didn’t read” crowd, the Wolverine middleware approach has these advantages over other similar .NET tools:

  1. Potentially more efficient at runtime. Fewer object allocations, fewer dictionary lookups, and just a lot less runtime logic in general
  2. Able to be more selectively applied on a message by message or HTTP endpoint by endpoint basis
  3. Can show you the exact code that explains exactly what middleware is applied and how it’s used on each individual message or HTTP route

For now, let’s consider the common case of wanting to use Fluent Validation to validate HTTP inputs to web service endpoints. If the validation is successful, continue processing, but if the validation fails, use the ProblemDetails specification to instead return a response denoting the validation errors and a status code of 400 to denote an invalid request.

To do that with Wolverine, first start with an HTTP web service project and add a reference to the WolverineFx.Http.FluentValidation Nuget. When you’re configuring Wolverine HTTP endpoints, add this single line to your application bootstrapping code (follow a sample usage here):

app.MapWolverineEndpoints(opts =>
{
    // more configuration for HTTP...
    
    // Opting into the Fluent Validation middleware from
    // Wolverine.Http.FluentValidation
    opts.UseFluentValidationProblemDetailMiddleware();
});

The code above also adds some automatic discovery of Fluent Validation validators and registration into your application’s IoC container, but with a little twist as Wolverine is guessing at the desired IoC lifetime to try to make some runtime optimizations (i.e., a validator type that has no constructor arguments is assumed to be stateless so it doesn’t have to be recreated between requests).
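To make that lifetime guessing concrete, here’s a hedged sketch of the two kinds of validators the discovery might encounter. The Address, IZipCodeService, and ZipCodeValidator types are all hypothetical, purely for illustration:

```csharp
using FluentValidation;

public record Address(string City, string ZipCode);

// Hypothetical service dependency, just for illustration
public interface IZipCodeService
{
    bool IsValid(string zipCode);
}

// A validator with no constructor arguments holds no per-request state,
// so the discovery can safely register it once as a singleton
public class AddressValidator : AbstractValidator<Address>
{
    public AddressValidator()
    {
        RuleFor(x => x.City).NotNull();
    }
}

// A validator with constructor dependencies may be stateful per request,
// so it would get a scoped lifetime instead
public class ZipCodeValidator : AbstractValidator<Address>
{
    public ZipCodeValidator(IZipCodeService zipCodes)
    {
        RuleFor(x => x.ZipCode)
            .Must(zip => zipCodes.IsValid(zip))
            .WithMessage("Not a valid zip code");
    }
}
```

The singleton registration for the stateless case is what lets Wolverine skip re-resolving the validator on every request.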

And now for a simple endpoint, some request models, and Fluent Validation validator types to test this out:

public class ValidatedEndpoint
{
    [WolverinePost("/validate/customer")]
    public static string Post(CreateCustomer customer)
    {
        return "Got a new customer";
    }
    
    [WolverinePost("/validate/user")]
    public static string Post(CreateUser user)
    {
        return "Got a new user";
    }
}

public record CreateCustomer
(
    string FirstName,
    string LastName,
    string PostalCode
)
{
    public class CreateCustomerValidator : AbstractValidator<CreateCustomer>
    {
        public CreateCustomerValidator()
        {
            RuleFor(x => x.FirstName).NotNull();
            RuleFor(x => x.LastName).NotNull();
            RuleFor(x => x.PostalCode).NotNull();
        }
    }
}

public record CreateUser
(
    string FirstName,
    string LastName,
    string PostalCode,
    string Password
)
{
    public class CreateUserValidator : AbstractValidator<CreateUser>
    {
        public CreateUserValidator()
        {
            RuleFor(x => x.FirstName).NotNull();
            RuleFor(x => x.LastName).NotNull();
            RuleFor(x => x.PostalCode).NotNull();
        }
    }
    
    public class PasswordValidator : AbstractValidator<CreateUser>
    {
        public PasswordValidator()
        {
            RuleFor(x => x.Password).Length(8);
        }
    }
}

And with that, let’s check out our functionality with these tests from the Wolverine codebase itself, which use Alba to test ASP.Net Core endpoints in memory:


    [Fact]
    public async Task one_validator_happy_path()
    {
        var createCustomer = new CreateCustomer("Creed", "Humphrey", "11111");

        // Succeeds w/ a 200
        var result = await Scenario(x =>
        {
            x.Post.Json(createCustomer).ToUrl("/validate/customer");
            x.ContentTypeShouldBe("text/plain");
        });
    }

    [Fact]
    public async Task one_validator_sad_path()
    {
        var createCustomer = new CreateCustomer(null, "Humphrey", "11111");

        var results = await Scenario(x =>
        {
            x.Post.Json(createCustomer).ToUrl("/validate/customer");
            x.ContentTypeShouldBe("application/problem+json");
            x.StatusCodeShouldBe(400);
        });

        // Just proving that we have ProblemDetails content
        // in the request
        var problems = results.ReadAsJson<ProblemDetails>();
    }

What those tests prove is that the middleware is happily applying the Fluent Validation validators before the main request handler, and aborting the request handling with a ProblemDetails response if there are any validation failures.

At this point, an experienced .NET web developer is saying “so what, I can do this with [other .NET tool] today” — and you’d be right. Before I dive into what Wolverine does differently that makes its middleware both more efficient and potentially easier to understand, let’s take a detour into some value that Wolverine adds that other similar .NET tools cannot match.

Automatic OpenAPI Configuration FTW!

Of course we live in a world where there’s a reasonable expectation that HTTP web services today will be well described by OpenAPI metadata, and the potential usage of a ProblemDetails response should be reflected in that metadata. Not to worry though, because Wolverine’s middleware infrastructure is also able to add OpenAPI metadata automatically as a nice bonus. Here’s a screenshot of the Swashbuckle visualization of the OpenAPI metadata for the /validate/customer endpoint from earlier:

Just so you’re keeping score, I’m not aware of any other ASP.Net Core tool that can derive OpenAPI metadata as part of its middleware strategy.

But there’s too much magic!

So there’s some working code that auto-magically applies middleware to your HTTP endpoint code through some type matching, assembly discovery, conventions, and ZOMG there’s magic in there! How will I ever possibly unwind any of this or understand what Wolverine is doing?

Wolverine’s runtime model depends on generating code to be the “glue” between your code, any middleware usage, and Wolverine or ASP.Net Core itself. There are other advantages to that model, but a big one is that Wolverine can reveal, and to some degree even explain, what it’s doing at runtime through the generated code.

For the /validate/customer endpoint shown earlier with the Fluent Validation middleware applied, here’s the code that Wolverine generates (after a quick IDE reformatting to make it less ugly):

    public class POST_validate_customer : Wolverine.Http.HttpHandler
    {
        private readonly WolverineHttpOptions _options;
        private readonly IValidator<CreateCustomer> _validator;
        private readonly IProblemDetailSource<CreateCustomer> _problemDetailSource;

        public POST_validate_customer(WolverineHttpOptions options, IValidator<CreateCustomer> validator, IProblemDetailSource<CreateCustomer> problemDetailSource) : base(options)
        {
            _options = options;
            _validator = validator;
            _problemDetailSource = problemDetailSource;
        }

        public override async Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
        {
            var (customer, jsonContinue) = await ReadJsonAsync<CreateCustomer>(httpContext);
            if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;
            var result = await FluentValidationHttpExecutor.ExecuteOne<CreateCustomer>(_validator, _problemDetailSource, customer).ConfigureAwait(false);
            if (!(result is WolverineContinue))
            {
                await result.ExecuteAsync(httpContext).ConfigureAwait(false);
                return;
            }

            var result_of_Post = ValidatedEndpoint.Post(customer);
            await WriteString(httpContext, result_of_Post);
        }
    }

The code above will clearly show you the exact ordering and usage of any middleware. In the case of the Fluent Validation middleware, Wolverine is able to alter the code generation a little depending on the validator registrations:

  • With no matching IValidator registrations for the request model (CreateCustomer or CreateUser, for example), the Fluent Validation middleware is not applied at all
  • With one IValidator, you see the code above
  • With multiple IValidator strategies for a request type, Wolverine generates slightly more complicated code to iterate through the strategies and combine the validation results
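For that multiple-validator case, the generated code conceptually boils down to something like this hand-written sketch (an approximation, not the actual codegen):

```csharp
using FluentValidation;
using FluentValidation.Results;

public static class MultipleValidatorSketch
{
    // Run every registered IValidator<T> for the request type and
    // accumulate the failures. An empty failure list means "continue
    // to the endpoint"; otherwise the middleware writes the 400
    // ProblemDetails response and stops the request
    public static async Task<ValidationResult> ValidateAll<T>(
        IEnumerable<IValidator<T>> validators, T request)
    {
        var failures = new List<ValidationFailure>();
        foreach (var validator in validators)
        {
            var result = await validator.ValidateAsync(request);
            failures.AddRange(result.Errors);
        }

        return new ValidationResult(failures);
    }
}
```

In the single-validator case that loop collapses away entirely, which is exactly the kind of optimization the codegen model makes cheap.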

Some other significant points about that ugly, generated code up above:

  1. That object is only created once and directly tied to the ASP.Net Core route at runtime
  2. When that HTTP route is executed, there’s no usage of an IoC container whatsoever at runtime because the exact execution is set in the generated code with all the necessary references already made
  3. This runtime strategy leads to fewer object allocations, dictionary lookups, and service location calls than the equivalent functionality in other popular .NET tools which will lead to better performance and scalability

Wolverine 1.0 is Out!

All of the Nugets are named WolverineFx.* even though the libraries and namespaces are all Wolverine.*. Wolverine was well underway when I found out someone is squatting on the name “Wolverine” in Nuget, and that’s why there’s a discrepancy in the naming.

As of today, Wolverine is officially at 1.0 and available on Nuget! As far as I am concerned, this absolutely means that Wolverine is ready for production usage, the public APIs should be considered to be stable, and the documentation is reasonably complete.

To answer the obvious question of “what is it?”, Wolverine is a set of libraries that can be used in .NET applications as an:

  1. In memory “mediator”
  2. In memory command bus that can be very helpful for asynchronous processing
  3. Asynchronous messaging backbone for your application

And when combined with Marten to form the full fledged “critter stack,” I’m hoping that it grows to become the singular best platform for CQRS with Event Sourcing on any development platform.

Here’s the links:

Wolverine is significantly different (I think) from existing tools in the .NET space in that it delivers a developer experience that results in much less ceremony and friction — and I think that’s vital to enable teams to better iterate and adapt their code over time in ways you can’t efficiently do in higher ceremony tools.

“There are neither beginnings nor endings to the Wheel of Time. But it was a beginning.”

Robert Jordan

Now, software projects (and their accompanying documentation websites) are never complete, only abandoned. There’ll be bugs, holes in the current functionality, and feature requests as users hit usages and permutations that haven’t yet been considered in Wolverine. That said, I have every intention of sticking with Wolverine and its sibling Marten project as Oskar, Babu, and I try to build a services/product model company of some sort around the tools.

And Wolverine 1.0.1 will surely follow soon as folks inevitably find issues with the initial version. Software projects like Wolverine are far more satisfying if you can think of them as a marathon and a continuous process.

The long meandering path here

My superpower compared to many of my peers is that I have a much longer attention span than most. It means that I have from time to time been the living breathing avatar of the sunk cost fallacy, but it’s also meant that Wolverine got to see the light of day.

To rewind a bit:

  • FubuMVC was an alternative OSS web development framework that started in earnest around 2009 with the idea of being low ceremony with a strong middleware approach
  • Around 2013 I helped build a minimal service bus tool called FubuTransportation that basically exposed the FubuMVC runtime approach to asynchronous messaging
  • About 2015 after FubuMVC had clearly failed and what became known as .NET Core made .NET a *lot* more attractive again, I wrote some “vision” documents about what a next generation FubuMVC would look like on .NET Core that tried to learn from fubu’s technical and performance shortcomings
  • In late 2015, I helped build the very first version of Marten for internal usage at my then employer
  • In 2016 I started working in earnest on that reboot of FubuMVC and called it “Jasper,” but focused mostly on the service bus aspect of it
  • Jasper was released as 1.0 during the very worst of the pandemic in 2020 and was more or less abandoned by me and everyone else
  • Marten started gaining a lot of steam and took a big step forward in the giant 4.0 release in late 2021
  • Jasper was restarted in 2022 partially as a way to extend Marten into a full blown CQRS platform (don’t worry, both Marten & Wolverine are plenty useful by themselves)
  • The rebooted Jasper was renamed Wolverine in late 2022 and announced in a DotNetRocks episode and a JetBrains webinar
  • And finally a 1.0 in June 2023 after the reboot inevitably took longer than I’d hoped

A whole lot of gratitude and thanks

Wolverine has been gestating a long time and descends from the earlier FubuMVC efforts, so there have been a lot of folks who have contributed or helped guide the shape of Wolverine along the way. Here’s a very incomplete list:

  • Chad Myers and Josh Flanagan started FubuMVC way back when and some of the ideas about how to apply middleware and even some code has survived even until now
  • Josh Arnold was my partner with FubuMVC for a long time
  • Corey Kaylor was part of the core FubuMVC team, wrote FubuTransportation with me, and was part of getting Marten off the ground
  • Oskar and Babu have worked with me on the Marten team for years, and they took the brunt of Marten support and the recent 6.0 release while I was focused on Wolverine
  • Khalid Abuhakmeh has helped quite a bit with both Marten & Wolverine strategy over the years and contributed all of the graphics for the projects
  • My previous boss Denys Grozenok did a lot to test early Wolverine, encouraged the work, and contributed quite a few ideas around usability
  • Eric J. Smith made some significant suggestions that streamlined the API usability of Wolverine

And quite a few other folks have contributed code fixes, extensions, or taken the time to write bug reproductions that go a long way toward making a project like Wolverine better.

Http Services with Wolverine

For folks who have followed me for a while: yes, I’m back with yet another alternative HTTP web service framework in Wolverine.Http — but I swear that I learned a whole slew of lessons from FubuMVC‘s failure a decade ago. Wolverine.Http as shown here is very much a citizen within the greater ASP.Net Core ecosystem and happily interoperates with a great deal of Minimal API and the rest of ASP.Net Core.

For folks who have no idea what a “fubu” is, Wolverine’s HTTP add on shown here is potentially a way to build more efficient web services in .NET with much less boilerplate and noise code than the equivalent functionality in ASP.Net Core MVC or Minimal API. And especially less code ceremony and indirection than you get with the usage of any kind of mediator tooling in conjunction with MVC or Minimal API.

Server side applications are frequently built with some mixture of HTTP web services, asynchronous processing, and asynchronous messaging. Wolverine by itself can help you with the asynchronous processing through its local queue functionality, and it certainly covers all common asynchronous messaging requirements.

For a simplistic example, let’s say that we’re inevitably building a “Todo” application where we want a web service endpoint that allows our application to create a new Todo entity, save it to a database, and raise a TodoCreated event that will be handled later, off to the side, by Wolverine.

Even in this simple example usage, that endpoint should be developed such that the creation of the new Todo entity and the corresponding TodoCreated event message either succeed or fail together to avoid putting the system into an inconsistent state. That’s a perfect use case for Wolverine’s transactional outbox. While the Wolverine team believes that Wolverine’s outbox functionality is significantly easier to use outside of the context of message handlers than other .NET messaging tools, it’s still easiest to use within the context of a message handler, so let’s just build out a Wolverine message handler for the CreateTodo command:

public class CreateTodoHandler
{
    public static (Todo, TodoCreated) Handle(CreateTodo command, IDocumentSession session)
    {
        var todo = new Todo { Name = command.Name };
        
        // Just telling Marten that there's a new entity to persist,
        // but I'm assuming that the transactional middleware in Wolverine is
        // handling the asynchronous persistence outside of this handler
        session.Store(todo);

        return (todo, new TodoCreated(todo.Id));
    }   
}

Okay, but we still need to expose a web service endpoint for this functionality. We could utilize Wolverine within an MVC controller as a “mediator” tool like so:

public class TodoController : ControllerBase
{
    [HttpPost("/todoitems")]
    [ProducesResponseType(201, Type = typeof(Todo))]
    public async Task<ActionResult> Post(
        [FromBody] CreateTodo command, 
        [FromServices] IMessageBus bus)
    {
        // Delegate to Wolverine and capture the response
        // returned from the handler
        var todo = await bus.InvokeAsync<Todo>(command);
        return Created($"/todoitems/{todo.Id}", todo);
    }    
}

Or we could do the same thing with Minimal API:

// app in this case is a WebApplication object
app.MapPost("/todoitems", async (CreateTodo command, IMessageBus bus) =>
{
    var todo = await bus.InvokeAsync<Todo>(command);
    return Results.Created($"/todoitems/{todo.Id}", todo);
}).Produces<Todo>(201);

While the code above is certainly functional, and many teams are succeeding today using a similar strategy with older tools like MediatR, the Wolverine team thinks there are some areas to improve in the code above:

  1. When you look into the internals of the runtime, there’s some potentially unnecessary performance overhead, as every single call to that web service performs service location and dictionary lookups that could be eliminated
  2. There’s some opportunity to reduce object allocations on each request — and that can be a big deal for performance and scalability
  3. It’s not that bad, but there’s some boilerplate code above that serves no purpose at runtime but helps in the generation of OpenAPI documentation through Swashbuckle

At this point, let’s look at some tooling in the WolverineFx.Http Nuget library that can help you incorporate Wolverine into ASP.Net Core applications in a potentially more successful way than trying to “just” use Wolverine as a mediator tool.

After adding the WolverineFx.Http Nuget to our Todo web service, I could use this option for a little bit more efficient delegation to the underlying Wolverine message handler:

// This is *almost* an equivalent, but you'd get a status
// code of 200 instead of 201. If you care about that anyway.
app.MapPostToWolverine<CreateTodo, Todo>("/todoitems");

The code up above is very close to a functional equivalent of our earlier Minimal API or MVC Controller usage, but there are a couple of differences:

  1. In this case the HTTP endpoint will return a status code of 200 instead of the slightly more correct 201 that denotes a creation. Most of us aren’t really going to care, but we’ll come back to this a little later
  2. In the call to MapPostToWolverine(), Wolverine.HTTP is able to make a couple performance optimizations that completely eliminates any usage of the application’s IoC container at runtime and bypasses some dictionary lookups and object allocation that would have to occur in the simple “mediator” approach

I personally find that delegating to a mediator tool adds more code ceremony and indirection than I prefer, but many folks like that approach because of how bloated MVC Controller types can become in enterprise systems over time. What if instead we just had a much cleaner way to code an HTTP endpoint that still helped us out with OpenAPI documentation?

That’s where the Wolverine.Http “endpoint” model comes into play. Let’s take the same Todo creation endpoint and use Wolverine to build an HTTP endpoint:

// Introducing this special type just for the http response
// gives us back the 201 status code
public record TodoCreationResponse(int Id) 
    : CreationResponse("/todoitems/" + Id);

// The "Endpoint" suffix is meaningful, but you could use
// any name if you don't mind adding extra attributes or a marker interface
// for discovery
public static class TodoCreationEndpoint
{
    [WolverinePost("/todoitems")]
    public static (TodoCreationResponse, TodoCreated) Post(CreateTodo command, IDocumentSession session)
    {
        var todo = new Todo { Name = command.Name };
        
        // Just telling Marten that there's a new entity to persist,
        // but I'm assuming that the transactional middleware in Wolverine is
        // handling the asynchronous persistence outside of this handler
        session.Store(todo);

        // By Wolverine.Http conventions, the first "return value" is always
        // assumed to be the Http response, and any subsequent values are
        // handled independently
        return (
            new TodoCreationResponse(todo.Id), 
            new TodoCreated(todo.Id)
        );
    }
}

The code above will actually generate the exact same OpenAPI documentation as the MVC Controller or Minimal API samples earlier in this post, but there’s significantly less boilerplate code needed to expose that information. Instead, Wolverine.Http relies on type signatures to “know” what the OpenAPI metadata for an endpoint should be. In conjunction with Wolverine’s Marten integration (or Wolverine’s EF Core integration too!), you potentially get a very low ceremony approach to writing HTTP services that also utilizes Wolverine’s durable outbox without giving up anything in regards to crafting effective and accurate OpenAPI metadata about your services.

Learn more about Wolverine.Http in the documentation (that’s hopefully growing really soon).

Wolverine’s Runtime Architecture

I’m working up the documentation website for Wolverine today, and just spent several hours putting together a better description of its runtime architecture. Before I even publish the real thing, here’s an early version of that.

The two key parts of a Wolverine application are messages:

// A "command" message
public record DebitAccount(long AccountId, decimal Amount);

// An "event" message
public record AccountOverdrawn(long AccountId);

And the message handling code for the messages, which in Wolverine’s case just means a function or method that accepts the message type as its first argument like so:

public static class DebitAccountHandler
{
    public static void Handle(DebitAccount account)
    {
        Console.WriteLine($"I'm supposed to debit {account.Amount} from account {account.AccountId}");
    }
}

Invoking a Message Inline

At runtime, you can use Wolverine to invoke the message handling for a message inline in the current executing thread with Wolverine effectively acting as a mediator:

It’s a bit more complicated than that though, as the inline invocation looks like this simplified sequence diagram:

As you can hopefully see, even the inline invocation is adding some value beyond merely “mediating” between the caller and the actual message handler by:

  1. Wrapping Open Telemetry tracing and execution metrics around the execution
  2. Correlating the execution in logs to the original calling activity
  3. Providing some inline retry error handling policies for transient errors
  4. Publishing cascading messages from the message execution only after the execution succeeds as an in memory outbox
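The calling side of inline invocation is just this (a minimal sketch; IMessageBus would normally be injected from the IoC container):

```csharp
using Wolverine;

public static class DebitAccountInvoker
{
    public static async Task Invoke(IMessageBus bus)
    {
        // Executes DebitAccountHandler.Handle() inline on the current thread,
        // with Wolverine wrapping the tracing, metrics, correlation, and
        // retry policies described above around the execution
        await bus.InvokeAsync(new DebitAccount(1111, 250.00m));
    }
}
```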

Asynchronous Messaging

You can, of course, happily publish messages to an external queue and consume those very same messages later in the same process.

The other main usage of Wolverine is to send messages from your current process to another process through some kind of external transport like a Rabbit MQ/Azure Service Bus/Amazon SQS queue and have Wolverine execute that message in another process (or back to the original process):

The internals of publishing a message are shown in this simplified sequence diagram:

Along the way, Wolverine has to:

  1. Serialize the message body
  2. Route the outgoing message to the proper subscriber(s)
  3. Utilize any publishing rules like “this message should be discarded after 10 seconds”
  4. Map the outgoing Wolverine Envelope representation of the message into whatever the underlying transport (Azure Service Bus et al.) uses
  5. Actually invoke the actual messaging infrastructure to send out the message
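In code, the sending side looks something like this sketch. DeliveryOptions and DeliverWithin reflect my reading of the 1.0 API, so treat the exact names as assumptions:

```csharp
using Wolverine;

public static class SendingExamples
{
    public static async Task Send(IMessageBus bus)
    {
        // Publish to any subscribers registered for AccountOverdrawn
        await bus.PublishAsync(new AccountOverdrawn(1111));

        // Apply a publishing rule at the call site: the message should
        // be discarded if it can't be delivered within 10 seconds
        await bus.SendAsync(new AccountOverdrawn(1111), new DeliveryOptions
        {
            DeliverWithin = TimeSpan.FromSeconds(10)
        });
    }
}
```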

On the flip side, listening for a message follows this sequence shown for the “happy path” of receiving a message through Rabbit MQ:

During the listening process, Wolverine has to:

  1. Map the incoming Rabbit MQ message to Wolverine’s own Envelope structure
  2. Determine what the actual message type is based on the Envelope data
  3. Find the correct executor strategy for the message type
  4. Deserialize the raw message data to the actual message body
  5. Call the inner message executor for that message type
  6. Carry out quite a bit of Open Telemetry activity tracing, report metrics, and just plain logging
  7. Evaluate any errors against the error handling policies of the application or the specific message type
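Step 7 is driven by the error handling policies you configure inside UseWolverine(). As a hedged sketch (the method names are assumptions based on my understanding of Wolverine’s error handling API):

```csharp
builder.Host.UseWolverine(opts =>
{
    // Retry transient failures a few times before
    // treating the message as failed
    opts.OnException<TimeoutException>().RetryTimes(3);

    // Park messages that hit an unrecoverable error
    // in the dead letter / error queue instead
    opts.OnException<InvalidDataException>().MoveToErrorQueue();
});
```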

Endpoint Types

Not all transports support all three endpoint modes, and Wolverine will helpfully assert when you try to choose an invalid option.

Inline Endpoints

Wolverine endpoints come in three basic flavors, with the first being Inline endpoints:

// Configuring a Wolverine application to listen to
// an Azure Service Bus queue with the "Inline" mode
opts.ListenToAzureServiceBusQueue("inline-receiver").ProcessInline();

With inline endpoints, as the name implies, calling IMessageBus.SendAsync() immediately sends the message to the external message broker. Likewise, messages received from an external message queue are processed inline before Wolverine acknowledges to the message broker that the message is received.

In the absence of a durable inbox/outbox, using inline endpoints is “safer” in terms of guaranteed delivery. As you might expect, inline agents can bottleneck the message processing, but that can be alleviated by opting into parallel listeners.
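
If my memory of the fluent API is right, opting into those parallel listeners looks something like this (the queue name and listener count are just illustrative):

```csharp
// Alleviating the inline bottleneck by asking Wolverine to start
// multiple parallel listening agents for this one endpoint
opts.ListenToAzureServiceBusQueue("inline-receiver")
    .ProcessInline()
    .ListenerCount(5);
```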

Buffered Endpoints

In the second Buffered option, messages are queued locally between the actual external broker and the Wolverine handlers or senders.

To opt into buffering, you use this syntax:

// I overrode the buffering limits just to show
// that they exist for "back pressure"
opts.ListenToAzureServiceBusQueue("incoming")
    .BufferedInMemory(new BufferingLimits(1000, 200));

At runtime, you have a local TPL Dataflow queue between the Wolverine callers and the broker:

On the listening side, buffered endpoints do support back pressure (of sorts) where Wolverine will stop the actual message listener if too many messages are queued in memory to avoid chewing up your application memory. In transports like Amazon SQS that only support batched message sending or receiving, Buffered is the default mode as that facilitates message batching.

Buffered message sending and receiving can lead to higher throughput, and should be considered for cases where messages are ephemeral or expire and throughput is more important than delivery guarantees. The downside is that messages in the in memory queues can be lost in the case of the application shutting down unexpectedly — but Wolverine tries to “drain” the in memory queues on normal application shutdown.
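
For that ephemeral message scenario, a buffered, expiring publishing rule might look something like this sketch (I'm assuming the `DeliverWithin()` publishing rule here; the `StatusUpdated` message and queue name are hypothetical):

```csharp
public record StatusUpdated(Guid Id, string Status);

// Buffered sending for ephemeral, high-throughput messages that
// are worthless after a few seconds anyway
opts.PublishMessage<StatusUpdated>()
    .ToAzureServiceBusQueue("status")
    .BufferedInMemory()
    .DeliverWithin(TimeSpan.FromSeconds(5));
```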

Durable Endpoints

Durable endpoints behave like buffered endpoints, but also use the durable inbox/outbox message storage to create much stronger guarantees about message delivery and processing. You will need to use Durable endpoints in order to truly take advantage of the persistent outbox mechanism in Wolverine. To opt into making an endpoint durable, use this syntax:

// I overrode the buffering limits just to show
// that they exist for "back pressure"
opts.ListenToAzureServiceBusQueue("incoming")
    .UseDurableInbox(new BufferingLimits(1000, 200));

opts.PublishAllMessages().ToAzureServiceBusQueue("outgoing")
    .UseDurableOutbox();

Or use policies to do this in one fell swoop (which may not be what you actually want, but you could do this!):

opts.Policies.UseDurableOutboxOnAllSendingEndpoints();

As shown below, the Durable endpoint option adds an extra step to the Buffered behavior to add database storage of the incoming and outgoing messages:

Outgoing messages are deleted from the durable outbox upon successful sending acknowledgement from the external broker. Likewise, incoming messages are deleted from the durable inbox upon successful message execution.

The Durable endpoint option makes Wolverine’s local queueing robust enough to use for cases where you need guaranteed processing of messages, but don’t want to use an external broker.
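
Using durable local queues without any external broker would look something like this sketch (queue name and the `ImportantCommand` message type are placeholders):

```csharp
public record ImportantCommand(Guid Id);

// Make a local, in-process queue durable so messages survive
// unexpected application shutdowns
opts.LocalQueue("important-work")
    .UseDurableInbox();

// Route a message type to that durable local queue
opts.PublishMessage<ImportantCommand>()
    .ToLocalQueue("important-work");
```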

How Wolverine Calls Your Message Handlers

Wolverine is a little different animal from the tools with similar features in the .NET ecosystem (pun intended!). Instead of the typical strategy of requiring you to implement an adapter interface of some sort in your code, Wolverine uses dynamically generated code to “weave” its internal adapter code and even middleware around your message handler code.

In ideal circumstances, Wolverine is able to completely remove the runtime usage of an IoC container for even better performance. The end result is a runtime pipeline that is able to accomplish its tasks with potentially much less performance overhead than comparable .NET frameworks that depend on adapter interfaces and copious runtime usage of IoC containers.
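
You can even take the runtime code generation out of the picture entirely by having Wolverine load pre-generated types. As a hedged sketch, assuming you've written the generated code ahead of time with Wolverine's command line tooling:

```csharp
using JasperFx.CodeGeneration;

// Tell Wolverine to load pre-built, statically compiled handler
// types rather than generating and compiling code at runtime
opts.CodeGeneration.TypeLoadMode = TypeLoadMode.Static;
```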

IoC Container Integration

Lamar started its life as “Blue Milk,” and was originally built specifically to support the “Jasper” framework which was eventually renamed and rebooted as “Wolverine.” Even though Lamar was released many years before Wolverine, it was always intended to help make Wolverine possible.

Wolverine is only able to use Lamar as its IoC container, and actually quietly registers Lamar with your .NET application within any call to UseWolverine(). Wolverine actually uses Lamar’s configuration model to help build out its dynamically generated code and can mostly go far enough to recreate what would be Lamar’s “instance plan” with plain old C# as a way of making the runtime operations a little bit leaner.
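
That Lamar registration happens implicitly inside the normal bootstrapping. A minimal sketch of wiring Wolverine into a .NET application with the generic host:

```csharp
using Microsoft.Extensions.Hosting;
using Wolverine;

var builder = Host.CreateDefaultBuilder(args);

// UseWolverine() quietly swaps Lamar in as the IoC container
// and is where all endpoint/policy configuration lives
builder.UseWolverine(opts =>
{
    // transport, endpoint, and policy configuration goes here
});

using var host = builder.Build();
await host.RunAsync();
```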

I’m doing it my way. Finally. God help me.

Today is my first full day as employee #1 at JasperFx Software as I (in collaboration with Oskar Dudycz and Babu Annamalai) start a services/consulting/product company around the “Critter Stack” tools (Marten and Wolverine). Not much to say about the actual company until I’m ready for a hard launch later this month, but I’d like to take a few paragraphs just to describe how I’m taking a big leap into being self-employed for the first time at the tender age of 49.

Spinning up a company to try to make a living working on my OSS “portfolio” has been my main professional ambition for at least 15 years. I made a pretty serious effort to get there in the early 2010’s with the FubuMVC project. Since then, I’ve honestly felt adrift professionally after I had to admit that FubuMVC had utterly failed as a project. I’ve tried being the lead of a couple different architecture teams where I was hopefully a somewhat helpful mentor to other folks, but never really had any significant traction. I also got to do a little bit of software consulting that I genuinely enjoy, but that was never my official position.

I’ve always been risk averse to a fault, and that’s probably held me back in my personal career for years. A couple things changed in the last couple years that finally got me out the door to take the leap into doing my own thing:

  • Marten started taking off in adoption after the giant V4 release, especially around the event store functionality as folks discovered that Marten is a much easier way to start with event sourcing than most other tools
  • While Oskar has probably been pushing for commercializing Marten for years, a random conversation online with someone who worked at the time for a commercial competitor of Marten convinced me that there was a solid business opportunity that I might have been wasting
  • My wife has given me a tremendous amount of support and encouragement to finally go off and do this, and that’s probably been the biggest factor in me finally going off on my own
  • And lastly, I have to thank my “investors” below, as the small inheritance I received from my late, very beloved grandparents is helping fund this endeavor:

Anyway, on with the new work. Expect a lot more content from me soon here and possibly on YouTube. And an official announcement about my new company when that’s ready to go.