Marten Linq Provider Improvements

A couple years ago I was in a small custom software development shop in Austin as “the .NET guy” for the company. The “Java guy” in the company asked me one day to name one thing about .NET that he could look at that wasn’t just a copy of something older in the JVM. I almost immediately told him to look at LINQ (Language INtegrated Query, for the non-.NET folks who might stumble into this), as there isn’t really a one-to-one equivalent and I’d argue that LINQ is a real advantage within the .NET space.

As the author and primary support person for Marten’s LINQ provider though, I have a decidedly mixed view of LINQ. It’s undoubtedly a powerful tool for .NET developers, but it’s maybe my least favorite thing to support in my entire OSS purview, as a LINQ provider is a permutation hell kind of problem. To put it in perspective, I start making oodles of references to Through the Looking Glass anytime I have to spend any significant amount of time dealing with our LINQ support.

Nevertheless, Marten has an uncomfortably large backlog of LINQ related issues, and we had a generous GitHub sponsorship specifically to improve the efficiency of the SQL generated for child collection queries in Marten, so I’ve been working on and off for a couple of months on a complete overhaul of our LINQ support that will land in Marten 7.0 sometime in the next couple of months. Just in the last week I finally had a couple of breakthroughs I’m ready to share. First though, let’s all get in the right headspace with some psychedelic music:

RIP Tom Petty!

And I’m going w/ Grace Potter’s cover version!

Alright, so back to the real problem. When Marten today encounters a LINQ query like this one:

        var results = theSession.Query<Top>().Where(x =>
            x.Middles.Any(m => m.Color == Colors.Green && m.Bottoms.Any(b => b.Name == "Bill")));

Marten generates a really fugly SQL query using PostgreSQL Common Table Expressions to explode the child collections out into flat rows that can then be filtered down to matching child rows, then finally uses a sub query filter on the original table to find the right rows. In plain terms, all that mumbo jumbo means “a big ass, slow query that doesn’t allow PostgreSQL to utilize its fancy GIN index support for faster JSONB querying.”

The Marten v7 support will be smart enough to “know” when it can generate more efficient SQL for certain child collection filtering. In the case above, Marten v7 can use the PostgreSQL containment operator to utilize the GIN indexing support and just be simpler in general with SQL like this:

select d.id, d.data from public.mt_doc_top as d where CAST(d.data ->> 'Middles' as jsonb) @> :p0 LIMIT :p1
  p0: [{"Color":2,"Bottoms":[{"Name":"Bill"}]}]
  p1: 2
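For context, the containment operator can only exploit GIN indexing if a GIN index actually exists on the JSONB data. A sketch of the kind of index involved follows; the index name here is made up, and whether Marten creates it for you depends on your document configuration:

```sql
-- Hypothetical GIN index over the document body. The jsonb_path_ops
-- operator class makes @> containment checks cheaper, at the cost of
-- supporting fewer operators than the default jsonb_ops class.
CREATE INDEX mt_doc_top_idx_data
    ON public.mt_doc_top
    USING GIN (data jsonb_path_ops);
```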

You might have to take my word for it right now that the SQL above is significantly more efficient than the previous LINQ support.

One more sample that I’m especially proud of. Let’s say you use this LINQ query:

        var result = await theSession
            .Query<Root>()
            .Where(r => r.ChildsLevel1.Count(c1 => c1.Name == "child-1.1") == 1)
            .ToListAsync();

This one’s a little more complicated because you need to test the *number* of matching child elements within a child collection. Again, Marten vCurrent will use a nasty and not terribly efficient common table expression approach to give you the right data. For Marten v7, we specifically asked the Marten user base if we could drop support for any PostgreSQL version lower than PostgreSQL 12. *That* lets us use PostgreSQL’s JSONPath query support within our LINQ provider and gets us to this SQL for the LINQ query from up above:

select d.id, d.data from public.mt_doc_root as d where jsonb_array_length(jsonb_path_query_array(d.data, '$.ChildsLevel1[*] ? (@.Name == $val1)', :p0)) = :p1
  p0: {"val1":"child-1.1"}
  p1: 1
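If you want to see the JSONPath mechanics in isolation, you can run the same expression against a JSONB literal directly in psql. This is a standalone illustration with made-up data, not Marten output:

```sql
-- Count the child elements matching the JSONPath filter. The third
-- argument supplies the $val1 variable used inside the path expression.
select jsonb_array_length(
    jsonb_path_query_array(
        '{"ChildsLevel1":[{"Name":"child-1.1"},{"Name":"child-1.2"}]}'::jsonb,
        '$.ChildsLevel1[*] ? (@.Name == $val1)',
        '{"val1":"child-1.1"}'::jsonb
    )
);
-- returns 1
```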

It’s still a ways off, but the point of this post is that there are some significant improvements coming to Marten’s LINQ provider soon. More importantly to me, finishing this work up and knocking out the slew of open LINQ related GitHub issues will allow the Marten core team to focus on much more exciting new functionality on the event sourcing side of things.

A-Frame Architecture with Wolverine

I’m weaseling my way into a second blog post about a code sample that I mostly stole, just to meet my unofficial goal of 2-3 posts a week promoting the Critter Stack.

Last week I wrote a blog post ostensibly about Marten’s compiled query feature that also included this code sample that I adapted from Oskar’s excellent post on vertical slices:

using DailyAvailability = System.Collections.Generic.IReadOnlyList<Booking.RoomReservations.GettingRoomTypeAvailability.DailyRoomTypeAvailability>;
 
namespace Booking.RoomReservations.ReservingRoom;
 
public record ReserveRoomRequest(
    RoomType RoomType,
    DateOnly From,
    DateOnly To,
    string GuestId,
    int NumberOfPeople
);
 
public static class ReserveRoomEndpoint
{
    // More on this in a second...
    public static async Task<DailyAvailability> LoadAsync(
        ReserveRoomRequest request,
        IDocumentSession session)
    {
        // Look up the availability of this room type during the requested period
        return (await session.QueryAsync(new GetRoomTypeAvailabilityForPeriod(request))).ToList();
    }
 
    [WolverinePost("/api/reservations")]
    public static (CreationResponse, StartStream<RoomReservation>) Post(
        ReserveRoomRequest command,
        DailyAvailability dailyAvailability)
    {
        // Make sure there is availability for every day
        if (dailyAvailability.Any(x => x.AvailableRooms == 0))
        {
            throw new InvalidOperationException("Not enough available rooms!");
        }
 
        var reservationId = CombGuidIdGeneration.NewGuid().ToString();
 
        // I copied this, but I'd probably eliminate the record usage in favor
        // of init only properties so you can make the potentially error prone
        // mapping easier to troubleshoot in the future
        // That folks is the voice of experience talking
        var reserved = new RoomReserved(
            reservationId,
            null,
            command.RoomType,
            command.From,
            command.To,
            command.GuestId,
            command.NumberOfPeople,
            ReservationSource.Api,
            DateTimeOffset.UtcNow
        );
 
        return (
            // This would be the response body, and this also helps Wolverine
            // to create OpenAPI metadata for the endpoint
            new CreationResponse($"/api/reservations/{reservationId}"),
             
            // This return value is recognized by Wolverine as a "side effect"
            // that will be processed as part of a Marten transaction
            new StartStream<RoomReservation>(reservationId, reserved)
        );
    }
}

The original intent of that code sample was to show off how the full “critter stack” (Marten & Wolverine together) enables relatively low ceremony code that also promotes a high degree of testability. And it does all of that without requiring developers to invest a lot of time in complicated, prescriptive architectures like a typical Clean Architecture structure.

Specifically today though, I want to zoom in on “testability” and talk about how Wolverine explicitly encourages code that exhibits what Jim Shore famously called the “A-Frame Architecture” in its message handlers, but does so with functional decomposition rather than oodles of abstractions and layers.

Using the “A-Frame Architecture”, you roughly want to divide your code into three sets of functionality:

  1. The domain logic for your system, which I would say includes “deciding” what actions to take next.
  2. Infrastructural service providers
  3. Conductor or mediator code that invokes both the infrastructure and domain logic code to decouple the domain logic from infrastructure code

In the message handler above for the `ReserveRoomRequest` command, Wolverine itself is acting as the “glue” around the methods of the HTTP handler code, keeping the domain logic (the ReserveRoomEndpoint.Post() method that “decides” what event should be captured) decoupled from the raw Marten infrastructure that loads existing data and persists changes back to the database.

To illustrate that in action, here’s the full generated code that Wolverine compiles to actually handle the full HTTP request (with some explanatory annotations I made by hand):

    public class POST_api_reservations : Wolverine.Http.HttpHandler
    {
        private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
        private readonly Marten.ISessionFactory _sessionFactory;

        public POST_api_reservations(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Marten.ISessionFactory sessionFactory) : base(wolverineHttpOptions)
        {
            _wolverineHttpOptions = wolverineHttpOptions;
            _sessionFactory = sessionFactory;
        }



        public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
        {
            await using var documentSession = _sessionFactory.OpenSession();
            var (command, jsonContinue) = await ReadJsonAsync<Booking.RoomReservations.ReservingRoom.ReserveRoomRequest>(httpContext);
            if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;

            // Wolverine has a convention to call methods named
            // "LoadAsync()" before the main endpoint method, and
            // to pipe data returned from this "Before" method
            // to the parameter inputs of the main method
            // as that actually makes sense
            var dailyRoomTypeAvailabilityIReadOnlyList = await Booking.RoomReservations.ReservingRoom.ReserveRoomEndpoint.LoadAsync(command, documentSession).ConfigureAwait(false);

            // Call the "real" HTTP handler method. 
            // The first value is the HTTP response body
            // The second value is a "side effect" that
            // will be part of the transaction around this
            (var creationResponse, var startStream) = Booking.RoomReservations.ReservingRoom.ReserveRoomEndpoint.Post(command, dailyRoomTypeAvailabilityIReadOnlyList);
            
            // Placed by Wolverine's ISideEffect policy
            startStream.Execute(documentSession);

            // This little ugly code helps get the correct
            // status code for creation for those of you 
            // who can't be satisfied by using 200 for everything
            ((Wolverine.Http.IHttpAware)creationResponse).Apply(httpContext);
            
            // Commit any outstanding Marten changes
            await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);

            // Write the response body as JSON
            await WriteJsonAsync(httpContext, creationResponse);
        }

    }

Wolverine itself is acting as the mediator between the infrastructure concerns (loading & persisting data) and the business logic, which in the Wolverine world becomes a pure function that is typically much easier to unit test than code that has direct coupling to infrastructure concerns — even if that coupling is through abstractions.

Testing wise, if I were actually building a real endpoint like that shown above, I would choose to:

  1. Unit test the Post() method itself by “pushing” inputs to it through the room availability and command data, then asserting the expected outcome on the event published through the StartStream<RoomReservation> value returned by that method. That’s pure state-based testing for the easiest possible unit testing. As an aside, I would claim that this method is an example of the Decider pattern for testable event sourcing business logic code.
  2. I don’t think I’d bother testing the LoadAsync() method by itself, but instead I’d opt to use something like Alba to write an end to end test at the HTTP layer to prove out the entire workflow, but only after the unit tests for the Post() method are all passing.
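To make that first point concrete, here’s a sketch of such a state-based unit test using xUnit and Shouldly. The exact shape of DailyRoomTypeAvailability isn’t shown in this post, so treat the availability construction (and the RoomType.Single enum value) as assumptions:

```csharp
[Fact]
public void post_is_rejected_when_any_day_has_no_available_rooms()
{
    var command = new ReserveRoomRequest(
        RoomType.Single,            // hypothetical enum value
        new DateOnly(2023, 10, 1),
        new DateOnly(2023, 10, 3),
        "guest-1",
        2);

    // One of the days is fully booked, so the endpoint should balk.
    // DailyRoomTypeAvailability's real shape isn't shown here, so this
    // construction is a sketch
    var availability = new List<DailyRoomTypeAvailability>
    {
        new() { AvailableRooms = 3 },
        new() { AvailableRooms = 0 }
    };

    // Pure function in, pure assertion out -- no mocks, no database
    Should.Throw<InvalidOperationException>(
        () => ReserveRoomEndpoint.Post(command, availability));
}
```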

Responsibility Driven Design

While the “A-Frame Architecture” metaphor is a relatively recent influence upon my design thinking, I’ve long been a proponent of Responsibility Driven Design (RDD) as explained in Rebecca Wirfs-Brock’s excellent A Brief Tour of Responsibility Driven Design. Don’t dismiss that paper because of its age; the basic concepts and strategies it puts forth for identifying the different responsibilities in your system as a prerequisite for designing or structuring code are absolutely useful even today.

Applying Responsibility Driven Design to the sample HTTP endpoint code above, I would say that:

  • The Marten IDocumentSession is a “service provider”
  • The Wolverine generated code acts as a “coordinator”
  • The Post() method is responsible for “deciding” what events should be emitted and persisted. One of the most helpful pieces of advice in RDD is to sometimes treat “deciding” to do an action as a separate responsibility from actually carrying out the action. That can lead to better isolating the decision making logic away from infrastructural concerns for easier testing

It’s also old as hell for software, but one of my personal favorite articles I ever wrote was Object Role Stereotypes for MSDN Magazine way back in 2008.

Compiled Queries with Marten

I had tentatively promised to do a full “critter stack” version of Oskar’s sample application in his Vertical Slices in Practice post last week that used Marten’s event sourcing support. I started doing that this morning, but quit because it was coming out too similar to my earlier post this week on Low Ceremony Vertical Slice Architecture with Wolverine.

In Oskar’s sample reservation booking application, there was an HTTP endpoint that handled a ReserveRoomRequest command and emitted a new RoomReserved event for a new RoomReservation event stream. Part of that processing was validating the availability of rooms of the requested type during the time period of the reservation request. Just for reference, here’s my version of Oskar’s ReserveRoomEndpoint:

using DailyAvailability = System.Collections.Generic.IReadOnlyList<Booking.RoomReservations.GettingRoomTypeAvailability.DailyRoomTypeAvailability>;

namespace Booking.RoomReservations.ReservingRoom;

public record ReserveRoomRequest(
    RoomType RoomType,
    DateOnly From,
    DateOnly To,
    string GuestId,
    int NumberOfPeople
);

public static class ReserveRoomEndpoint
{
    // More on this in a second...
    public static async Task<DailyAvailability> LoadAsync(
        ReserveRoomRequest request,
        IDocumentSession session)
    {
        // Look up the availability of this room type during the requested period
        return (await session.QueryAsync(new GetRoomTypeAvailabilityForPeriod(request))).ToList();
    }

    [WolverinePost("/api/reservations")]
    public static (CreationResponse, StartStream<RoomReservation>) Post(
        ReserveRoomRequest command,
        DailyAvailability dailyAvailability)
    {
        // Make sure there is availability for every day
        if (dailyAvailability.Any(x => x.AvailableRooms == 0))
        {
            throw new InvalidOperationException("Not enough available rooms!");
        }

        var reservationId = CombGuidIdGeneration.NewGuid().ToString();

        // I copied this, but I'd probably eliminate the record usage in favor
        // of init only properties so you can make the potentially error prone
        // mapping easier to troubleshoot in the future
        // That folks is the voice of experience talking
        var reserved = new RoomReserved(
            reservationId,
            null,
            command.RoomType,
            command.From,
            command.To,
            command.GuestId,
            command.NumberOfPeople,
            ReservationSource.Api,
            DateTimeOffset.UtcNow
        );

        return (
            // This would be the response body, and this also helps Wolverine
            // to create OpenAPI metadata for the endpoint
            new CreationResponse($"/api/reservations/{reservationId}"),
            
            // This return value is recognized by Wolverine as a "side effect"
            // that will be processed as part of a Marten transaction
            new StartStream<RoomReservation>(reservationId, reserved)
        );
    }
}

For this post, I’d like you to focus on the LoadAsync() method above. That’s utilizing Wolverine’s compound handler technique to split out the data loading so that the actual endpoint Post() method can be a pure function that’s easily unit tested by just “pushing” in the inputs and asserting on either the values returned or the presence of an exception in the validation logic.

Back to that LoadAsync() method. Let’s assume that this HTTP service is going to be under quite a bit of load and it wouldn’t hurt to apply some performance optimization. Or also imagine that the data querying to find the room availability of a certain room type and a time period will be fairly common within the system at large. I’m saying all that to justify the usage of Marten’s compiled query feature as shown below:

public class GetRoomTypeAvailabilityForPeriod : ICompiledListQuery<DailyRoomTypeAvailability>
{
    // Sorry, but this signature is necessary for the Marten mechanics
    public GetRoomTypeAvailabilityForPeriod()
    {
    }

    public GetRoomTypeAvailabilityForPeriod(ReserveRoomRequest request)
    {
        RoomType = request.RoomType;
        From = request.From;
        To = request.To;
    }

    public RoomType RoomType { get; set; }
    public DateOnly From { get; set; }
    public DateOnly To { get; set; }

    public Expression<Func<IMartenQueryable<DailyRoomTypeAvailability>, IEnumerable<DailyRoomTypeAvailability>>>
        QueryIs()
    {
        return q => q.Where(day => day.RoomType == RoomType && day.Date >= From && day.Date <= To);
    }
}
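For reference, executing the compiled query anywhere else in the codebase is just a matter of instantiating it and handing it to Marten. This is a sketch; the RoomType.Single enum value and the date range are made up, and `session` is assumed to be an open Marten session:

```csharp
// Reuse the same declarative query definition outside the endpoint
var query = new GetRoomTypeAvailabilityForPeriod
{
    RoomType = RoomType.Single,   // hypothetical enum value
    From = new DateOnly(2023, 10, 1),
    To = new DateOnly(2023, 10, 7)
};

// Marten parses and compiles the underlying LINQ expression once,
// then reuses the cached plan on every subsequent execution
var availability = await session.QueryAsync(query);
```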

First of all, this is Marten’s version of the Query Object pattern, which enables you to share the query definition in declarative ways throughout the codebase. (I’ve heard other folks call this a “Specification,” but that name is overloaded a bit too much in the software development world.) Removing duplication is certainly a good thing all by itself. Doing so in a way that eliminates the need for extra repository abstractions is also a win in my book.

Secondly, by using the “compiled query,” Marten is able to cache the whole execution plan in memory (technically it’s generating code at runtime) for faster runtime execution. The dirty, barely recognized fact in .NET development today is that the act of parsing LINQ statements and converting the intermediate query model into actionable SQL and glue code is not cheap. Marten compiled queries sidestep all that preliminary parsing junk and let you skip right to the execution part.

It’s a possibly underused and under-appreciated feature within Marten, but compiled queries are a great way to optimize your system’s performance and possibly clean up code duplication in simple ways.

Low Ceremony Vertical Slice Architecture with Wolverine

TL;DR: Wolverine can enable you to write testable code and achieve separation of concerns in your server side code with far less code ceremony than typical Clean Architecture type approaches.

I’m part of the mini-backlash against heavyweight, prescriptively layered architectural patterns like the various flavors of Hexagonal Architecture. I even did a whole talk on that subject at NDC Oslo this year:

Instead, I’m a big fan of keeping closely related code closer together with something like what Jimmy Bogard coined as Vertical Slices. Conveniently enough, I happen to think that Wolverine is a good fit for that style.

For a conference talk I did early last year, I started to build out a sample “TeleHealth Portal” system using the full “critter stack,” with Marten for persistence and event sourcing and Wolverine for everything else. Inside of this fictional TeleHealth system there is a web service that adds a healthcare provider to an active board of related appointment requests (as an example, you might have a board for pediatric appointments in the state of Texas). When this web service executes, it needs to:

  1. Find the related information about the requested, active Board and the Provider
  2. Validate that the provider in question is able to join the active board based on various business rules like “is this provider licensed in this particular state and for some specialty?”. If the validation fails, the web service should return the validation message per the ProblemDetails specification
  3. Assuming the validation is good, start a new event stream with Marten for a ProviderShift that will track what the provider does during their active shift on that board for that specific day

I’ll need to add a little more context afterward for some application configuration, but here’s that functionality in one single Wolverine.Http endpoint class — with the assumption that the heavy duty business logic for validating the provider & board assignment is in the business domain model:

public record StartProviderShift(Guid BoardId, Guid ProviderId);
public record ShiftStartingResponse(Guid ShiftId) : CreationResponse("/shift/" + ShiftId);

public static class StartProviderShiftEndpoint
{
    // This would be called before the method below
    public static async Task<(Board, Provider, IResult)> LoadAsync(StartProviderShift command, IQuerySession session)
    {
        // You could get clever here and batch the queries to Marten
        // here, but let that be a later optimization step
        var board = await session.LoadAsync<Board>(command.BoardId);
        var provider = await session.LoadAsync<Provider>(command.ProviderId);

        if (board == null || provider == null) return (board, provider, Results.BadRequest());

        // This just means "full speed ahead"
        return (board, provider, WolverineContinue.Result());
    }

    [WolverineBefore]
    public static IResult Validate(Provider provider, Board board)
    {
        // Check if you can proceed to add the provider to the board
        // This logic is out of the scope of this sample:)
        if (provider.CanJoin(board))
        {
            // Again, this value tells Wolverine to keep processing
            // the HTTP request
            return WolverineContinue.Result();
        }
        
        // No soup for you!
        var problems = new ProblemDetails
        {
            Detail = "Provider is ineligible to join this Board",
            Status = 400,
            Extensions =
            {
                [nameof(StartProviderShift.ProviderId)] = provider.Id,
                [nameof(StartProviderShift.BoardId)] = board.Id
            }
        };

        // Wolverine will execute this IResult
        // and stop all other HTTP processing
        return Results.Problem(problems);
    }
    
    [WolverinePost("/shift/start")]
    // In the tuple that's returned below,
    // The first value of ShiftStartingResponse is assumed by Wolverine to be the 
    // HTTP response body
    // The subsequent IStartStream value is executed as a side effect by Wolverine
    public static (ShiftStartingResponse, IStartStream) Create(StartProviderShift command, Board board, Provider provider)
    {
        var started = new ProviderJoined(board.Id, provider.Id);
        var op = MartenOps.StartStream<ProviderShift>(started);

        return (new ShiftStartingResponse(op.StreamId), op);
    }
}

And there are a few things I’d ask you to notice in the code above:

  1. It’s just one class in one file that’s largely using functional decomposition to establish separation of concerns
  2. Wolverine.Http is able to call the various methods in order from top to bottom, passing the loaded data from LoadAsync() to Validate() and finally on to the Create() method
  3. I didn’t bother with any kind of repository abstraction around the data loading in the first step
  4. The Validate() method is a pure function that’s suitable for easy unit testing of the validation logic
  5. The Create() method is also a pure, synchronous function that’s going to be easy to unit test as you can do assertions on the events contained in the IStartStream object
  6. Wolverine’s Marten integration is able to do the actual persistence of the new event stream for ProviderShift for you and deal with all the icky asynchronous junk
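As a concrete example of points 4 and 5, a unit test against Create() could be as simple as this sketch. The Board and Provider types aren’t shown in this post, so their construction here is an assumption:

```csharp
[Fact]
public void create_starts_a_provider_shift_stream()
{
    // Board and Provider shapes are assumed for this sketch
    var board = new Board { Id = Guid.NewGuid() };
    var provider = new Provider { Id = Guid.NewGuid() };
    var command = new StartProviderShift(board.Id, provider.Id);

    var (response, op) = StartProviderShiftEndpoint.Create(command, board, provider);

    // A stream id was assigned and handed back in the response body --
    // pure state-based assertions with no infrastructure in sight
    Assert.NotEqual(Guid.Empty, response.ShiftId);
    Assert.NotNull(op);
}
```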

For more context, here’s the (butt ugly) code that Wolverine generates for the HTTP endpoint:

    public class POST_shift_start : Wolverine.Http.HttpHandler
    {
        private readonly Wolverine.Http.WolverineHttpOptions _options;
        private readonly Marten.ISessionFactory _sessionFactory;

        public POST_shift_start(Wolverine.Http.WolverineHttpOptions options, Marten.ISessionFactory sessionFactory) : base(options)
        {
            _options = options;
            _sessionFactory = sessionFactory;
        }



        public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
        {
            await using var documentSession = _sessionFactory.OpenSession();
            await using var querySession = _sessionFactory.QuerySession();
            var (command, jsonContinue) = await ReadJsonAsync<TeleHealth.WebApi.StartProviderShift>(httpContext);
            if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;
            (var board, var provider, var result) = await TeleHealth.WebApi.StartProviderShiftEndpoint.LoadAsync(command, querySession).ConfigureAwait(false);
            if (!(result is Wolverine.Http.WolverineContinue))
            {
                await result.ExecuteAsync(httpContext).ConfigureAwait(false);
                return;
            }

            var validationResult = TeleHealth.WebApi.StartProviderShiftEndpoint.Validate(provider, board);
            if (!(validationResult is Wolverine.Http.WolverineContinue))
            {
                await validationResult.ExecuteAsync(httpContext).ConfigureAwait(false);
                return;
            }

            (var shiftStartingResponse, var startStream) = TeleHealth.WebApi.StartProviderShiftEndpoint.Create(command, board, provider);
            
            // Placed by Wolverine's ISideEffect policy
            startStream.Execute(documentSession);

            ((Wolverine.Http.IHttpAware)shiftStartingResponse).Apply(httpContext);
            
            // Commit any outstanding Marten changes
            await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);

            await WriteJsonAsync(httpContext, shiftStartingResponse);
        }

    }

In the application bootstrapping, I have Wolverine applying transactional middleware automatically:

builder.Host.UseWolverine(opts =>
{
    // more config...
    
    // Automatic usage of transactional middleware as 
    // Wolverine recognizes that an HTTP endpoint or message handler
    // persists data
    opts.Policies.AutoApplyTransactions();
});

And the Wolverine/Marten integration configured as well:

builder.Services.AddMarten(opts =>
    {
        var connString = builder
            .Configuration
            .GetConnectionString("marten");

        opts.Connection(connString);

        // There will be more here later...
    })

    // I added this to enroll Marten in the Wolverine outbox
    .IntegrateWithWolverine()

    // I also added this to opt into events being forward to
    // the Wolverine outbox during SaveChangesAsync()
    .EventForwardingToWolverine();

I’ll even go farther and say that in many cases Wolverine will allow you to establish decent separation of concerns and testability with far less ceremony than is required today with high overhead approaches like the popular Clean Architecture style.

Custom Error Handling Middleware for Wolverine.HTTP

Just a short one for today, mostly to answer a question that came in earlier this week.

When using Wolverine.Http to expose HTTP endpoint services that end up capturing Marten events, you might have an endpoint coded like this one from the Wolverine tests that takes in a command message and tries to start a new Marten event stream for the Order aggregate:

    [Transactional] // This can be omitted if you use auto-transactions
    [WolverinePost("/orders/create4")]
    public static (OrderStatus, IStartStream) StartOrder4(StartOrderWithId command)
    {
        var items = command.Items.Select(x => new Item { Name = x }).ToArray();

        // This is unique to Wolverine (we think)
        var startStream = MartenOps
            .StartStream<Order>(command.Id, new OrderCreated(items));

        return (
            new OrderStatus(startStream.StreamId, false),
            startStream
        );
    }

Where the command looks like this:

public record StartOrderWithId(Guid Id, string[] Items);

In the HTTP endpoint above, we’re:

  1. Creating a new event stream for Order that uses the stream/order id sent in the command
  2. Returning a response body of type OrderStatus to the caller
  3. Using Wolverine’s Marten integration to also return an IStartStream object that integrated middleware will apply to Marten’s IDocumentSession (more on this in my next post because we think this is a big deal by itself).

Great, easy enough, right? Just to add some complexity: if the caller happens to send up the same “new” order id additional times, then Marten will throw an exception called `ExistingStreamIdCollisionException`, just noting that no, you can’t create a new stream with that id because one already exists.

Marten’s behavior helps protect the data from duplication, but what about trying to make the HTTP response a little nicer by catching that exception automatically, and returning a ProblemDetails body with a 400 Bad Request status code to denote exactly what happened?

While you actually could do that globally with a bit of ASP.Net Core middleware, that applies everywhere at runtime and not just on the specific routes that could throw that exception. I’m not sure how big a deal this is to many of you, but using ASP.Net Core middleware would also be unable to have any impact on OpenAPI descriptions of your endpoints and it would be up to you to explicitly add attributes on your endpoints to denote the error handling response.

Fortunately, Wolverine’s middleware strategy will allow you to specifically target only the relevant routes and also add OpenAPI descriptions to your API’s generated documentation. And do so in a way that is arguably more efficient than the ASP.Net Core middleware approach at runtime anyway.

Jumping right into the deep end of the pool (I’m helping take my little ones swimming this afternoon and maybe thinking ahead), I’m going to build that policy like so:

public class StreamCollisionExceptionPolicy : IHttpPolicy
{
    private bool shouldApply(HttpChain chain)
    {
        // TODO -- and Wolverine needs a utility method on IChain to make this declarative
        // for future middleware construction
        return chain
            .HandlerCalls()
            .SelectMany(x => x.Creates)
            .Any(x => x.VariableType.CanBeCastTo<IStartStream>());
    }
    
    public void Apply(IReadOnlyList<HttpChain> chains, GenerationRules rules, IContainer container)
    {
        // Find *only* the HTTP routes where the route tries to create new Marten event streams
        foreach (var chain in chains.Where(shouldApply))
        {
            // Add the middleware on the outside
            chain.Middleware.Insert(0, new CatchStreamCollisionFrame());
            
            // Alter the OpenAPI metadata to register the ProblemDetails
            // path
            chain.Metadata.ProducesProblem(400);
        }
    }

    // Make the codegen easier by doing most of the work in this one method
    public static Task RespondWithProblemDetails(ExistingStreamIdCollisionException e, HttpContext context)
    {
        var problems = new ProblemDetails
        {
            Detail = $"Duplicated id '{e.Id}'",
            Extensions =
            {
                ["Id"] = e.Id
            },
            Status = 400 // The default is 500, so watch this
        };

        return Results.Problem(problems).ExecuteAsync(context);
    }
}

// This is the actual middleware that's injecting some code
// into the runtime code generation
internal class CatchStreamCollisionFrame : AsyncFrame
{
    public override void GenerateCode(GeneratedMethod method, ISourceWriter writer)
    {
        writer.Write("BLOCK:try");
        
        // Write the inner code here
        Next?.GenerateCode(method, writer);
        
        writer.FinishBlock();
        writer.Write($@"
BLOCK:catch({typeof(ExistingStreamIdCollisionException).FullNameInCode()} e)
await {typeof(StreamCollisionExceptionPolicy).FullNameInCode()}.{nameof(StreamCollisionExceptionPolicy.RespondWithProblemDetails)}(e, httpContext);
return;
END

");
    }
}

And apply the middleware to the application like so:

app.MapWolverineEndpoints(opts =>
{
    // more configuration for HTTP...
    opts.AddPolicy<StreamCollisionExceptionPolicy>();
});

And lastly, here’s a test using Alba that just verifies the behavior end to end by trying to create a new event stream with the same id multiple times:

    [Fact]
    public async Task use_stream_collision_policy()
    {
        var id = Guid.NewGuid();
        
        // First time should be fine
        await Scenario(x =>
        {
            x.Post.Json(new StartOrderWithId(id, new[] { "Socks", "Shoes", "Shirt" })).ToUrl("/orders/create4");
        });
        
        // Second time hits an exception from stream id collision
        var result2 = await Scenario(x =>
        {
            x.Post.Json(new StartOrderWithId(id, new[] { "Socks", "Shoes", "Shirt" })).ToUrl("/orders/create4");
            x.StatusCodeShouldBe(400);
        });

        // And let's verify that we got what we expected for the ProblemDetails
        // in the HTTP response body of the 2nd request
        var details = result2.ReadAsJson<ProblemDetails>();
        Guid.Parse(details.Extensions["Id"].ToString()).ShouldBe(id);
        details.Detail.ShouldBe($"Duplicated id '{id}'");
    }

To make it a little clearer what’s going on, Wolverine can always show you the generated code it uses for your HTTP endpoints like this (I reformatted the code for legibility with Rider):

public class POST_orders_create4 : HttpHandler
{
    private readonly WolverineHttpOptions _options;
    private readonly ISessionFactory _sessionFactory;

    public POST_orders_create4(WolverineHttpOptions options, ISessionFactory sessionFactory) : base(options)
    {
        _options = options;
        _sessionFactory = sessionFactory;
    }

    public override async Task Handle(HttpContext httpContext)
    {
        await using var documentSession = _sessionFactory.OpenSession();
        try
        {
            var (command, jsonContinue) = await ReadJsonAsync<StartOrderWithId>(httpContext);
            if (jsonContinue == HandlerContinuation.Stop)
            {
                return;
            }

            var (orderStatus, startStream) = MarkItemEndpoint.StartOrder4(command);

            // Placed by Wolverine's ISideEffect policy
            startStream.Execute(documentSession);

            // Commit any outstanding Marten changes
            await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);

            await WriteJsonAsync(httpContext, orderStatus);
        }
        catch (ExistingStreamIdCollisionException e)
        {
            await StreamCollisionExceptionPolicy.RespondWithProblemDetails(e, httpContext);
        }
    }
}

Critter Stack Futures

Starting this month, I think I’m going to blog openly about the ideas and directions that the “Critter Stack” team (Marten and Wolverine) and community is considering for where things go next. I think we’d love to hear any feedback or further suggestions about where this goes (and here’s the link to the Critter Stack Discord channel). I think we’re being mostly reactive to “what are our users struggling with” and “what are folks telling us about what’s stopping them from adopting Marten or Wolverine?”

For the immediate future, I’m trying to get my act together to actually have a real business structure and to be ready to start offering support or consulting contracts. I’m personally catching up on open Marten bugs, as I’ve been mostly busy with Wolverine lately and not helping the rest of the team much.

Strategic Things

Here are the big themes we’ve identified as necessary to push Marten and/or Wolverine into contention for world domination (or at least to entice enough paying users to make a living off of our passion projects here). The first four bullets are happening this year; the rest is just a fanciful vision board.

  1. 1st class subscriptions in Marten — This is the ability to register to listen to feeds of event data from Marten. Maybe you want to stream event data through to Kafka for additional processing. Maybe you want to update an ElasticSearch index for your data. Definitely you want this to work with all the reliability, monitoring, and error handling capabilities that you’d expect.
  2. Linq improvements in Marten — Try to utilize the JSONPath operators available in recent versions of PostgreSQL to re-enable the GIN/GIST index usage that was somewhat lost in V4, and try to greatly improve Marten’s Linq querying within document sub-collections. I hate working on the Linq parser in Marten, but I hate seeing Linq-related bugs filter in even more as folks try more and more things, so it’s time.
  3. Massive scalability of event projections — This is likely to be a new alternative to the current Marten async daemon that is able to load balance asynchronous projection processing across multiple application nodes. This improved daemon will be built with a combination of Marten and Wolverine as an add on product, likely with some kind of dual usage commercial license and official support license.
  4. Zero downtime, blue/green deployment for event sourcing — Closely related to the previous bullet. Everything you need to be able to blue/green deploy your application using Marten event sourcing without any down time. So, you’ll have support for versioned projections and zero downtime projection rebuilds as well. This will most likely be part of the commercial add on package for the Critter Stack
  5. User interface for monitoring or managing the Critter Stack — Just a placeholder; I’m not sure what the exact functionality would be here. And this will absolutely be a dual license commercial product of some sort.
  6. Sql Server backed event store — While the document database feature set in Marten is unlikely to ever be completely ported to Sql Server, it’s entirely possible to support Marten’s event sourcing on a Sql Server foundation
  7. Marten for the JVM??? — Just stay tuned on this one down the line. In all likelihood this would mean running Marten’s event store in a separate process, then using some kind of language neutral proxy mechanism (gRPC?) to capture events. Tentatively the idea for projections is to let users use TypeScript/JavaScript to define projections that will run in WebAssembly.
  8. AOT compilation / Serverless optimized recipe — There’s no chance in the world that any combination of Marten or Wolverine can work with AOT compilation without some significant changes on our side. I think it’s going to end up requiring some level of code generation to get there. I’m not clear about whether or not enough folks care about this right now to justify the effort.

Tactical Things

And also a list of hopefully quick wins that could help spur Critter Stack adoption:

  1. Open Telemetry support in Marten. We have this in Wolverine already, but not yet for Marten activity.
  2. Ability to raise events from projections in Marten, or issue commands as aggregates are updated, or I don’t know yet. All I know is that right now this seems to be coming up a lot in user questions in Discord.
  3. Document versioning in Marten
  4. Kafka transport in Wolverine
  5. Amazon SNS support in Wolverine
  6. Strongly-typed identifiers. Folks have been asking for this periodically in Marten. When it exists in Marten, I’d also like to pursue exploiting strongly-typed identifiers in Wolverine middleware to “know” when to load entities from identifiers automatically
  7. Expanding multi-tenancy support in Wolverine. Today Wolverine has a robust model for Marten-backed multi-tenancy in the message handling, but I’d like to see this extended to detecting tenant identifiers automatically in HTTP requests. I’d also like to extend the multi-tenancy support to EF Core backed persistence and SQL Server backed storage.
  8. Lightweight partial updates in Marten. This is the ability to issue updates to part of a Marten document without first loading the entire document. We’ve had this functionality from the very beginning, but it depends on Javascript support in PostgreSQL through the PLv8 extension that’s in a tenuous state. The new model would use native PostgreSQL features in place of the older JavaScript model.

Waddaya think?

Anything above sound compelling to you? Have questions about how some of that would work? Wanna make suggestions about how it should be done? Have *gasp* completely different suggestions for what we should improve instead in Marten/Wolverine to make it more attractive to your shop? Fire away in comments here or the Critter Stack Discord channel.

Wolverine 1.0 is Out!

All of the Nugets are named WolverineFx.* even though the libraries and namespaces are all Wolverine.*. Wolverine was well underway when I found out someone is squatting on the name “Wolverine” in Nuget, and that’s why there’s a discrepancy in the naming.

As of today, Wolverine is officially at 1.0 and available on Nuget! As far as I am concerned, this absolutely means that Wolverine is ready for production usage, the public APIs should be considered to be stable, and the documentation is reasonably complete.

To answer the obvious question of “what is it?”, Wolverine is a set of libraries that can be used in .NET applications as an:

  1. In memory “mediator”
  2. In memory command bus that can be very helpful for asynchronous processing
  3. Asynchronous messaging backbone for your application

And when combined with Marten to form the full fledged “critter stack,” I’m hoping that it grows to become the singular best platform for CQRS with Event Sourcing on any development platform.


Wolverine is significantly different (I think) from existing tools in the .NET space in that it delivers a developer experience that results in much less ceremony and friction — and I think that’s vital to enable teams to better iterate and adapt their code over time in ways you can’t efficiently do in higher ceremony tools.

“There are neither beginnings nor endings to the Wheel of Time. But it was a beginning.”

Robert Jordan

Now, software projects (and their accompanying documentation websites) are never complete, only abandoned. There’ll be bugs, holes in the current functionality, and feature requests as users hit usages and permutations that haven’t yet been considered in Wolverine. Even so, I have every intention of sticking with Wolverine and its sibling Marten project as Oskar, Babu, and I try to build a services/product model company of some sort around the tools.

And Wolverine 1.0.1 will surely follow soon as folks inevitably find issues with the initial version. Software projects like Wolverine are far more satisfying if you can think of them as a marathon and a continuous process.

The long meandering path here

My superpower compared to many of my peers is that I have a much longer attention span than most. It means that I have from time to time been the living breathing avatar of the sunk cost fallacy, but it’s also meant that Wolverine got to see the light of day.

To rewind a bit:

  • FubuMVC was an alternative OSS web development framework that started in earnest around 2009 with the idea of being low ceremony with a strong middleware approach
  • Around 2013 I helped build a minimal service bus tool called FubuTransportation that basically exposed the FubuMVC runtime approach to asynchronous messaging
  • Around 2015, after FubuMVC had clearly failed and what became known as .NET Core made .NET a *lot* more attractive again, I wrote some “vision” documents about what a next generation FubuMVC would look like on .NET Core, trying to learn from fubu’s technical and performance shortcomings
  • In late 2015, I helped build the very first version of Marten for internal usage at my then employer
  • In 2016 I started working in earnest on that reboot of FubuMVC and called it “Jasper,” but focused mostly on the service bus aspect of it
  • Jasper was released as 1.0 during the very worst of the pandemic in 2020 and was more or less abandoned by me and everyone else
  • Marten started gaining a lot of steam and took a big step forward in the giant 4.0 release in late 2021
  • Jasper was restarted in 2022 partially as a way to extend Marten into a full blown CQRS platform (don’t worry, both Marten & Wolverine are plenty useful by themselves)
  • The rebooted Jasper was renamed Wolverine in late 2022 and announced in a DotNetRocks episode and a JetBrains webinar
  • And finally a 1.0 in June 2023 after the reboot inevitably took longer than I’d hoped

A whole lot of gratitude and thanks

Wolverine has been gestating a long time and descends from the earlier FubuMVC efforts, so there’s been a lot of folks who have contributed or helped guide the shape of Wolverine along the way. Here’s a very incomplete list:

  • Chad Myers and Josh Flanagan started FubuMVC way back when and some of the ideas about how to apply middleware and even some code has survived even until now
  • Josh Arnold was my partner with FubuMVC for a long time
  • Corey Kaylor was part of the core FubuMVC team, wrote FubuTransportation with me, and was part of getting Marten off the ground
  • Oskar and Babu have worked with me on the Marten team for years, and they took the brunt of Marten support and the recent 6.0 release while I was focused on Wolverine
  • Khalid Abuhakmeh has helped quite a bit with both Marten & Wolverine strategy over the years and contributed all of the graphics for the projects
  • My previous boss Denys Grozenok did a lot to test early Wolverine, encouraged the work, and contributed quite a few ideas around usability
  • Eric J. Smith made some significant suggestions that streamlined the API usability of Wolverine

And thanks to quite a few other folks who have contributed code fixes, extensions, or taken the time to write bug reproductions that go a long way toward making a project like Wolverine better.

Http Services with Wolverine

For folks who have followed me for a while, yes, I’m back with yet another alternative HTTP web service framework in Wolverine.Http — but I swear that I learned a whole slew of lessons from FubuMVC‘s failure a decade ago. Wolverine.Http shown here is very much a citizen within the greater ASP.Net Core ecosystem and happily interoperates with a great deal of Minimal API and the rest of ASP.Net Core.

For folks who have no idea what a “fubu” is, Wolverine’s HTTP add on shown here is potentially a way to build more efficient web services in .NET with much less boilerplate and noise code than the equivalent functionality in ASP.Net Core MVC or Minimal API. And especially less code ceremony and indirection than you get with the usage of any kind of mediator tooling in conjunction with MVC or Minimal API.

Server side applications are frequently built with some mixture of HTTP web services, asynchronous processing, and asynchronous messaging. Wolverine by itself can help you with the asynchronous processing through its local queue functionality, and it certainly covers all common asynchronous messaging requirements.

For a simplistic example, let’s say that we’re inevitably building a “Todo” application where we want a web service endpoint that allows our application to create a new Todo entity, save it to a database, and raise a TodoCreated event that will be handled later and off to the side by Wolverine.

Even in this simple example usage, that endpoint should be developed such that the creation of the new Todo entity and the corresponding TodoCreated event message either succeed or fail together to avoid putting the system into an inconsistent state. That’s a perfect use case for Wolverine’s transactional outbox. While the Wolverine team believes that Wolverine’s outbox functionality is significantly easier to use outside of the context of message handlers than other .NET messaging tools, it’s still easiest to use within the context of a message handler, so let’s just build out a Wolverine message handler for the CreateTodo command:

public class CreateTodoHandler
{
    public static (Todo, TodoCreated) Handle(CreateTodo command, IDocumentSession session)
    {
        var todo = new Todo { Name = command.Name };
        
        // Just telling Marten that there's a new entity to persist,
        // but I'm assuming that the transactional middleware in Wolverine is
        // handling the asynchronous persistence outside of this handler
        session.Store(todo);

        return (todo, new TodoCreated(todo.Id));
    }   
}

Okay, but we still need to expose a web service endpoint for this functionality. We could utilize Wolverine within an MVC controller as a “mediator” tool like so:

public class TodoController : ControllerBase
{
    [HttpPost("/todoitems")]
    [ProducesResponseType(201, Type = typeof(Todo))]
    public async Task<ActionResult> Post(
        [FromBody] CreateTodo command, 
        [FromServices] IMessageBus bus)
    {
        // Delegate to Wolverine and capture the response
        // returned from the handler
        var todo = await bus.InvokeAsync<Todo>(command);
        return Created($"/todoitems/{todo.Id}", todo);
    }    
}

Or we could do the same thing with Minimal API:

// app in this case is a WebApplication object
app.MapPost("/todoitems", async (CreateTodo command, IMessageBus bus) =>
{
    var todo = await bus.InvokeAsync<Todo>(command);
    return Results.Created($"/todoitems/{todo.Id}", todo);
}).Produces<Todo>(201);

While the code above is certainly functional, and many teams are succeeding today using a similar strategy with older tools like MediatR, the Wolverine team thinks there are some areas to improve:

  1. When you look into the internals of the runtime, there’s some potentially unnecessary performance overhead, as every single call to that web service does service location and dictionary lookups that could be eliminated
  2. There’s some opportunity to reduce object allocations on each request — and that can be a big deal for performance and scalability
  3. It’s not that bad, but there’s some boilerplate code above that serves no purpose at runtime but helps in the generation of OpenAPI documentation through Swashbuckle

At this point, let’s look at some tooling in the WolverineFx.Http Nuget library that can help you incorporate Wolverine into ASP.Net Core applications in a potentially more successful way than trying to “just” use Wolverine as a mediator tool.

After adding the WolverineFx.Http Nuget to our Todo web service, I could use this option for a little bit more efficient delegation to the underlying Wolverine message handler:

// This is *almost* an equivalent, but you'd get a status
// code of 200 instead of 201. If you care about that anyway.
app.MapPostToWolverine<CreateTodo, Todo>("/todoitems");

The code above is very close to a functional equivalent of our earlier Minimal API or MVC Controller usage, but there are a couple of differences:

  1. In this case the HTTP endpoint will return a status code of 200 instead of the slightly more correct 201 that denotes a creation. Most of us aren’t really going to care, but we’ll come back to this a little later
  2. In the call to MapPostToWolverine(), Wolverine.HTTP is able to make a couple performance optimizations that completely eliminate any usage of the application’s IoC container at runtime and bypass some dictionary lookups and object allocation that would have to occur in the simple “mediator” approach

I personally find that delegating to a mediator tool adds more code ceremony and indirection than I prefer, but many folks like that approach because of how bloated MVC Controller types can become in enterprise systems over time. What if instead we just had a much cleaner way to code an HTTP endpoint that still helped us out with OpenAPI documentation?

That’s where the Wolverine.Http “endpoint” model comes into play. Let’s take the same Todo creation endpoint and use Wolverine to build an HTTP endpoint:

// Introducing this special type just for the http response
// gives us back the 201 status code
public record TodoCreationResponse(int Id) 
    : CreationResponse("/todoitems/" + Id);

// The "Endpoint" suffix is meaningful, but you could use
// any name if you don't mind adding extra attributes or a marker interface
// for discovery
public static class TodoCreationEndpoint
{
    [WolverinePost("/todoitems")]
    public static (TodoCreationResponse, TodoCreated) Post(CreateTodo command, IDocumentSession session)
    {
        var todo = new Todo { Name = command.Name };
        
        // Just telling Marten that there's a new entity to persist,
        // but I'm assuming that the transactional middleware in Wolverine is
        // handling the asynchronous persistence outside of this handler
        session.Store(todo);

        // By Wolverine.Http conventions, the first "return value" is always
        // assumed to be the Http response, and any subsequent values are
        // handled independently
        return (
            new TodoCreationResponse(todo.Id), 
            new TodoCreated(todo.Id)
        );
    }
}

The code above will actually generate the exact same OpenAPI documentation as the MVC Controller or Minimal API samples earlier in this post, but there’s significantly less boilerplate code needed to expose that information. Instead, Wolverine.Http relies on type signatures to “know” what the OpenAPI metadata for an endpoint should be. In conjunction with Wolverine’s Marten integration (or Wolverine’s EF Core integration too!), you potentially get a very low ceremony approach to writing HTTP services that also utilizes Wolverine’s durable outbox without giving up anything in regards to crafting effective and accurate OpenAPI metadata about your services.

Learn more about Wolverine.Http in the documentation (that’s hopefully growing really soon).

Marten V6 is Out! And the road to Wolverine 1.0

Marten 6.0 came out last week. Rather than describe it here, just take a look at Oskar’s killer release notes write up on GitHub for V6. This also includes some updates to the Marten documentation website. Oskar led the charge on this release, so big thanks are due to him, in no small part for taking the brunt of the “Critter Stack” Discord rooms and allowing me to focus on Wolverine. The healthiness of the Marten community shows up in the slew of new contributors in this release.

With Marten 6.0 out, it’s on to finally getting to Wolverine 1.0:

Wolverine has lingered for way, way too long for my taste in a pre-1.0 status, but it’s getting closer. A couple weeks ago I felt like Wolverine 1.0 was very close as soon as the documentation was updated, but then I kept hearing repeated feedback about how early adopters want or need first class database multi-tenancy support as part of their Wolverine + Marten experience — and a lesser number wanting some sort of EF Core + Wolverine multi-tenancy, but I’m going to put that aside just for now.

Cool, so I started jotting down what first class support for multi-tenancy through multiple databases was going to entail:

  • Some way to communicate the message tenant information through to Wolverine with message metadata. Easy money, that didn’t take much.
  • A little bit of change to the Marten transactional middleware in Wolverine to be tenant aware. Cool, that’s pretty small. Especially after a last minute change I made in Marten 6.0 specifically to support Wolverine.
  • Uh, oh, the durable inbox/outbox support in Wolverine will require specific table storage in every single tenant database, and you’d probably also want an “any tenant” master database as well for transactions that aren’t for a specific tenant. Right off the bat, this is much more complex than the other bullet points above. Wolverine could try to stretch its current “durability agent” strategy for multiple databases, but it’s a little too greedy on database connection usage and I was getting some feedback from potential users who were concerned by exactly that issue. At that point, I thought it would be helpful to reduce the connection usage, which…
  • Led me to wanting an approach where only one running node was processing the inbox/outbox recovery instead of each node hammering the database with advisory locks to figure out if anything needed to be recovered from previous nodes that shut down before finishing their work. Which now led me to wanting…
  • Some kind of leadership election in Wolverine, which now means that Wolverine needs durable storage for all the active nodes and the assignments to each node — which is functionality I wanted to build out soon regardless for Marten’s “async projection” scalability.

So to get the big leadership election, durability agent assignment across nodes, and finally back to the multi-tenancy support in Wolverine, I’ve got a bit of work to get through. It’s going well so far, but it’s time consuming because of the sheer number of details and the necessity of rigorously testing bit by bit before trying to put it all together end to end.

There are a few other loose ends for Wolverine 1.0, but the work described up above is the main battle right now before Wolverine efforts shift to documentation and finally a formal 1.0 release. Famous last words of a fool, but I’m hoping to roll out Wolverine 1.0 during the NDC Oslo conference in a couple weeks.

Isolating Side Effects from Wolverine Handlers

For easier unit testing, it’s often valuable to separate the responsibility of “deciding” what to do from the actual “doing.” The side effect facility in Wolverine is an example of this strategy. You will need Wolverine 0.9.17, which just dropped, for this feature.

At times, you may wish to make Wolverine message handlers (or HTTP endpoints) pure functions as a way of making the handler code itself easier to test or even just to understand. All the same, your application will almost certainly be interacting with the outside world of databases, file systems, and external infrastructure of all types. Not to worry though, Wolverine has some facility to allow you to declare side effects as return values from your handler.

To make this concrete, let’s say that we’re building a message handler that will take in some textual content and an id, and then try to write that text to a file at a certain path. In our case, we want to be able to easily unit test the logic that “decides” what content should be written and to what file path, without ever touching the actual file system (which is notoriously irritating to use in tests).

First off, I’m going to create a new “side effect” type for writing a file like this:

// ISideEffect is a Wolverine marker interface
public class WriteFile : ISideEffect
{
    public string Path { get; }
    public string Contents { get; }

    public WriteFile(string path, string contents)
    {
        Path = path;
        Contents = contents;
    }

    // Wolverine will call this method. 
    public Task ExecuteAsync(PathSettings settings)
    {
        if (!Directory.Exists(settings.Directory))
        {
            Directory.CreateDirectory(settings.Directory);
        }
        
        return File.WriteAllTextAsync(Path, Contents);
    }
}

And the matching message type, message handler, and a settings class for configuration:

// An options class
public class PathSettings
{
    public string Directory { get; set; } 
        = Environment.CurrentDirectory.AppendPath("files");
}

public record RecordText(Guid Id, string Text);

public class RecordTextHandler
{
    public WriteFile Handle(RecordText command)
    {
        return new WriteFile(command.Id + ".txt", command.Text);
    }
}
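A payoff of this split is that the “decide” step can be unit tested with simple state-based assertions and no file system at all. Here’s a minimal sketch of such a test; it re-declares stripped-down stand-ins for the types above so it runs standalone, so the shapes here are illustrative rather than copied from Wolverine itself:

```csharp
using System;

// Minimal stand-ins for the types above, just enough to show
// state-based testing of the "decide" step with no file system
public record RecordText(Guid Id, string Text);

public class WriteFile
{
    public string Path { get; }
    public string Contents { get; }

    public WriteFile(string path, string contents)
    {
        Path = path;
        Contents = contents;
    }
}

public class RecordTextHandler
{
    public WriteFile Handle(RecordText command)
        => new WriteFile(command.Id + ".txt", command.Text);
}

public static class Program
{
    public static void Main()
    {
        var id = Guid.NewGuid();

        // Pure function in, plain object out -- just assert on the state
        var file = new RecordTextHandler().Handle(new RecordText(id, "hello"));

        Console.WriteLine(file.Path == id + ".txt");  // True
        Console.WriteLine(file.Contents == "hello");  // True
    }
}
```

No mocks, no fake file system, just an object you can inspect.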

At runtime, Wolverine is generating this code to handle the RecordText message:

    public class RecordTextHandler597515455 : Wolverine.Runtime.Handlers.MessageHandler
    {
        public override System.Threading.Tasks.Task HandleAsync(Wolverine.Runtime.MessageContext context, System.Threading.CancellationToken cancellation)
        {
            var recordTextHandler = new CoreTests.Acceptance.RecordTextHandler();
            var recordText = (CoreTests.Acceptance.RecordText)context.Envelope.Message;
            var pathSettings = new CoreTests.Acceptance.PathSettings();
            var outgoing1 = recordTextHandler.Handle(recordText);
            
            // Placed by Wolverine's ISideEffect policy
            return outgoing1.ExecuteAsync(pathSettings);
        }
    }

To explain what is happening up above: when Wolverine sees that a return value from a message handler implements the Wolverine.ISideEffect interface, Wolverine knows that the value should have a method named either Execute() or ExecuteAsync() that will be executed instead of treating the return value as a cascaded message. The method discovery is completely by method name, and it’s perfectly legal to use arguments of any of the same types available to the actual message handler, like:

  • Service dependencies from the application’s IoC container
  • The actual message
  • Any objects created by middleware
  • CancellationToken
  • Message metadata from Envelope

Taking this functionality farther, here’s a new example from the WolverineFx.Marten library that exploits this new side effect model to allow you to start event streams or store/insert/update documents from a side effect return value without having to directly touch Marten‘s IDocumentSession:

public static class StartStreamMessageHandler
{
    // This message handler is creating a brand new Marten event stream
    // of aggregate type NamedDocument. No services, no async junk,
    // pure function mechanics. You could unit test the method by doing
    // state based assertions on the StartStream object coming back out
    public static StartStream Handle(StartStreamMessage message)
    {
        return MartenOps.StartStream<NamedDocument>(message.Id, new AEvent(), new BEvent());
    }
    
    public static StartStream Handle(StartStreamMessage2 message)
    {
        return MartenOps.StartStream<NamedDocument>(message.Id, new CEvent(), new BEvent());
    }
}

As I get a little more time and maybe ambition, I want to start blogging more about how Wolverine is quite different from the “IHandler of T” model tools like MediatR, MassTransit, or NServiceBus. The “pure function” usage above potentially makes for a big benefit in terms of testability and longer term maintainability.