As I’d first announced almost *gulp* two months ago, JasperFx Software LLC is officially open for business. In collaboration with the other core Marten team members (Oskar & Babu), we are planning to productize and provide professional services around the “Critter Stack” tools (Marten and Wolverine) that provide a high level of productivity and robustness for server side .NET applications. We’ll soon be able to offer formal support contract agreements for Marten and Wolverine, and we’re already working with early customers on improving these tools.
We’re also happy to help you as consultants for your own software initiatives on whatever technical stack you happen to be using. Maybe you have a legacy system that could use some modernization love, wish your automated testing infrastructure was more successful, want to adopt Test Driven Development but don’t know where to start, or you’re starting a new system and just want help getting it right. If any of that describes where you’re at, JasperFx can help.
I’m weaseling my way into a second blog post about a code sample that I mostly borrowed, just to meet my unofficial goal of 2-3 posts a week promoting the Critter Stack.
using DailyAvailability = System.Collections.Generic.IReadOnlyList<Booking.RoomReservations.GettingRoomTypeAvailability.DailyRoomTypeAvailability>;

namespace Booking.RoomReservations.ReservingRoom;

public record ReserveRoomRequest(
    RoomType RoomType,
    DateOnly From,
    DateOnly To,
    string GuestId,
    int NumberOfPeople
);

public static class ReserveRoomEndpoint
{
    // More on this in a second...
    public static async Task<DailyAvailability> LoadAsync(
        ReserveRoomRequest request,
        IDocumentSession session)
    {
        // Look up the availability of this room type during the requested period
        return (await session.QueryAsync(new GetRoomTypeAvailabilityForPeriod(request))).ToList();
    }

    [WolverinePost("/api/reservations")]
    public static (CreationResponse, StartStream<RoomReservation>) Post(
        ReserveRoomRequest command,
        DailyAvailability dailyAvailability)
    {
        // Make sure there is availability for every day
        if (dailyAvailability.Any(x => x.AvailableRooms == 0))
        {
            throw new InvalidOperationException("Not enough available rooms!");
        }

        var reservationId = CombGuidIdGeneration.NewGuid().ToString();

        // I copied this, but I'd probably eliminate the record usage in favor
        // of init only properties so you can make the potentially error prone
        // mapping easier to troubleshoot in the future
        // That folks is the voice of experience talking
        var reserved = new RoomReserved(
            reservationId,
            null,
            command.RoomType,
            command.From,
            command.To,
            command.GuestId,
            command.NumberOfPeople,
            ReservationSource.Api,
            DateTimeOffset.UtcNow
        );

        return (
            // This would be the response body, and this also helps Wolverine
            // to create OpenAPI metadata for the endpoint
            new CreationResponse($"/api/reservations/{reservationId}"),
            // This return value is recognized by Wolverine as a "side effect"
            // that will be processed as part of a Marten transaction
            new StartStream<RoomReservation>(reservationId, reserved)
        );
    }
}
The original intent of that code sample was to show off how the full “critter stack” (Marten & Wolverine together) enables relatively low ceremony code that also promotes a high degree of testability, and does all of that without requiring developers to invest a lot of time in complicated, prescriptive architectures like a typical Clean Architecture structure.
Specifically today though, I want to zoom in on “testability” and talk about how Wolverine explicitly encourages code that exhibits what Jim Shore famously called the “A Frame Architecture” in its message handlers, but does so with functional decomposition rather than oodles of abstractions and layers.
Using the “A-Frame Architecture”, you roughly want to divide your code into three sets of functionality:
The domain logic for your system, which I would say includes “deciding” what actions to take next.
Infrastructural service providers
Conductor or mediator code that invokes both the infrastructure and domain logic code to decouple the domain logic from infrastructure code
In the message handler above for the `ReserveRoomRequest` command, Wolverine itself acts as the “glue” around the methods of the HTTP handler code, keeping the domain logic (the ReserveRoomEndpoint.Post() method that “decides” what event should be captured) separate from the raw Marten infrastructure that loads existing data and persists changes back to the database.
To illustrate that in action, here’s the full generated code that Wolverine compiles to actually handle the full HTTP request (with some explanatory annotations I made by hand):
public class POST_api_reservations : Wolverine.Http.HttpHandler
{
    private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
    private readonly Marten.ISessionFactory _sessionFactory;

    public POST_api_reservations(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Marten.ISessionFactory sessionFactory) : base(wolverineHttpOptions)
    {
        _wolverineHttpOptions = wolverineHttpOptions;
        _sessionFactory = sessionFactory;
    }

    public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
    {
        await using var documentSession = _sessionFactory.OpenSession();
        var (command, jsonContinue) = await ReadJsonAsync<Booking.RoomReservations.ReservingRoom.ReserveRoomRequest>(httpContext);
        if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;

        // Wolverine has a convention to call methods named
        // "LoadAsync()" before the main endpoint method, and
        // to pipe data returned from this "Before" method
        // to the parameter inputs of the main method
        // as that actually makes sense
        var dailyRoomTypeAvailabilityIReadOnlyList = await Booking.RoomReservations.ReservingRoom.ReserveRoomEndpoint.LoadAsync(command, documentSession).ConfigureAwait(false);

        // Call the "real" HTTP handler method.
        // The first value is the HTTP response body
        // The second value is a "side effect" that
        // will be part of the transaction around this
        (var creationResponse, var startStream) = Booking.RoomReservations.ReservingRoom.ReserveRoomEndpoint.Post(command, dailyRoomTypeAvailabilityIReadOnlyList);

        // Placed by Wolverine's ISideEffect policy
        startStream.Execute(documentSession);

        // This little ugly code helps get the correct
        // status code for creation for those of you
        // who can't be satisfied by using 200 for everything
        ((Wolverine.Http.IHttpAware)creationResponse).Apply(httpContext);

        // Commit any outstanding Marten changes
        await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);

        // Write the response body as JSON
        await WriteJsonAsync(httpContext, creationResponse);
    }
}
Wolverine itself acts as the mediator between the infrastructure concerns (loading & persisting data) and the business logic, which in Wolverine world becomes a pure function that is typically much easier to unit test than code with direct coupling to infrastructure concerns, even if that coupling is through abstractions.
Testing wise, if I were actually building a real endpoint like that shown above, I would choose to:
Unit test the Post() method itself by “pushing” inputs to it through the room availability and command data, then assert the expected outcome on the event published through the StartStream<RoomReservation> value returned by that method. That’s pure state-based testing for the easiest possible unit testing. As an aside, I would claim that this method is an example of the Decider pattern for testable event sourcing business logic code.
I don’t think I’d bother testing the LoadAsync() method by itself, but instead I’d opt to use something like Alba to write an end to end test at the HTTP layer to prove out the entire workflow, but only after the unit tests for the Post() method are all passing.
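To make that first point concrete, here’s a rough sketch of what those state-based unit tests for Post() could look like. Take this with a grain of salt: the construction of DailyRoomTypeAvailability, the RoomType value, and the way StartStream<RoomReservation> exposes its captured events are all assumptions on my part, since those details live outside the sample above:

```csharp
public class ReserveRoomEndpointTests
{
    private static ReserveRoomRequest theCommand() => new(
        RoomType.Single,                 // hypothetical enum value
        new DateOnly(2023, 10, 1),
        new DateOnly(2023, 10, 2),
        "guest-1",
        2);

    [Fact]
    public void reject_the_reservation_when_any_day_is_sold_out()
    {
        // Hypothetical shape for the availability projection
        var availability = new List<DailyRoomTypeAvailability>
        {
            new() { Date = new DateOnly(2023, 10, 1), AvailableRooms = 0 }
        };

        // Pure function call, no database or mocks in sight
        Should.Throw<InvalidOperationException>(
            () => ReserveRoomEndpoint.Post(theCommand(), availability));
    }

    [Fact]
    public void emit_a_room_reserved_event_when_rooms_are_available()
    {
        var availability = new List<DailyRoomTypeAvailability>
        {
            new() { Date = new DateOnly(2023, 10, 1), AvailableRooms = 3 }
        };

        var (response, startStream) = ReserveRoomEndpoint.Post(theCommand(), availability);

        // Assuming StartStream<T> exposes the events it was handed
        var reserved = startStream.Events.OfType<RoomReserved>().Single();
        reserved.GuestId.ShouldBe("guest-1");
    }
}
```

Notice that both tests are “push the inputs in, assert on the outputs” with zero infrastructure setup.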
Responsibility Driven Design
While the “A-Frame Architecture” metaphor is a relatively recent influence upon my design thinking, I’ve long been a proponent of Responsibility Driven Design (RDD) as explained by Rebecca Wirfs-Brock’s excellent A Brief Tour of Responsibility Driven Design. Don’t dismiss that paper because of its age, because the basic concepts and strategies for identifying different responsibilities in your system as a prerequisite for designing or structuring code put forth in that paper are absolutely useful even today.
Applying Responsibility Driven Design to the sample HTTP endpoint code above, I would say that:
The Marten IDocumentSession is a “service provider”
The Wolverine generated code acts as a “coordinator”
The Post() method is responsible for “deciding” what events should be emitted and persisted. One of the most helpful pieces of advice in RDD is to sometimes treat “deciding” to do an action as a separate responsibility from actually carrying out the action. That can lead to better isolating the decision making logic away from infrastructural concerns for easier testing
Wolverine 1.2.0 rolled out this morning with some enhancements for HTTP endpoints. In the realm of HTTP endpoints, Wolverine’s raison d’être is to finally deliver a development experience to .NET developers that requires very low code ceremony, maximizes testability, and does all of that with good performance. Between some feedback from early adopters and some repetitive boilerplate code I saw doing a code review for a client last week (woot, I’ve actually got paying clients now!), Wolverine.Http got a couple new tricks to speed you up.
First off, here’s a common pattern in HTTP service development: based on a route argument, you first load some kind of entity from persistence. If the data is not found, return a 404 status code meaning the resource was not found, but otherwise continue working against the entity data you just loaded. Here’s a shorthand way of doing that now with Wolverine “compound handlers“:
public record UpdateRequest(string Name, bool IsComplete);

public static class UpdateEndpoint
{
    // Find required Todo entity for the route handler below
    public static Task<Todo?> LoadAsync(int id, IDocumentSession session)
        => session.LoadAsync<Todo>(id);

    [WolverinePut("/todos/{id:int}")]
    public static StoreDoc<Todo> Put(
        // Route argument
        int id,
        // The request body
        UpdateRequest request,
        // Entity loaded by the method above,
        // but note the [Required] attribute
        [Required] Todo? todo)
    {
        todo.Name = request.Name;
        todo.IsComplete = request.IsComplete;
        return MartenOps.Store(todo);
    }
}
You’ll notice that the LoadAsync() method is looking up the Todo entity for the route parameter, and Wolverine would normally pass that value to the matching Todo parameter of the main Put() method. In this case though, because of the [Required] attribute, Wolverine.Http will stop processing with a 404 status code if the Todo cannot be found.
By contrast, here’s some sample code of a higher ceremony alternative that helped spawn this feature in the first place:
Note in the code above how the author had to pollute his code with attributes strictly for OpenAPI (Swagger) metadata because the valid response types cannot be inferred when you’re returning the IResult value that could frankly be just about anything in the world.
In the Wolverine 1.2 version above, Wolverine.Http is able to infer the exact same OpenAPI metadata as the busier Put() method in the image above. Also, and I think this is potentially valuable, the Wolverine 1.2 version turns the behavior into a purely synchronous version that is going to be mechanically easier to unit test.
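As a quick illustration of that testability claim, a unit test for Put() can be a plain synchronous call against in-memory objects. This sketch assumes Todo has a parameterless constructor and settable properties, which isn’t shown above:

```csharp
[Fact]
public void put_copies_request_values_onto_the_loaded_entity()
{
    var todo = new Todo { Name = "old name", IsComplete = false };
    var request = new UpdateRequest("new name", true);

    // No HTTP, no database session, no mock objects required
    var op = UpdateEndpoint.Put(5, request, todo);

    todo.Name.ShouldBe("new name");
    todo.IsComplete.ShouldBeTrue();

    // The StoreDoc<Todo> return value is the persistence side effect
    // that Wolverine will apply for us at runtime
    op.ShouldNotBeNull();
}
```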
So that’s required data, now let’s turn our attention to Wolverine’s new ProblemDetails support. While there is a Fluent Validation middleware package for Wolverine.Http that supports ProblemDetails in a generic way, I’m seeing usages where you just need to do some explicit validation for an HTTP endpoint. Wolverine 1.2 added this usage:
public class ProblemDetailsUsageEndpoint
{
    public ProblemDetails Before(NumberMessage message)
    {
        // If the number is greater than 5, fail with a
        // validation message
        if (message.Number > 5)
        {
            return new ProblemDetails
            {
                Detail = "Number is bigger than 5",
                Status = 400
            };
        }

        // All good, keep on going!
        return WolverineContinue.NoProblems;
    }

    [WolverinePost("/problems")]
    public static string Post(NumberMessage message)
    {
        return "Ok";
    }
}

public record NumberMessage(int Number);
Wolverine.Http now (as of 1.2.0) has a convention that recognizes a return value of ProblemDetails and treats it as a “continuation” that tells the HTTP handler code what to do next. One of two things will happen:
1. If the ProblemDetails return value is the same instance as WolverineContinue.NoProblems, just keep going
2. Otherwise, write the ProblemDetails out to the HTTP response and exit the HTTP request handling
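Because the continuation is nothing but a return value, both branches of Before() can be unit tested with plain method calls. A sketch, using Shouldly-style assertions:

```csharp
public class ProblemDetailsUsageEndpointTests
{
    [Fact]
    public void small_numbers_continue_processing()
    {
        var endpoint = new ProblemDetailsUsageEndpoint();

        // The "all clear" continuation is a well-known instance
        endpoint.Before(new NumberMessage(3))
            .ShouldBeSameAs(WolverineContinue.NoProblems);
    }

    [Fact]
    public void big_numbers_fail_with_a_400_problem()
    {
        var endpoint = new ProblemDetailsUsageEndpoint();

        var problems = endpoint.Before(new NumberMessage(10));

        problems.Status.ShouldBe(400);
        problems.Detail.ShouldBe("Number is bigger than 5");
    }
}
```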
Just as in the first [Required] usage, Wolverine is able to infer OpenAPI metadata about your endpoint to add a “produces `application/problem+json` with a 400 status code” item. And for those of you who like to get fancier or more specific with your HTTP status code usage, you can happily override that behavior with your own metadata attributes like so:
// Use 418 as the status code instead
[ProducesResponseType(typeof(ProblemDetails), 418)]
I had tentatively promised to do a full “critter stack” version of Oskar’s sample application in his Vertical Slices in Practice post last week that used Marten‘s event sourcing support. I started doing that this morning, but quit because it was just coming out too similar to my earlier post this week on Low Ceremony Vertical Slice Architecture with Wolverine.
In Oskar’s sample reservation booking application, there was an HTTP endpoint that handled a ReserveRoomRequest command and emitted a new RoomReserved event for a new RoomReservation event stream. Part of that processing was validating the availability of rooms of the requested type during the time period of the reservation request. Just for reference, here’s my version of Oskar’s ReserveRoomEndpoint:
using DailyAvailability = System.Collections.Generic.IReadOnlyList<Booking.RoomReservations.GettingRoomTypeAvailability.DailyRoomTypeAvailability>;

namespace Booking.RoomReservations.ReservingRoom;

public record ReserveRoomRequest(
    RoomType RoomType,
    DateOnly From,
    DateOnly To,
    string GuestId,
    int NumberOfPeople
);

public static class ReserveRoomEndpoint
{
    // More on this in a second...
    public static async Task<DailyAvailability> LoadAsync(
        ReserveRoomRequest request,
        IDocumentSession session)
    {
        // Look up the availability of this room type during the requested period
        return (await session.QueryAsync(new GetRoomTypeAvailabilityForPeriod(request))).ToList();
    }

    [WolverinePost("/api/reservations")]
    public static (CreationResponse, StartStream<RoomReservation>) Post(
        ReserveRoomRequest command,
        DailyAvailability dailyAvailability)
    {
        // Make sure there is availability for every day
        if (dailyAvailability.Any(x => x.AvailableRooms == 0))
        {
            throw new InvalidOperationException("Not enough available rooms!");
        }

        var reservationId = CombGuidIdGeneration.NewGuid().ToString();

        // I copied this, but I'd probably eliminate the record usage in favor
        // of init only properties so you can make the potentially error prone
        // mapping easier to troubleshoot in the future
        // That folks is the voice of experience talking
        var reserved = new RoomReserved(
            reservationId,
            null,
            command.RoomType,
            command.From,
            command.To,
            command.GuestId,
            command.NumberOfPeople,
            ReservationSource.Api,
            DateTimeOffset.UtcNow
        );

        return (
            // This would be the response body, and this also helps Wolverine
            // to create OpenAPI metadata for the endpoint
            new CreationResponse($"/api/reservations/{reservationId}"),
            // This return value is recognized by Wolverine as a "side effect"
            // that will be processed as part of a Marten transaction
            new StartStream<RoomReservation>(reservationId, reserved)
        );
    }
}
For this post, I’d like you to focus on the LoadAsync() method above. That’s utilizing Wolverine’s compound handler technique to split out the data loading so that the actual endpoint Post() method can be a pure function that’s easily unit tested by just “pushing” in the inputs and asserting on either the values returned or the presence of an exception in the validation logic.
Back to that LoadAsync() method. Let’s assume that this HTTP service is going to be under quite a bit of load and it wouldn’t hurt to apply some performance optimization. Or also imagine that the data querying to find the room availability of a certain room type and a time period will be fairly common within the system at large. I’m saying all that to justify the usage of Marten’s compiled query feature as shown below:
public class GetRoomTypeAvailabilityForPeriod : ICompiledListQuery<DailyRoomTypeAvailability>
{
    // Sorry, but this signature is necessary for the Marten mechanics
    public GetRoomTypeAvailabilityForPeriod()
    {
    }

    public GetRoomTypeAvailabilityForPeriod(ReserveRoomRequest request)
    {
        RoomType = request.RoomType;
        From = request.From;
        To = request.To;
    }

    public RoomType RoomType { get; set; }
    public DateOnly From { get; set; }
    public DateOnly To { get; set; }

    public Expression<Func<IMartenQueryable<DailyRoomTypeAvailability>, IEnumerable<DailyRoomTypeAvailability>>> QueryIs()
    {
        return q => q.Where(day => day.RoomType == RoomType && day.Date >= From && day.Date <= To);
    }
}
First of all, this is Marten’s version of the Query Object pattern, which enables you to share the query definition in declarative ways throughout the codebase. (I’ve heard other folks call this a “Specification,” but that name is overloaded a bit too much in the software development world.) Removing duplication is certainly a good thing all by itself. Doing so in a way that eliminates the need for extra repository abstractions is also a win in my book.
Secondly, by using the “compiled query,” Marten is able to cache the whole execution plan in memory (technically it’s generating code at runtime) for faster runtime execution. The dirty, barely recognized fact in .NET development today is that the act of parsing LINQ statements and converting the intermediate query model into actionable SQL and glue code is not cheap. Marten compiled queries sidestep all that preliminary parsing junk and let you skip right to the execution part.
It’s a possibly underused and under-appreciated feature within Marten, but compiled queries are a great way to optimize your system’s performance and possibly clean up code duplication in simple ways.
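To show the reuse angle, the same query object can be executed anywhere you have a Marten session, either through the request-based constructor or by setting the properties directly (RoomType.Single below is a hypothetical enum value):

```csharp
// Reusing the query exactly as the endpoint's LoadAsync() method does
var fromRequest = await session.QueryAsync(new GetRoomTypeAvailabilityForPeriod(request));

// Or building it up directly somewhere else in the system
var query = new GetRoomTypeAvailabilityForPeriod
{
    RoomType = RoomType.Single,
    From = new DateOnly(2023, 10, 1),
    To = new DateOnly(2023, 10, 3)
};
var fromProperties = await session.QueryAsync(query);
```

Either way, the Where() clause lives in exactly one place, and Marten only pays the LINQ parsing cost once.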
TL;DR: Wolverine can enable you to write testable code and achieve separation of concerns in your server side code with far less code ceremony than typical Clean Architecture type approaches.
I’m part of the mini-backlash against heavyweight, prescriptively layered architectural patterns like the various flavors of Hexagonal Architecture. I even did a whole talk on that subject at NDC Oslo this year:
Instead, I’m a big fan of keeping closely related code closer together with something like what Jimmy Bogard coined as Vertical Slices. Conveniently enough, I happen to think that Wolverine is a good fit for that style.
From a conference talk I did early last year, I started to build out a sample “TeleHealth Portal” system using the full “critter stack” with both Marten for persistence and event sourcing and Wolverine for everything else. Inside of this fictional TeleHealth system there will be a web service that adds a healthcare provider to an active board of related appointment requests (as an example, you might have a board for pediatric appointments in the state of Texas). When this web service executes, it needs to:
Find the related information about the requested, active Board and the Provider
Validate that the provider in question is able to join the active board based on various business rules like “is this provider licensed in this particular state and for some specialty?”. If the validation fails, the web service should return the validation message with the ProblemDetails specification
Assuming the validation is good, start a new event stream with Marten for a ProviderShift that will track what the provider does during their active shift on that board for that specific day
I’ll need to add a little more context afterward for some application configuration, but here’s that functionality in one single Wolverine.Http endpoint class — with the assumption that the heavy duty business logic for validating the provider & board assignment is in the business domain model:
public record StartProviderShift(Guid BoardId, Guid ProviderId);

public record ShiftStartingResponse(Guid ShiftId) : CreationResponse("/shift/" + ShiftId);

public static class StartProviderShiftEndpoint
{
    // This would be called before the method below
    public static async Task<(Board, Provider, IResult)> LoadAsync(StartProviderShift command, IQuerySession session)
    {
        // You could get clever and batch the queries to Marten
        // here, but let that be a later optimization step
        var board = await session.LoadAsync<Board>(command.BoardId);
        var provider = await session.LoadAsync<Provider>(command.ProviderId);

        if (board == null || provider == null) return (board, provider, Results.BadRequest());

        // This just means "full speed ahead"
        return (board, provider, WolverineContinue.Result());
    }

    [WolverineBefore]
    public static IResult Validate(Provider provider, Board board)
    {
        // Check if you can proceed to add the provider to the board
        // This logic is out of the scope of this sample :)
        if (provider.CanJoin(board))
        {
            // Again, this value tells Wolverine to keep processing
            // the HTTP request
            return WolverineContinue.Result();
        }

        // No soup for you!
        var problems = new ProblemDetails
        {
            Detail = "Provider is ineligible to join this Board",
            Status = 400,
            Extensions =
            {
                [nameof(StartProviderShift.ProviderId)] = provider.Id,
                [nameof(StartProviderShift.BoardId)] = board.Id
            }
        };

        // Wolverine will execute this IResult
        // and stop all other HTTP processing
        return Results.Problem(problems);
    }

    [WolverinePost("/shift/start")]
    // In the tuple that's returned below,
    // the first ShiftStartingResponse value is assumed by Wolverine to be the
    // HTTP response body
    // The subsequent IStartStream value is executed as a side effect by Wolverine
    public static (ShiftStartingResponse, IStartStream) Create(StartProviderShift command, Board board, Provider provider)
    {
        var started = new ProviderJoined(board.Id, provider.Id);
        var op = MartenOps.StartStream<ProviderShift>(started);
        return (new ShiftStartingResponse(op.StreamId), op);
    }
}
And there are a few things I’d ask you to notice in the code above:
It’s just one class in one file that’s largely using functional decomposition to establish separation of concerns
Wolverine.Http is able to call the various methods in order from top to bottom, passing the loaded data from LoadAsync() to Validate() and finally on to the Create() method
I didn’t bother with any kind of repository abstraction around the data loading in the first step
The Validate() method is a pure function that’s suitable for easy unit testing of the validation logic
The Create() method is also a pure, synchronous function that’s going to be easy to unit test as you can do assertions on the events contained in the IStartStream object
Wolverine’s Marten integration is able to do the actual persistence of the new event stream for ProviderShift for you and deal with all the icky asynchronous junk
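To back up those bullet points, here’s what a unit test against the pure Create() function could look like. The construction of Board and Provider is hypothetical since their definitions aren’t shown here:

```csharp
[Fact]
public void create_returns_response_pointing_at_the_new_shift_stream()
{
    // Hypothetical construction; the real Board & Provider types aren't shown
    var board = new Board { Id = Guid.NewGuid() };
    var provider = new Provider { Id = Guid.NewGuid() };
    var command = new StartProviderShift(board.Id, provider.Id);

    // Pure, synchronous call -- no Marten session involved
    var (response, op) = StartProviderShiftEndpoint.Create(command, board, provider);

    // The response body's ShiftId matches the id of the
    // new event stream that Wolverine will start for us
    response.ShiftId.ShouldBe(op.StreamId);
}
```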
For more context, here’s the (butt ugly) code that Wolverine generates for the HTTP endpoint:
public class POST_shift_start : Wolverine.Http.HttpHandler
{
    private readonly Wolverine.Http.WolverineHttpOptions _options;
    private readonly Marten.ISessionFactory _sessionFactory;

    public POST_shift_start(Wolverine.Http.WolverineHttpOptions options, Marten.ISessionFactory sessionFactory) : base(options)
    {
        _options = options;
        _sessionFactory = sessionFactory;
    }

    public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
    {
        await using var documentSession = _sessionFactory.OpenSession();
        await using var querySession = _sessionFactory.QuerySession();
        var (command, jsonContinue) = await ReadJsonAsync<TeleHealth.WebApi.StartProviderShift>(httpContext);
        if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;

        (var board, var provider, var loadResult) = await TeleHealth.WebApi.StartProviderShiftEndpoint.LoadAsync(command, querySession).ConfigureAwait(false);
        if (!(loadResult is Wolverine.Http.WolverineContinue))
        {
            await loadResult.ExecuteAsync(httpContext).ConfigureAwait(false);
            return;
        }

        var validateResult = TeleHealth.WebApi.StartProviderShiftEndpoint.Validate(provider, board);
        if (!(validateResult is Wolverine.Http.WolverineContinue))
        {
            await validateResult.ExecuteAsync(httpContext).ConfigureAwait(false);
            return;
        }

        (var shiftStartingResponse, var startStream) = TeleHealth.WebApi.StartProviderShiftEndpoint.Create(command, board, provider);

        // Placed by Wolverine's ISideEffect policy
        startStream.Execute(documentSession);

        ((Wolverine.Http.IHttpAware)shiftStartingResponse).Apply(httpContext);

        // Commit any outstanding Marten changes
        await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);

        await WriteJsonAsync(httpContext, shiftStartingResponse);
    }
}
In the application bootstrapping, I have Wolverine applying transactional middleware automatically:
builder.Host.UseWolverine(opts =>
{
    // more config...

    // Automatic usage of transactional middleware as
    // Wolverine recognizes that an HTTP endpoint or message handler
    // persists data
    opts.Policies.AutoApplyTransactions();
});
And the Wolverine/Marten integration configured as well:
builder.Services.AddMarten(opts =>
    {
        var connString = builder
            .Configuration
            .GetConnectionString("marten");

        opts.Connection(connString);

        // There will be more here later...
    })
    // I added this to enroll Marten in the Wolverine outbox
    .IntegrateWithWolverine()
    // I also added this to opt into events being forwarded to
    // the Wolverine outbox during SaveChangesAsync()
    .EventForwardingToWolverine();
I’ll even go farther and say that in many cases Wolverine will allow you to establish decent separation of concerns and testability with far less ceremony than is required today with high overhead approaches like the popular Clean Architecture style.
As long-term Agile practitioners, we folks behind the whole JasperFx / “Critter Stack” ecosystem explicitly design our tools around the quality of “testability.” Case in point, Wolverine has quite a few integration test helpers for testing through message handler execution.
However, while helping a Wolverine user last week, I learned that they were bypassing those built-in tools because they wanted to do an integration test of an HTTP service call that publishes a message to Wolverine. That’s certainly going to be a common scenario, so let’s talk about a strategy for reliably writing integration tests that both invoke an HTTP request and can observe the ongoing Wolverine activity to “know” when the “act” part of a typical “arrange, act, assert” test is complete.
In the Wolverine codebase itself, there’s a couple projects that we use to test the Wolverine.Http library:
WolverineWebApi — a web API project with a lot of fake endpoints that try to cover the whole gamut of usage scenarios for Wolverine.Http, including a couple use cases of publishing messages directly from HTTP endpoint handlers to asynchronous message handling inside of Wolverine core
Wolverine.Http.Tests — an xUnit.Net project that contains a mix of unit tests and integration tests through WolverineWebApi and Wolverine.Http itself
Back to the need to write integration tests that span work from HTTP service invocations through to Wolverine message processing: Wolverine.Http uses the Alba library (another JasperFx project!) to execute and run assertions against HTTP services. At least at the moment, xUnit.Net is my go-to test runner library, so Wolverine.Http.Tests has this fixture that is shared between test classes:
public class AppFixture : IAsyncLifetime
{
    public IAlbaHost Host { get; private set; }

    public async Task InitializeAsync()
    {
        // Sorry folks, but this is absolutely necessary if you
        // use Oakton for command line processing and want to
        // use WebApplicationFactory and/or Alba for integration testing
        OaktonEnvironment.AutoStartHost = true;

        // This is bootstrapping the actual application using
        // its implied Program.Main() set up
        Host = await AlbaHost.For<Program>(x => { });
    }

    public async Task DisposeAsync()
    {
        // Clean up the application host after the test run
        await Host.DisposeAsync();
    }
}
A couple notes on this approach:
I think it’s very important to use the actual application bootstrapping for the integration testing rather than trying to have a parallel IoC container configuration for test automation as I frequently see out in the wild. That doesn’t preclude customizing that bootstrapping a little bit to substitute in fake, stand in services for problematic external infrastructure.
The approach I’m showing here with xUnit.Net does have the effect of making the tests execute serially, which might not be what you want in very large test suites
I think the xUnit.Net shared fixture approach is somewhat confusing and I always have to review the documentation on it when I try to use it
There’s also a shared base class for integrated HTTP tests called IntegrationContext, with a little bit of that shown below:
[CollectionDefinition("integration")]
public class IntegrationCollection : ICollectionFixture<AppFixture>
{
}

[Collection("integration")]
public abstract class IntegrationContext : IAsyncLifetime
{
    private readonly AppFixture _fixture;

    protected IntegrationContext(AppFixture fixture)
    {
        _fixture = fixture;
    }

    // more....
More germane to this particular post, here’s a helper method inside of IntegrationContext I wrote specifically to do integration testing that has to span an HTTP request through to asynchronous Wolverine message handling:
// This method allows us to make HTTP calls into our system
// in memory with Alba, but do so within Wolverine's test support
// for message tracking to both record outgoing messages and to ensure
// that any cascaded work spawned by the initial command is completed
// before passing control back to the calling test
protected async Task<(ITrackedSession, IScenarioResult)> TrackedHttpCall(Action<Scenario> configuration)
{
    IScenarioResult result = null;

    // The outer part is tying into Wolverine's test support
    // to "wait" for all detected message activity to complete
    var tracked = await Host.ExecuteAndWaitAsync(async () =>
    {
        // The inner part here is actually making an HTTP request
        // to the system under test with Alba
        result = await Host.Scenario(configuration);
    });

    return (tracked, result);
}
Now, for a sample usage of that test helpers, here’s a fake endpoint from WolverineWebApi that I used to prove that Wolverine.Http endpoints can publish messages through Wolverine’s cascading message approach:
// This would have a string response and a 200 status code
[WolverinePost("/spawn")]
public static (string, OutgoingMessages) Post(SpawnInput input)
{
    var messages = new OutgoingMessages
    {
        new HttpMessage1(input.Name),
        new HttpMessage2(input.Name),
        new HttpMessage3(input.Name),
        new HttpMessage4(input.Name)
    };

    return ("got it", messages);
}
Psst. Notice how the endpoint method’s signature up above is a synchronous pure function which is cleaner and easier to unit test than the equivalent functionality would be in other .NET frameworks that would have required you to call asynchronous methods on some kind of IMessageBus interface.
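That claim is easy to demonstrate before even involving HTTP. A sketch of a plain unit test, assuming SpawnInput exposes the Name value it’s constructed with and that OutgoingMessages is enumerable (as its collection initializer above suggests):

```csharp
[Fact]
public void post_returns_body_and_cascades_four_messages()
{
    // Straight method call, no bus, no mocks
    var (body, messages) = Post(new SpawnInput("Chris Jones"));

    body.ShouldBe("got it");

    // One message of each type, all carrying the input name
    messages.OfType<HttpMessage1>().Single().Name.ShouldBe("Chris Jones");
    messages.OfType<HttpMessage2>().Single().Name.ShouldBe("Chris Jones");
    messages.OfType<HttpMessage3>().Single().Name.ShouldBe("Chris Jones");
    messages.OfType<HttpMessage4>().Single().Name.ShouldBe("Chris Jones");
}
```

That covers the pure logic; the integration test below proves the same behavior all the way through the HTTP and messaging pipeline.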
To test this thing, I want to run an HTTP POST to the “/spawn” URL in our application, then prove that there were four matching messages published through Wolverine. Here’s the test for that functionality using our earlier TrackedHttpCall() testing helper:
[Fact]
public async Task send_cascaded_messages_from_tuple_response()
{
// This would fail if the status code != 200 btw
// This method waits until *all* detectable Wolverine message
// processing has completed
var (tracked, result) = await TrackedHttpCall(x =>
{
x.Post.Json(new SpawnInput("Chris Jones")).ToUrl("/spawn");
});
result.ReadAsText().ShouldBe("got it");
// "tracked" is a Wolverine ITrackedSession object that lets us interrogate
// what messages were published, sent, and handled during the testing period
tracked.Sent.SingleMessage<HttpMessage1>().Name.ShouldBe("Chris Jones");
tracked.Sent.SingleMessage<HttpMessage2>().Name.ShouldBe("Chris Jones");
tracked.Sent.SingleMessage<HttpMessage3>().Name.ShouldBe("Chris Jones");
tracked.Sent.SingleMessage<HttpMessage4>().Name.ShouldBe("Chris Jones");
}
There you go. In one fell swoop, we’ve got a reliable way to do integration testing against asynchronous behavior in our system that’s triggered by an HTTP service call, including any and all configured ASP.NET Core or Wolverine.Http middleware that’s part of the execution pipeline.
By “reliable” here with regard to integration testing, think about any reasonably complicated Selenium test suite and how infuriatingly often you get “blinking” tests caused by race conditions between some kind of asynchronous behavior and the test harness trying to make assertions against the browser state. Wolverine’s built-in integration test support can eliminate that kind of inconsistent test behavior by removing the race condition, since it tracks all ongoing work for completion.
Oh, and the “Chris Jones” in the test data is a nod to Chris Jones sacking Joe Burrow in the AFC Championship game to seal the Chiefs’ win, which was fresh in my mind when I originally wrote that code.
I’m frequently (and logically) asked how Wolverine differs from the plethora of existing tooling in the .NET space for asynchronous messaging or in-memory mediator tools. I’d argue that the biggest difference is how you, the user of Wolverine, go about writing the message handler (or HTTP endpoint) code that will be called by Wolverine at runtime.
All of the existing frameworks that I’m currently aware of are what I call “IHandler of T” frameworks, meaning that one way or another you have to constrain your message/event/command handling code behind some kind of mandatory framework interface like this:
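Something along these lines — a hypothetical sketch, where the interface and handler names are illustrative rather than any specific framework’s actual API:

```csharp
// A representative "IHandler of T" abstraction. Your handler class
// must implement this framework-mandated interface before the
// framework will discover and invoke it at runtime.
public interface IHandler<T>
{
    Task HandleAsync(T message, CancellationToken cancellation);
}

// Which forces even trivial handlers into this shape:
public class CreateInvoiceHandler : IHandler<CreateInvoice>
{
    public Task HandleAsync(CreateInvoice message, CancellationToken cancellation)
    {
        // the actual business logic goes here...
        return Task.CompletedTask;
    }
}
```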
Wolverine takes a very different approach to your message handler code by letting you write the simplest possible handler method signature while Wolverine dynamically creates its “IHandler of T” behind the scenes. By and large, Wolverine is trying to let you write your message handler code as pure functions that are much easier and more effective to unit test than traditional .NET message handler code.
Jumping to an example that came up in the Wolverine Discord room last week, let’s say you’re building a dashboard application where the server side will be constantly broadcasting update messages to the client via web sockets using SignalR, and you’re using Wolverine on the back end. Wolverine’s built-in cascading messages feature would be a nice fit for this exact kind of system, but Wolverine doesn’t yet have a SignalR transport (it will at some point this year). Instead, let’s customize Wolverine’s execution pipeline a little bit so we can return web socket bound messages directly from our Wolverine message handlers without having to inject any kind of SignalR service (or gateway around one).
First though, let’s say that all client bound messages from the server to the client will implement this little interface:
// Setting this up for usage with Redux style
// state management on the client side
public interface IClientMessage
{
[JsonPropertyName("type")]
public string TypeName => GetType().Name;
}
// This is just a "nullo" message that might
// be useful to mean "don't send anything in this case"
public record NoClientMessage : IClientMessage;
In the end, what I want to do is create a policy in Wolverine such that any “return value” from a Wolverine message handler or HTTP endpoint method that implements IClientMessage or IEnumerable<IClientMessage> will be sent via WebSockets instead of Wolverine trying to route these values through messaging. That leads us to having handler messages like this:
public record CountUpdated(int Value) : IClientMessage;
public record IncrementCount;
public static class SomeUpdateHandler
{
public static int Count = 0;
// We're trying to teach Wolverine to send CountUpdated
// return values via WebSockets instead of async
// message routing
public static CountUpdated Handle(IncrementCount command)
{
Count++;
return new CountUpdated(Count);
}
}
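Note what this buys us for testing: because the SignalR mechanics will live entirely in a Wolverine policy, the handler itself can be unit tested with no broadcasting infrastructure at all. A sketch, assuming xUnit and Shouldly:

```csharp
[Fact]
public void incrementing_returns_the_updated_count()
{
    // Reset the static counter for test isolation
    SomeUpdateHandler.Count = 0;

    var updated = SomeUpdateHandler.Handle(new IncrementCount());

    // The client-bound message is just a return value; nothing
    // about SignalR or Wolverine leaks into this test
    updated.Value.ShouldBe(1);
}
```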
So now onto the actual SignalR integration. I’ll add this simplistic Hub type in SignalR:
public class BroadcastHub : Hub
{
public Task SendBatchAsync(IClientMessage[] messages)
{
return Clients.All.SendAsync("Updates", JsonSerializer.Serialize(messages));
}
}
Having built one of these applications before and helped troubleshoot problems in several others, I know that it’s frequently useful to debounce or throttle updates to the client to make the JavaScript client more responsive. To that end, I’m going to add this little custom class that will be registered in our system as a singleton:
public class Broadcaster : IDisposable
{
private readonly BroadcastHub _hub;
private readonly ActionBlock<IClientMessage[]> _publishing;
private readonly BatchingBlock<IClientMessage> _batching;
public Broadcaster(BroadcastHub hub)
{
_hub = hub;
_publishing = new ActionBlock<IClientMessage[]>(messages => _hub.SendBatchAsync(messages),
new ExecutionDataflowBlockOptions
{
EnsureOrdered = true,
MaxDegreeOfParallelism = 1
});
// BatchingBlock is a Wolverine internal building block that's
// purposely public for this kind of usage.
// This will do the "debounce" for us
_batching = new BatchingBlock<IClientMessage>(250, _publishing);
}
public void Dispose()
{
_hub.Dispose();
_batching.Dispose();
}
public Task Post(IClientMessage? message)
{
return message is null or NoClientMessage
? Task.CompletedTask
: _batching.SendAsync(message);
}
public async Task PostMany(IEnumerable<IClientMessage> messages)
{
foreach (var message in messages.Where(x => x != null))
{
if (message is NoClientMessage) continue;
await _batching.SendAsync(message);
}
}
}
Switching to the application bootstrapping in the Program.Main() method of this application, I’m going to register a couple services:
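The registration snippet isn’t shown here, but based on the Broadcaster constructor above it would be something like the following. Treat this as illustrative: registering a Hub type directly as a singleton is unusual for SignalR, where you’d more typically inject an IHubContext<BroadcastHub>:

```csharp
// Register SignalR itself
builder.Services.AddSignalR();

// So the Broadcaster's constructor dependency can be resolved
// (illustrative; matches the constructor shown above)
builder.Services.AddSingleton<BroadcastHub>();

// The Broadcaster must be a singleton so that the debouncing
// BatchingBlock is shared across the entire application
builder.Services.AddSingleton<Broadcaster>();
```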
And add a SignalR route against the WebApplication for the system:
app.MapHub<BroadcastHub>("/updates");
Now, we need to craft a policy for Wolverine that will teach it how to generate code for our desired behavior for IClientMessage return values:
public class BroadcastClientMessages : IChainPolicy
{
public void Apply(IReadOnlyList<IChain> chains, GenerationRules rules, IContainer container)
{
// We're going to look through all known message handler and HTTP endpoint chains
// and see where there's any return values of IClientMessage or IEnumerable<IClientMessage>
// and apply our custom return value handling
foreach (var chain in chains)
{
foreach (var message in chain.ReturnVariablesOfType<IClientMessage>())
{
message.UseReturnAction(v =>
{
var call = MethodCall.For<Broadcaster>(x => x.Post(null!));
call.Arguments[0] = message;
return call;
});
}
foreach (var messages in chain.ReturnVariablesOfType<IEnumerable<IClientMessage>>())
{
messages.UseReturnAction(v =>
{
var call = MethodCall.For<Broadcaster>(x => x.PostMany(null!));
call.Arguments[0] = messages;
return call;
});
}
}
}
}
And add that new policy to our Wolverine application like so:
builder.Host.UseWolverine(opts =>
{
// Other configuration...
opts.Policies.Add<BroadcastClientMessages>();
});
Finally, let’s see the results. For the SomeUpdateHandler type that handled the `IncrementCount` message, Wolverine is now generating this code:
public class IncrementCountHandler1900628703 : Wolverine.Runtime.Handlers.MessageHandler
{
private readonly WolverineWebApi.WebSockets.Broadcaster _broadcaster;
public IncrementCountHandler1900628703(WolverineWebApi.WebSockets.Broadcaster broadcaster)
{
_broadcaster = broadcaster;
}
public override System.Threading.Tasks.Task HandleAsync(Wolverine.Runtime.MessageContext context, System.Threading.CancellationToken cancellation)
{
var incrementCount = (WolverineWebApi.WebSockets.IncrementCount)context.Envelope.Message;
var outgoing1 = WolverineWebApi.WebSockets.SomeUpdateHandler.Handle(incrementCount);
// Notice that the return value from the message handler
// is being broadcast to the outgoing SignalR
// Hub
return _broadcaster.Post(outgoing1);
}
}
And there it is: message handlers that need to send messages via WebSockets can now be written as pure functions that are generally much easier to test and carry far less code noise than the equivalent functionality in basically any other .NET messaging framework.
Just a short one for today, mostly to answer a question that came in earlier this week.
When using Wolverine.Http to expose HTTP endpoint services that end up capturing Marten events, you might have an endpoint coded like this one from the Wolverine tests that takes in a command message and tries to start a new Marten event stream for the Order aggregate:
[Transactional] // This can be omitted if you use auto-transactions
[WolverinePost("/orders/create4")]
public static (OrderStatus, IStartStream) StartOrder4(StartOrderWithId command)
{
var items = command.Items.Select(x => new Item { Name = x }).ToArray();
// This is unique to Wolverine (we think)
var startStream = MartenOps
.StartStream<Order>(command.Id,new OrderCreated(items));
return (
new OrderStatus(startStream.StreamId, false),
startStream
);
}
Where the command looks like this:
public record StartOrderWithId(Guid Id, string[] Items);
In the HTTP endpoint above, we’re:
Creating a new event stream for Order that uses the stream/order id sent in the command
Returning a response body of type OrderStatus to the caller
Using Wolverine’s Marten integration to also return an IStartStream object that integrated middleware will apply to Marten’s IDocumentSession (more on this in my next post because we think this is a big deal by itself).
Great, easy enough, right? Just to add some complexity: if the caller happens to send up the same, supposedly new order id additional times, Marten will throw an `ExistingStreamIdCollisionException` noting that, no, you can’t create a new stream with that id because one already exists.
Marten’s behavior helps protect the data from duplication, but what about making the HTTP response a little nicer by catching that exception automatically and returning a ProblemDetails body with a 400 Bad Request status code to denote exactly what happened?
While you actually could do that globally with a bit of ASP.NET Core middleware, that middleware applies everywhere at runtime, not just on the specific routes that could throw the exception. I’m not sure how big a deal this is to many of you, but ASP.NET Core middleware also has no impact on the OpenAPI descriptions of your endpoints, so it would be up to you to explicitly add attributes on each endpoint to document the error response.
Fortunately, Wolverine’s middleware strategy lets you target only the relevant routes and also add the OpenAPI descriptions to your API’s generated documentation. And it does so in a way that is arguably more efficient at runtime than the ASP.NET Core middleware approach anyway.
Jumping right into the deep end of the pool (I’m helping take my little ones swimming this afternoon and maybe thinking ahead), I’m going to build that policy like so:
public class StreamCollisionExceptionPolicy : IHttpPolicy
{
private bool shouldApply(HttpChain chain)
{
// TODO -- and Wolverine needs a utility method on IChain to make this declarative
// for future middleware construction
return chain
.HandlerCalls()
.SelectMany(x => x.Creates)
.Any(x => x.VariableType.CanBeCastTo<IStartStream>());
}
public void Apply(IReadOnlyList<HttpChain> chains, GenerationRules rules, IContainer container)
{
// Find *only* the HTTP routes where the route tries to create new Marten event streams
foreach (var chain in chains.Where(shouldApply))
{
// Add the middleware on the outside
chain.Middleware.Insert(0, new CatchStreamCollisionFrame());
// Alter the OpenAPI metadata to register the ProblemDetails
// path
chain.Metadata.ProducesProblem(400);
}
}
// Make the codegen easier by doing most of the work in this one method
public static Task RespondWithProblemDetails(ExistingStreamIdCollisionException e, HttpContext context)
{
var problems = new ProblemDetails
{
Detail = $"Duplicated id '{e.Id}'",
Extensions =
{
["Id"] = e.Id
},
Status = 400 // The default is 500, so watch this
};
return Results.Problem(problems).ExecuteAsync(context);
}
}
// This is the actual middleware that's injecting some code
// into the runtime code generation
internal class CatchStreamCollisionFrame : AsyncFrame
{
public override void GenerateCode(GeneratedMethod method, ISourceWriter writer)
{
writer.Write("BLOCK:try");
// Write the inner code here
Next?.GenerateCode(method, writer);
writer.FinishBlock();
writer.Write($@"
BLOCK:catch({typeof(ExistingStreamIdCollisionException).FullNameInCode()} e)
await {typeof(StreamCollisionExceptionPolicy).FullNameInCode()}.{nameof(StreamCollisionExceptionPolicy.RespondWithProblemDetails)}(e, httpContext);
return;
END
");
}
}
And apply the middleware to the application like so:
app.MapWolverineEndpoints(opts =>
{
// more configuration for HTTP...
opts.AddPolicy<StreamCollisionExceptionPolicy>();
});
And lastly, here’s a test using Alba that just verifies the behavior end to end by trying to create a new event stream with the same id multiple times:
[Fact]
public async Task use_stream_collision_policy()
{
var id = Guid.NewGuid();
// First time should be fine
await Scenario(x =>
{
x.Post.Json(new StartOrderWithId(id, new[] { "Socks", "Shoes", "Shirt" })).ToUrl("/orders/create4");
});
// Second time hits an exception from stream id collision
var result2 = await Scenario(x =>
{
x.Post.Json(new StartOrderWithId(id, new[] { "Socks", "Shoes", "Shirt" })).ToUrl("/orders/create4");
x.StatusCodeShouldBe(400);
});
// And let's verify that we got what we expected for the ProblemDetails
// in the HTTP response body of the 2nd request
var details = result2.ReadAsJson<ProblemDetails>();
Guid.Parse(details.Extensions["Id"].ToString()).ShouldBe(id);
details.Detail.ShouldBe($"Duplicated id '{id}'");
}
To maybe make it a little clearer what’s going on, Wolverine can always show you the generated code it uses for your HTTP endpoints, like this (I reformatted the code for legibility with Rider):
public class POST_orders_create4 : HttpHandler
{
private readonly WolverineHttpOptions _options;
private readonly ISessionFactory _sessionFactory;
public POST_orders_create4(WolverineHttpOptions options, ISessionFactory sessionFactory) : base(options)
{
_options = options;
_sessionFactory = sessionFactory;
}
public override async Task Handle(HttpContext httpContext)
{
await using var documentSession = _sessionFactory.OpenSession();
try
{
var (command, jsonContinue) = await ReadJsonAsync<StartOrderWithId>(httpContext);
if (jsonContinue == HandlerContinuation.Stop)
{
return;
}
var (orderStatus, startStream) = MarkItemEndpoint.StartOrder4(command);
// Placed by Wolverine's ISideEffect policy
startStream.Execute(documentSession);
// Commit any outstanding Marten changes
await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);
await WriteJsonAsync(httpContext, orderStatus);
}
catch (ExistingStreamIdCollisionException e)
{
await StreamCollisionExceptionPolicy.RespondWithProblemDetails(e, httpContext);
}
}
}
Starting this month, I’m going to blog openly about the ideas and directions that the “Critter Stack” team (Marten and Wolverine) and community are considering for where things go next. We’d love to hear any feedback or further suggestions about where this goes (and here’s the link to the Critter Stack Discord channel). We’re being mostly reactive to “what are our users struggling with?” and “what are folks telling us about what’s stopping them from adopting Marten or Wolverine?”
For the immediate future, I’m trying to get my act together to have a real business structure in place and be ready to start offering support or consulting contracts. I’m personally catching up on open Marten bugs, as I’ve been mostly busy with Wolverine lately and not helping the rest of the team much.
Strategic Things
Here are the big themes we’ve identified for pushing Marten and/or Wolverine into contention for world domination (or at least for enticing enough paying users to make a living off of our passion projects here). The first four bullets are happening this year and the rest is just a fanciful vision board.
1st class subscriptions in Marten — This is the ability to register to listen to feeds of event data from Marten. Maybe you want to stream event data through to Kafka for additional processing. Maybe you want to update an ElasticSearch index for your data. Definitely you want this to work with all the reliability, monitoring, and error handling capabilities that you’d expect.
Linq improvements in Marten — Try to utilize the JSONPath operators available in recent versions of PostgreSQL to re-enable the usage of GIN/GIST indexes that was somewhat lost in V4. Try to greatly improve Marten’s Linq querying within document sub-collections. I hate working on the Linq parser in Marten, but I hate seeing Linq-related bugs filter in even more as folks try more and more things, so it’s time.
Massive scalability of event projections — This is likely to be a new alternative to the current Marten async daemon that is able to load balance asynchronous projection processing across multiple application nodes. This improved daemon will be built with a combination of Marten and Wolverine as an add-on product, likely with some kind of dual-use commercial license and official support license.
Zero downtime, blue/green deployment for event sourcing — Closely related to the previous bullet. Everything you need to be able to blue/green deploy your application using Marten event sourcing without any down time. So, you’ll have support for versioned projections and zero downtime projection rebuilds as well. This will most likely be part of the commercial add-on package for the Critter Stack.
User interface for monitoring or managing the Critter Stack — Just a placeholder; not sure what the exact functionality would be here. And this will absolutely be a dual-license commercial product of some sort.
SQL Server backed event store — While the document database feature set in Marten is unlikely to ever be completely ported to SQL Server, it may well be possible to support Marten’s event sourcing on a SQL Server foundation.
Marten for the JVM??? — Just stay tuned on this one down the line. In all likelihood this would mean running Marten’s event store in a separate process, then using some kind of language neutral proxy mechanism (gRPC?) to capture events. Tentatively, the idea for projections is to let users define projections in TypeScript/JavaScript that will run in WebAssembly.
AOT compilation / Serverless optimized recipe — There’s no chance in the world that any combination of Marten or Wolverine can work with AOT compilation without some significant changes on our side. I think it’s going to end up requiring some level of code generation to get there. I’m not clear about whether or not enough folks care about this right now to justify the effort.
Tactical Things
And also a list of hopefully quick wins that could help spur Critter Stack adoption:
Open Telemetry support in Marten. We have this in Wolverine already, but not yet for Marten activity
Ability to raise events from projections in Marten, or issue commands as aggregates are updated, or something else; I don’t know yet. All I know is that right now this seems to be coming up a lot in user questions in Discord
Document versioning in Marten
Kafka transport in Wolverine
Amazon SNS support in Wolverine
Strongly-typed identifiers. Folks have been asking for this periodically in Marten. When it exists in Marten, I’d also like to pursue exploiting strongly-typed identifiers in Wolverine middleware to “know” when to load entities from identifiers automatically
Expanding multi-tenancy support in Wolverine. Today Wolverine has a robust model for Marten-backed multi-tenancy in the message handling, but I’d like to see this extended to detecting tenant identifiers automatically in HTTP requests. I’d also like to extend the multi-tenancy support to EF Core backed persistence and SQL Server backed storage.
Lightweight partial updates in Marten. This is the ability to issue updates to part of a Marten document without first loading the entire document. We’ve had this functionality from the very beginning, but it depends on Javascript support in PostgreSQL through the PLv8 extension that’s in a tenuous state. The new model would use native PostgreSQL features in place of the older JavaScript model.
Waddaya think?
Anything above sound compelling to you? Have questions about how some of that would work? Wanna make suggestions about how it should be done? Have *gasp* completely different suggestions for what we should improve instead in Marten/Wolverine to make it more attractive to your shop? Fire away in comments here or the Critter Stack Discord channel.
To go along with the Wolverine 1.0 release, I should probably be blogging some introductory, getting started type content. To hell with that though, let’s jump right into the deep end of the pool today! To the best of my knowledge, no other messaging tooling in .NET can span inbox/outbox integration across this kind of multi-tenanted database storage.
Let’s say that you’re wanting to build a system using the full critter stack (Marten + Wolverine) and you need to support multi-tenancy through a database per tenant strategy for some mix of scalability and data segregation. You’d also like to use some of the goodies that come with the Critter Stack that will make your development life a whole lot easier and more productive:
Wolverine’s multi-tenancy support to propagate tenant id between subsequent messages in complex workflows
Both Wolverine and Marten’s ability to manage the setup of database storage for you so your developers can focus on getting things done instead of setting up their environment by hand
Here’s the application bootstrapping for exactly that kind of system:
using Marten;
using MultiTenantedTodoWebService;
using Oakton;
using Oakton.Resources;
using Wolverine;
using Wolverine.Http;
using Wolverine.Marten;
var builder = WebApplication.CreateBuilder(args);
// You do need a "master" database for operations that are
// independent of a specific tenant. Wolverine needs this
// for some of its state tracking
var connectionString = "Host=localhost;Port=5433;Database=postgres;Username=postgres;password=postgres";
// Adding Marten for persistence
builder.Services.AddMarten(m =>
{
// With multi-tenancy through a database per tenant
m.MultiTenantedDatabases(tenancy =>
{
// You would probably be pulling the connection strings out of configuration,
// but it's late in the afternoon and I'm being lazy building out this sample!
tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant1;Username=postgres;password=postgres", "tenant1");
tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant2;Username=postgres;password=postgres", "tenant2");
tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant3;Username=postgres;password=postgres", "tenant3");
});
m.DatabaseSchemaName = "mttodo";
})
.IntegrateWithWolverine(masterDatabaseConnectionString:connectionString);
// This tells both Wolverine & Marten to make
// sure all necessary database schema objects across
// all the tenant databases are up and ready to go
// on application startup
builder.Services.AddResourceSetupOnStartup();
// Wolverine usage is required for WolverineFx.Http
builder.Host.UseWolverine(opts =>
{
// This middleware will apply to the HTTP
// endpoints as well
opts.Policies.AutoApplyTransactions();
// Setting up the outbox on all locally handled
// background tasks
opts.Policies.UseDurableLocalQueues();
});
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
var app = builder.Build();
// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
app.UseSwagger();
app.UseSwaggerUI();
}
// Let's add in Wolverine HTTP endpoints to the routing tree
app.MapWolverineEndpoints();
return await app.RunOaktonCommands(args);
In the code above, we’ve configured Marten to use a database per tenant for its own storage. We also call IntegrateWithWolverine() to add Wolverine inbox/outbox support, which Wolverine 1.0 is able to do for each and every known tenant database.
And that’s a lot of setup, but now check out this sample usage of some message handlers in a fake Todo service:
public static class TodoCreatedHandler
{
public static void Handle(DeleteTodo command, IDocumentSession session)
{
session.Delete<Todo>(command.Id);
}
public static TodoCreated Handle(CreateTodo command, IDocumentSession session)
{
var todo = new Todo { Name = command.Name };
session.Store(todo);
return new TodoCreated(todo.Id);
}
// Do something in the background, like assign it to someone,
// send out emails or texts, alerts, whatever
public static void Handle(TodoCreated created, ILogger logger)
{
logger.LogInformation("Got a new TodoCreated event for {Id}", created.Id);
}
}
You’ll note that at no point in any of that code do you see anything to do with a tenant, or opening a Marten session to the right tenant, or propagating the tenant id from the initial CreateTodo command to the cascaded TodoCreated event. That’s because Wolverine is happily able to do that for you behind the scenes in Envelope metadata as long as the original command was tagged to a tenant id.
To do that, see these examples of invoking Wolverine from HTTP endpoints:
public static class TodoEndpoints
{
[WolverineGet("/todoitems/{tenant}")]
public static async Task<IReadOnlyList<Todo>> Get(string tenant, IDocumentStore store)
{
// Await the query before the session is disposed
await using var session = store.QuerySession(tenant);
return await session.Query<Todo>().ToListAsync();
}
[WolverineGet("/todoitems/{tenant}/complete")]
public static async Task<IReadOnlyList<Todo>> GetComplete(string tenant, IDocumentStore store)
{
await using var session = store.QuerySession(tenant);
return await session.Query<Todo>().Where(x => x.IsComplete).ToListAsync();
}
// Wolverine can infer the 200/404 status codes for you here
// so there's no code noise just to satisfy OpenAPI tooling
[WolverineGet("/todoitems/{tenant}/{id}")]
public static async Task<Todo?> GetTodo(int id, string tenant, IDocumentStore store, CancellationToken cancellation)
{
using var session = store.QuerySession(tenant);
var todo = await session.LoadAsync<Todo>(id, cancellation);
return todo;
}
[WolverinePost("/todoitems/{tenant}")]
public static async Task<IResult> Create(string tenant, CreateTodo command, IMessageBus bus)
{
// At the 1.0 release, you would have to use Wolverine as a mediator
// to get the full multi-tenancy feature set.
// That hopefully changes in 1.1
var created = await bus.InvokeForTenantAsync<TodoCreated>(tenant, command);
return Results.Created($"/todoitems/{tenant}/{created.Id}", created);
}
[WolverineDelete("/todoitems/{tenant}")]
public static async Task Delete(
string tenant,
DeleteTodo command,
IMessageBus bus)
{
// Invoke inline for the specified tenant
await bus.InvokeForTenantAsync(tenant, command);
}
}
I think that in Wolverine 1.1, or at least a future incremental release, there will be a way to register automatic tenant id detection from an HTTP request, but for 1.0, developers need to explicitly detect the tenant id and pass it along to Wolverine themselves. Once Wolverine knows the tenant id though, it will happily propagate it automatically to downstream messages. And in all cases, Wolverine is able to use the correct inbox/outbox storage in the current tenant database during message processing, so you still have the single, native transaction spanning the application functionality and the inbox/outbox storage.