Before Marten took off and we pivoted to using the “Critter Stack” naming motif, the original naming theme for the JasperFx OSS tool suite was some of the small towns near where I grew up in Southwest Missouri. Alba, MO is somewhat famous as the hometown of the Boyer brothers.
I’m taking a little time this week to build out some improvements to Wolverine’s declarative data access support based on some recent client work. As this work is largely targeted at Wolverine’s HTTP support, I’m heavily leveraging Alba to help test the HTTP behavior and I thought this work would make a great example of how Alba can help you more efficiently test HTTP API code in .NET.
Now, back to Wolverine and the current work I’m in the midst of testing today. To remove a lot of the repetitive code from this client’s HTTP API, Wolverine is going to improve the [Entity] attribute mechanics to make it easy to customize the “on missing” handling, something like this simple example from the tests:
// Should 400 w/ ProblemDetails on missing
[WolverineGet("/required/todo400/{id}")]
public static Todo2 Get2([Entity(OnMissing = OnMissing.ProblemDetailsWith400)] Todo2 todo)
=> todo;
With Wolverine message handlers or HTTP endpoints, the [Entity] attribute is a little bit of declarative data access that just directs Wolverine to generate some code around your method to load data for that parameter based on its type from whatever your attached data access tooling is for that application, currently supported for Marten (of course), EF Core, and RavenDb. In its current form, if Marten/EF Core/RavenDb cannot find a Todo2 entity in the database with the identity from the route argument “id”, Wolverine will just set the HTTP status code to 404 and exit.
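For contrast, here’s a minimal sketch of what that existing default behavior looks like; with no OnMissing value specified, a missing Todo2 just results in an empty 404 response (the route here is only illustrative):
// Default behavior: missing Todo2 means an empty 404 response
[WolverineGet("/todo/{id}")]
public static Todo2 Get([Entity] Todo2 todo)
=> todo;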
And while I’d argue that’s a perfectly fine default behavior, a recent client wants instead to write out a ProblemDetails response describing what data referenced in the request was unavailable and return a 400 status code instead. They’re handling that with Wolverine’s Railway Programming support just fine, but I think that’s causing my client more repetitive code than I personally prefer, and Wolverine is based on the philosophy that repetitive code should be minimized as much as possible. Hence, the enhancement work hinted at above with a new OnMissing property that lets you specify exactly how an HTTP endpoint should handle the case where a requested entity is missing.
So let’s finally introduce Alba with this test harness using xUnit:
public class reacting_to_entity_attributes : IAsyncLifetime
{
private readonly ITestOutputHelper _output;
private IAlbaHost theHost;
public reacting_to_entity_attributes(ITestOutputHelper output)
{
_output = output;
}
public async Task InitializeAsync()
{
// This probably isn't your typical Alba usage, but
// I'm spinning up a little AspNetCore application
// for endpoint types in the current testing assembly
var builder = WebApplication.CreateBuilder([]);
// Adding Marten as the target persistence provider,
// but the attribute does work w/ EF Core too
builder.Services.AddMarten(opts =>
{
// Establish the connection string to your Marten database
opts.Connection(Servers.PostgresConnectionString);
opts.DatabaseSchemaName = "onmissing";
}).IntegrateWithWolverine().UseLightweightSessions();
builder.Host.UseWolverine(opts => opts.Discovery.IncludeAssembly(GetType().Assembly));
builder.Services.AddWolverineHttp();
// This is using Alba, which uses WebApplicationFactory under the covers
theHost = await AlbaHost.For(builder, app =>
{
app.MapWolverineEndpoints();
});
}
async Task IAsyncLifetime.DisposeAsync()
{
if (theHost != null)
{
await theHost.StopAsync();
}
}
// Other tests...
[Fact]
public async Task problem_details_400_on_missing()
{
var results = await theHost.Scenario(x =>
{
x.Get.Url("/required/todo400/nonexistent");
x.StatusCodeShouldBe(400);
x.ContentTypeShouldBe("application/problem+json");
});
var details = results.ReadAsJson<ProblemDetails>();
details.Detail.ShouldBe("Unknown Todo2 with identity nonexistent");
}
}
Just a few things to call out about the test above:
Alba is using WebApplicationFactory and TestServer from AspNetCore under the covers to bootstrap an AspNetCore IHost without having to use Kestrel
The Alba Scenario() method is running an HTTP request all the way through the application in process
Alba has declarative helpers to assert on the expected HTTP status code and content-type headers in the response, and I used those above
The ReadAsJson<T>() helper deserializes the response body into a .NET type using whatever JSON serialization configuration is in effect within our application. Don’t minimize that point: mismatched JSON serialization settings between application code and test harness code are a humongous potential source of false test results for the unwary!
For the record, that test is passing in my local branch right now after a couple iterations. Alba just happened to make the functionality pretty easy to test through both the declarative assertions and the JSON serialization helpers.
JasperFx Software is in business to help our clients make the most of the “Critter Stack” tools, Event Sourcing, CQRS, Event Driven Architecture, Test Automation, and server side .NET development in general. We’d be happy to talk with your company and see how we could help you be more successful!
In the first video, we started diving in on a new sample “Incident Service” that’s admittedly heavily in flight that shows how to use Marten with both Event Sourcing and as a Document Database over PostgreSQL and its integration with Wolverine as a higher level HTTP web service and asynchronous messaging platform.
We covered a lot, but here’s some of the highlights:
Hopefully showing off how easy it is to get started with Marten and Wolverine both, especially with Marten’s ability to lay down its own database schema as needed in its default mode. Later videos will show off how Wolverine does the same for any database schemas it needs and even message broker setup.
Utilizing Wolverine.HTTP for web services, how it enables a very low code ceremony approach to “Vertical Slice Architecture,” and how it promotes testability in code without all the hassle of a complex Clean Architecture project structure or reams of abstractions scattered about in your code. It also leads to simpler code than the more common “MVC Core/Minimal API + MediatR” approach to Vertical Slice Architecture.
How Wolverine’s emphasis on pure function handlers leads to business or workflow logic being easy to test
The Critter Stack’s support for command line diagnostics and development time tools, including a way to “unwind the magic” with Wolverine so it can show you exactly how it’s calling your code
Here’s the first video:
In the second video, we got into:
Wolverine’s “aggregate handler workflow” style of CQRS command handlers and how you can do that with easily testable pure functions
Using Marten’s ability to stream JSON data directly to HTTP for the most efficient possible “read side” query endpoints
Wolverine’s message scheduling capability
Marten’s utilization of PostgreSQL partitioning for maximizing scalability
I can’t say for sure where we’ll go next, but there will be a part 3 to this series in the next couple weeks and hopefully a series of shorter video content soon too! We’re certainly happy to take requests!
If you’re planning on coming to my workshop, you’ll want .NET 8, Git, and some kind of Docker Desktop on your box to run the sample code I’ll use in the workshop. If Docker doesn’t work for you, you may want a local install of PostgreSQL and Rabbit MQ instead.
Hey folks, I’ll be giving the first ever workshop on building an Event Driven Architecture with the full “Critter Stack” at DevUp 2024 in St. Louis next week on Wednesday the 14th bright and early at 8:30 AM.
We’ll be working through a sample backend web service that also communicates with other headless services using Event Sourcing within a general CQRS architectural approach with the whole “Critter Stack.” We’ll use Marten (over PostgreSQL) for our persistence strategy using both its event sourcing support and as a document database. We’ll combine that with Wolverine as a server side framework for background processing, asynchronous messaging, and even as an alternative HTTP endpoint framework. Lastly, just for fun, there’ll be guest appearances from other JasperFx tools like Alba and Oakton for automated integration testing and command line execution respectively.
So why would you want to come to this and what might you get out of it? I’m hoping the takeaways — even if you don’t intend to use Marten or Wolverine — will be:
A good introduction to event sourcing as a technical approach and some of the real challenges you’ll face when building a system using event sourcing as a persistence strategy
An understanding of what goes into building a robust CQRS system including dealing with transient errors, observability, concurrency, and how to best segment message processing to achieve self-healing systems
Challenging the industry’s conventional wisdom about how effective Hexagonal/Clean/Onion Architecture approaches really are by showing what a very low ceremony “vertical slice architecture” approach can be like with the Wolverine + Marten combination while still being robust, observable, highly testable, and keeping infrastructure concerns out of the business logic
Some exposure to Open Telemetry and the general observability tooling for distributed systems that you absolutely want if you don’t already have it
Techniques for automating integration tests against an Event Driven Architecture
Because I’m absolutely in the business of promoting the “Critter Stack” tools, I’ll try to convince you that:
Marten is already the most robust and feature rich solution for event sourcing in the .NET ecosystem while also being arguably the easiest to get up and going with
The Wolverine + Marten combination makes CQRS with Event Sourcing a much easier architectural pattern to use
Wolverine’s emphasis on low ceremony code approaches can help systems be more successfully maintained over time by simply having much less noise code and layering in your systems while still being robust
The “Critter Stack” has an excellent story for automated integration testing support that can do a lot to make your development efforts more successful
Both Marten & Wolverine can help your teams achieve a low “time to first pull request” by doing a lot to configure necessary infrastructure like databases or message brokers on the fly for a better development experience
I’m excited, because this is my first opportunity to do a workshop on the “Critter Stack” tools, and I think we’ve got a very compelling technical story to tell about the tools! And if nothing else, I’m looking forward to any feedback that might help us improve the tools down the line.
And for any *ahem* older folks from St. Louis in my talk, I personally thought at the time that Jorge Orta was out at first and the Cards should have won that game.
Hey, did you know that JasperFx Software is ready to offer formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to helping you succeed with your software development projects on a consulting basis whether you’re using any part of the Critter Stack or any other .NET server side tooling.
Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.
Heretofore in this series, I’ve been using ASP.Net MVC Core controllers anytime we’ve had to build HTTP endpoints for our incident tracking, help desk system in order to introduce new concepts a little more slowly.
If you would, let’s refer back to an earlier incarnation of an HTTP endpoint to handle our LogIncident command from an earlier post in this series:
public class IncidentController : ControllerBase
{
private readonly IDocumentSession _session;
public IncidentController(IDocumentSession session)
{
_session = session;
}
[HttpPost("/api/incidents")]
public async Task<IResult> Log(
[FromBody] LogIncident command
)
{
var userId = currentUserId();
var logged = new IncidentLogged(command.CustomerId, command.Contact, command.Description, userId);
var incidentId = _session.Events.StartStream(logged).Id;
await _session.SaveChangesAsync(HttpContext.RequestAborted);
return Results.Created("/incidents/" + incidentId, incidentId);
}
private Guid currentUserId()
{
// let's say that we do something here that "finds" the
// user id as a Guid from the ClaimsPrincipal
var userIdClaim = User.FindFirst("user-id");
if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var id))
{
return id;
}
throw new UnauthorizedAccessException("No user");
}
}
Just to be as clear as possible here, the Wolverine HTTP endpoints feature introduced in this post can be mixed and matched with MVC Core and/or Minimal API or even FastEndpoints within the same application and routing tree. I think the ASP.Net team deserves some serious credit for making that last sentence a fact.
Today though, let’s use Wolverine HTTP endpoints and rewrite that controller method above the “Wolverine way.” To get started, add a NuGet reference to the help desk service like so:
dotnet add package WolverineFx.Http
Next, let’s break into our Program file and add Wolverine endpoints to our routing tree near the bottom of the file like so:
app.MapWolverineEndpoints(opts =>
{
// We'll add a little more in a bit...
});
// Just to show where the above code is within the context
// of the Program file...
return await app.RunOaktonCommands(args);
Now, let’s make our first cut at a Wolverine HTTP endpoint for the LogIncident command, but I’m purposely going to do it without introducing a lot of new concepts, so please bear with me a bit:
public record NewIncidentResponse(Guid IncidentId)
: CreationResponse("/api/incidents/" + IncidentId);
public static class LogIncidentEndpoint
{
[WolverinePost("/api/incidents")]
public static NewIncidentResponse Post(
// No [FromBody] stuff necessary
LogIncident command,
// Service injection is automatic,
// just like message handlers
IDocumentSession session,
// You can take in an argument for HttpContext
// or immediate members of HttpContext
// as method arguments
ClaimsPrincipal principal)
{
// Some ugly code to find the user id
// within a claim for the currently authenticated
// user
Guid userId = Guid.Empty;
var userIdClaim = principal.FindFirst("user-id");
if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var claimValue))
{
userId = claimValue;
}
var logged = new IncidentLogged(command.CustomerId, command.Contact, command.Description, userId);
var id = session.Events.StartStream<Incident>(logged).Id;
return new NewIncidentResponse(id);
}
}
Here’s a few salient facts about the code above to explain what it’s doing:
Just like Wolverine message handlers, the endpoint methods are flexible and Wolverine generates code around your code to mediate between the raw HttpContext for the request and your code
We have already enabled Marten transactional middleware for our message handlers in an earlier post, and that happily applies to Wolverine HTTP endpoints as well (a minimal sketch of that configuration follows this list). That lets our endpoint method be a simple synchronous method, with the transactional middleware dealing with the ugly asynchronous stuff for us.
You can “inject” HttpContext and its immediate children into the method signatures as I did with the ClaimsPrincipal up above
Method injection is automatic without any silly [FromServices] attributes, and that’s what’s happening with the IDocumentSession argument
The LogIncident parameter is assumed to be the HTTP request body due to being the first argument, and it will be deserialized from the incoming JSON in the request body just like you’d probably expect
The NewIncidentResponse type is roughly equivalent to using Results.Created() in Minimal API to create a response body with the url of the newly created Incident stream and an HTTP status code of 201 for “Created.” What’s different about Wolverine.HTTP is that it can infer OpenAPI documentation from the signature of that type without requiring you to pollute your code with manually added [ProducesResponseType] attributes just to get a “proper” OpenAPI document for the endpoint.
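Since the transactional middleware setup happened in an earlier post, here’s a minimal sketch of what that kind of configuration looks like (the connection string name and exact options are assumptions on my part, so treat this as illustrative rather than the real Program file):
// Illustrative only: Marten + Wolverine integration plus the
// transactional middleware policy referenced in the bullets above
builder.Services.AddMarten(opts =>
{
// The "marten" connection string name is an assumption here
opts.Connection(builder.Configuration.GetConnectionString("marten"));
}).IntegrateWithWolverine();
builder.Host.UseWolverine(opts =>
{
// Wraps a Marten transaction around handlers and HTTP endpoints
// that Wolverine can see are using Marten
opts.Policies.AutoApplyTransactions();
});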
Moving on, that user id detection from the ClaimsPrincipal looks a little bit ugly to me and is likely to be repetitive. Let’s ameliorate that by introducing Wolverine’s flavor of HTTP middleware and moving that code to this class:
// Using the custom type makes it easier
// for the Wolverine code generation to route
// things around. I'm not ashamed.
public record User(Guid Id);
public static class UserDetectionMiddleware
{
public static (User, ProblemDetails) Load(ClaimsPrincipal principal)
{
var userIdClaim = principal.FindFirst("user-id");
if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var id))
{
// Everything is good, keep on trucking with this request!
return (new User(id), WolverineContinue.NoProblems);
}
// Nope, nope, nope. We got problems, so stop the presses and emit a ProblemDetails response
// with a 400 status code telling the caller that there's no valid user for this request
return (new User(Guid.Empty), new ProblemDetails { Detail = "No valid user", Status = 400});
}
}
Do note the usage of ProblemDetails in that middleware. If there is no user-id claim on the ClaimsPrincipal, we’ll abort the request by writing out the ProblemDetails stating there’s no valid user. This pattern is baked into Wolverine.HTTP to help create one off request validations. We’ll utilize this quite a bit more later.
Next, I need to add that new bit of middleware to our application. As a shortcut, I’m going to just add it to every single Wolverine HTTP endpoint by breaking back into our Program file and adding this line of code:
app.MapWolverineEndpoints(opts =>
{
// We'll add a little more in a bit...
// Creates a User object in HTTP requests based on
// the "user-id" claim
opts.AddMiddleware(typeof(UserDetectionMiddleware));
});
Now, back to our endpoint code and I’ll take advantage of that middleware by changing the method to this:
[WolverinePost("/api/incidents")]
public static NewIncidentResponse Post(
// No [FromBody] stuff necessary
LogIncident command,
// Service injection is automatic,
// just like message handlers
IDocumentSession session,
// This will be created for us through the new user detection
// middleware
User user)
{
var logged = new IncidentLogged(
command.CustomerId,
command.Contact,
command.Description,
user.Id);
var id = session.Events.StartStream<Incident>(logged).Id;
return new NewIncidentResponse(id);
}
This is a little bit of a bonus, but let’s also get rid of the need to inject the Marten IDocumentSession service by using a Wolverine “side effect” with this equivalent code:
[WolverinePost("/api/incidents")]
public static (NewIncidentResponse, IStartStream) Post(LogIncident command, User user)
{
var logged = new IncidentLogged(
command.CustomerId,
command.Contact,
command.Description,
user.Id);
var op = MartenOps.StartStream<Incident>(logged);
return (new NewIncidentResponse(op.StreamId), op);
}
In the code above I’m using the MartenOps.StartStream() method to return a “side effect” that will create a new Marten stream as part of the request instead of directly interacting with the IDocumentSession from Marten. That’s a small thing you might not care for, but it can lead to the elimination of mock objects within your unit tests as you can now write a state-based test directly against the method above like so:
public class LogIncident_handling
{
[Fact]
public void handle_the_log_incident_command()
{
// This is trivial, but the point is that
// we now have a pure function that can be
// unit tested by pushing inputs in and measuring
// outputs without any pesky mock object setup
var contact = new Contact(ContactChannel.Email);
var theCommand = new LogIncident(BaselineData.Customer1Id, contact, "It's broken");
var theUser = new User(Guid.NewGuid());
var (_, stream) = LogIncidentEndpoint.Post(theCommand, theUser);
// Test the *decision* to emit the correct
// events and make sure all that pesky left/right
// hand mapping is correct
var logged = stream.Events.Single()
.ShouldBeOfType<IncidentLogged>();
logged.CustomerId.ShouldBe(theCommand.CustomerId);
logged.Contact.ShouldBe(theCommand.Contact);
logged.LoggedBy.ShouldBe(theUser.Id);
}
}
Hey, let’s add some validation too!
We’ve already introduced middleware, so let’s just incorporate the popular Fluent Validation library into our project and let it do some basic validation on the incoming LogIncident command body. If any validation fails, we’ll pull the ripcord and parachute out of the request with a ProblemDetails body and a 400 status code describing the validation errors.
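Assuming the HTTP flavor of the Fluent Validation integration lives in its own package (check the Wolverine documentation for the exact package id), the install looks something like:
dotnet add package WolverineFx.Http.FluentValidation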
Next, I have to add the usage of that middleware through this new line of code:
app.MapWolverineEndpoints(opts =>
{
// Direct Wolverine.HTTP to use Fluent Validation
// middleware to validate any request bodies where
// there's a known validator (or many validators)
opts.UseFluentValidationProblemDetailMiddleware();
// Creates a User object in HTTP requests based on
// the "user-id" claim
opts.AddMiddleware(typeof(UserDetectionMiddleware));
});
And add an actual validator for our LogIncident, and in this case that model is just an internal concern of our service, so I’ll just embed that new validator as an inner type of the command type like so:
public record LogIncident(
Guid CustomerId,
Contact Contact,
string Description
)
{
public class LogIncidentValidator : AbstractValidator<LogIncident>
{
// I stole this idea of using inner classes to keep them
// close to the actual model from *someone* online,
// but don't remember who
public LogIncidentValidator()
{
RuleFor(x => x.Description).NotEmpty().NotNull();
RuleFor(x => x.Contact).NotNull();
}
}
};
Now, Wolverine does have to “know” about these validators to use them within the endpoint handling, so I’ll need to have these types registered in the application’s IoC container against the right IValidator<T> interface. This is not required, but Wolverine has a (Lamar) helper to find and register these validators within your project and do so in a way that’s most efficient at runtime (i.e., there’s a micro optimization for giving these validators a Singleton lifetime in the container if Wolverine can see that the types are stateless). I’ll use that little helper in our Program file within the UseWolverine() configuration like so:
builder.Host.UseWolverine(opts =>
{
// lots more stuff unfortunately, but focus on the line below
// just for now:-)
// Apply the validation middleware *and* discover and register
// Fluent Validation validators
opts.UseFluentValidation();
});
And that’s that. We’ve now got Fluent Validation in the request handling for the LogIncident command. In a later section, I’ll explain how Wolverine does this, and try to sell you all on the idea that Wolverine is able to do this more efficiently than other commonly used frameworks *cough* MediatR *cough* that depend on conditional runtime code.
One off validation with “Compound Handlers”
As you might have noticed, the LogIncident command has a CustomerId property that we’re using as is within our HTTP handler. We should never just trust the inputs of a random client, so let’s at least validate that the command refers to a real customer.
Now, typically I like to make Wolverine message handler or HTTP endpoint methods be the “happy path” and handle exception cases and one off validations with a Wolverine feature we inelegantly call “compound handlers.”
I’m going to add a new method to our LogIncidentEndpoint class like so:
// Wolverine has some naming conventions for Before/Load
// or After/AfterAsync, but you can use a more descriptive
// method name and help Wolverine out with an attribute
[WolverineBefore]
public static async Task<ProblemDetails> ValidateCustomer(
LogIncident command,
// Method injection works just fine within middleware too
IDocumentSession session)
{
var exists = await session.Query<Customer>().AnyAsync(x => x.Id == command.CustomerId);
return exists
? WolverineContinue.NoProblems
: new ProblemDetails { Detail = $"Unknown customer id {command.CustomerId}", Status = 400};
}
Integration Testing
While the individual methods and middleware can all be tested separately, you do want to put everything together with an integration test to prove out whether or not all this magic really works. As I described in an earlier post where we learned how to use Alba to create an integration testing harness for a “critter stack” application, we can write an end to end integration test against the HTTP endpoint like so (this sample doesn’t cover every permutation, but hopefully you get the point):
// These two tests live in a class extending the IntegrationContext
// base type from the earlier testing harness post (the class name is arbitrary)
public class log_incident_end_to_end : IntegrationContext
{
public log_incident_end_to_end(AppFixture fixture) : base(fixture)
{
}
[Fact]
public async Task create_a_new_incident_happy_path()
{
// We'll need a user
var user = new User(Guid.NewGuid());
// Log a new incident first
var initial = await Scenario(x =>
{
var contact = new Contact(ContactChannel.Email);
x.Post.Json(new LogIncident(BaselineData.Customer1Id, contact, "It's broken")).ToUrl("/api/incidents");
x.StatusCodeShouldBe(201);
x.WithClaim(new Claim("user-id", user.Id.ToString()));
});
var incidentId = initial.ReadAsJson<NewIncidentResponse>().IncidentId;
using var session = Store.LightweightSession();
var events = await session.Events.FetchStreamAsync(incidentId);
var logged = events.First().ShouldBeOfType<IncidentLogged>();
// This deserves more assertions, but you get the point...
logged.CustomerId.ShouldBe(BaselineData.Customer1Id);
}
[Fact]
public async Task log_incident_with_invalid_customer()
{
// We'll need a user
var user = new User(Guid.NewGuid());
// Reject the new incident because the Customer for
// the command cannot be found
var initial = await Scenario(x =>
{
var contact = new Contact(ContactChannel.Email);
var nonExistentCustomerId = Guid.NewGuid();
x.Post.Json(new LogIncident(nonExistentCustomerId, contact, "It's broken")).ToUrl("/api/incidents");
x.StatusCodeShouldBe(400);
x.WithClaim(new Claim("user-id", user.Id.ToString()));
});
}
}
Um, how does this all work?
So far I’ve shown you some “magic” code, and that tends to really upset some folks. I also made some big time claims about how Wolverine is able to be more efficient at runtime (alas, there is a significant “cold start” problem you can easily work around, so don’t get upset if your first ever Wolverine request isn’t snappy).
Wolverine works by using code generation to wrap its handling code around your code. That includes the middleware, and the usage of any IoC services as well. Moreover, do you know what the fastest IoC container is in all the .NET land? I certainly think that Lamar is at least in the game for that one, but nope, the answer is no IoC container at runtime.
One of the advantages of this approach is that we can preview the generated code to unravel the “magic” and explain what Wolverine is doing at runtime. Moreover, we’ve tried to add descriptive comments to the generated code to further explain what code is in place and why.
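Since this application already uses Oakton for command line execution, one way to dump that generated code is a command line call along these lines (the exact codegen commands can vary by Wolverine version):
dotnet run -- codegen preview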
Here’s the generated code for our LogIncident endpoint (warning, ugly generated code ahead):
// <auto-generated/>
#pragma warning disable
using FluentValidation;
using Microsoft.AspNetCore.Routing;
using System;
using System.Linq;
using Wolverine.Http;
using Wolverine.Http.FluentValidation;
using Wolverine.Marten.Publishing;
using Wolverine.Runtime;
namespace Internal.Generated.WolverineHandlers
{
// START: POST_api_incidents
public class POST_api_incidents : Wolverine.Http.HttpHandler
{
private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
private readonly Wolverine.Runtime.IWolverineRuntime _wolverineRuntime;
private readonly Wolverine.Marten.Publishing.OutboxedSessionFactory _outboxedSessionFactory;
private readonly FluentValidation.IValidator<Helpdesk.Api.LogIncident> _validator;
private readonly Wolverine.Http.FluentValidation.IProblemDetailSource<Helpdesk.Api.LogIncident> _problemDetailSource;
public POST_api_incidents(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Runtime.IWolverineRuntime wolverineRuntime, Wolverine.Marten.Publishing.OutboxedSessionFactory outboxedSessionFactory, FluentValidation.IValidator<Helpdesk.Api.LogIncident> validator, Wolverine.Http.FluentValidation.IProblemDetailSource<Helpdesk.Api.LogIncident> problemDetailSource) : base(wolverineHttpOptions)
{
_wolverineHttpOptions = wolverineHttpOptions;
_wolverineRuntime = wolverineRuntime;
_outboxedSessionFactory = outboxedSessionFactory;
_validator = validator;
_problemDetailSource = problemDetailSource;
}
public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
{
var messageContext = new Wolverine.Runtime.MessageContext(_wolverineRuntime);
// Building the Marten session
await using var documentSession = _outboxedSessionFactory.OpenSession(messageContext);
// Reading the request body via JSON deserialization
var (command, jsonContinue) = await ReadJsonAsync<Helpdesk.Api.LogIncident>(httpContext);
if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;
// Execute FluentValidation validators
var result1 = await Wolverine.Http.FluentValidation.Internals.FluentValidationHttpExecutor.ExecuteOne<Helpdesk.Api.LogIncident>(_validator, _problemDetailSource, command).ConfigureAwait(false);
// Evaluate whether or not the execution should be stopped based on the IResult value
if (!(result1 is Wolverine.Http.WolverineContinue))
{
await result1.ExecuteAsync(httpContext).ConfigureAwait(false);
return;
}
(var user, var problemDetails2) = Helpdesk.Api.UserDetectionMiddleware.Load(httpContext.User);
// Evaluate whether the processing should stop if there are any problems
if (!(ReferenceEquals(problemDetails2, Wolverine.Http.WolverineContinue.NoProblems)))
{
await WriteProblems(problemDetails2, httpContext).ConfigureAwait(false);
return;
}
var problemDetails3 = await Helpdesk.Api.LogIncidentEndpoint.ValidateCustomer(command, documentSession).ConfigureAwait(false);
// Evaluate whether the processing should stop if there are any problems
if (!(ReferenceEquals(problemDetails3, Wolverine.Http.WolverineContinue.NoProblems)))
{
await WriteProblems(problemDetails3, httpContext).ConfigureAwait(false);
return;
}
// The actual HTTP request handler execution
(var newIncidentResponse_response, var startStream) = Helpdesk.Api.LogIncidentEndpoint.Post(command, user);
// Placed by Wolverine's ISideEffect policy
startStream.Execute(documentSession);
// This response type customizes the HTTP response
ApplyHttpAware(newIncidentResponse_response, httpContext);
// Commit any outstanding Marten changes
await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);
// Have to flush outgoing messages just in case Marten did nothing because of https://github.com/JasperFx/wolverine/issues/536
await messageContext.FlushOutgoingMessagesAsync().ConfigureAwait(false);
// Writing the response body to JSON because this was the first 'return variable' in the method signature
await WriteJsonAsync(httpContext, newIncidentResponse_response);
}
}
// END: POST_api_incidents
}
Summary and What’s Next
The Wolverine.HTTP library was originally built to be a supplement to MVC Core or Minimal API by allowing you to create endpoints that integrated well into Wolverine’s messaging, transactional outbox functionality, and existing transactional middleware. It has since grown into being more of a full fledged alternative for building web services, but with potential for substantially less ceremony and far more testability than MVC Core.
In later posts I’ll talk more about the runtime architecture and how Wolverine squeezes out more performance by eliminating conditional runtime switching, reducing object allocations, and sidestepping the dictionary lookups that are endemic to other “flexible” .NET frameworks like MVC Core.
Wolverine.HTTP has not yet been used with Razor at all, and I’m not sure that will ever happen. Not to worry though, you can happily use Wolverine.HTTP in the same application with MVC Core controllers or even Minimal API endpoints.
OpenAPI support has been a constant challenge with Wolverine.HTTP as the OpenAPI generation in ASP.Net Core is very MVC-centric, but I think we’re in much better shape now.
In the next post, I think we’ll introduce asynchronous messaging with Rabbit MQ. At some point in this series I’m going to talk more about how the “Critter Stack” is well suited for a lower ceremony vertical slice architecture that (hopefully) creates a maintainable and testable codebase without all the typical Clean/Onion Architecture baggage that I could personally do without.
And just for fun…
My “History” with ASP.Net MVC
There’s no useful content in this section, just some navel-gazing. Even though I really haven’t had to use ASP.Net MVC too terribly much, I do have a long history with it:
In the beginning, there was what we now call ASP Classic, and it was good. For that day and time anyway, when we would happily code directly in production, before TDD and SOLID and namby-pamby “source control.” (I started my development career in “Shadow IT” if that’s not obvious here). And when we did use source control, it was VSS on the sly, because the official source control in the office was something far, far worse that was COBOL-centric and that I don’t think even exists any longer.
Next there was ASP.Net WebForms and it was dreadful. I hated it.
We started collectively learning about Agile and wanted to practice Test Driven Development, and began to hate WebForms even more
Ruby on Rails came out in the middle 00’s and made what later became the ALT.Net community absolutely loathe WebForms even more than we already did
At an MVP Summit on the Microsoft campus, the one and only Scott Guthrie, the Gu himself, showed a very early prototype of ASP.Net MVC to a handful of us and I was intrigued. That continued onward through the official unveiling of MVC at the very first ALT.Net open spaces event in Austin in ’07.
A few collaborators and I decided that early ASP.Net MVC was too high ceremony and went all “Captain Ahab” trying to make an alternative, open source framework called FubuMVC take off — all while NancyFx, a “yet another Sinatra clone,” became far more successful years before Microsoft finally got around to their own inevitable Sinatra clone (Minimal API)
After .NET Core came along and made .NET a helluva lot better ecosystem, I decided that whatever, MVC Core is fine, it’s not going to be the biggest problem on our project, and if the client wants to use it, there’s no need to be upset about it. It’s fine, no really.
MVC Core has gotten some incremental improvements over time that made it lower ceremony than earlier ASP.Net MVC, and that’s worth calling out as a positive
People working with MVC Core started running into the problem of bloated controllers, and started using early MediatR as a way to kind of, sort of manage controller bloat by offloading it into focused command handlers. I mocked that approach mercilessly, but that was partially because of how awful a time I had helping folks do absurdly complicated middleware schemes with MediatR using StructureMap or Lamar (MVC Core + MediatR is probably worthwhile as a forcing function to avoid the controller bloat problems with MVC Core by itself)
I worked on several long-running codebases built with MVC Core based on Clean Architecture templates that were ginormous piles of technical debt, and I absolutely blame MVC Core as a contributing factor for that
I’m back to mildly disliking MVC Core (and I’m outright hostile to Clean/Onion templates). Not that you can’t write maintainable systems with MVC Core, but I think that its idiomatic usage can easily lead to unmaintainable systems. Let’s just say that I don’t think that MVC Core — and especially combined with some kind of Clean/Onion Architecture template as it very commonly is out in the wild — leads folks to the “pit of success” in the long run
Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.
Before I go on with anything else in this series, I think we should establish some automated testing infrastructure for our incident tracking, help desk service. While we’re absolutely going to talk about how to structure code with Wolverine to make isolated unit testing as easy as possible for our domain logic, there are some elements of your system’s behavior that are best tested with automated integration tests that use the system’s infrastructure.
In this post I’m going to show you how I like to set up an integration testing harness for a “Critter Stack” service. I’m going to use xUnit.Net in this post, and while the mechanics would be a little different, I think the basic concepts should be easily transferable to other testing libraries like NUnit or MSTest. I’m also going to bring in the Alba library that we’ll use for testing HTTP calls through our system in memory, but in this first step, all you need to understand is that Alba is helping to set up the system under test in our testing harness.
Heads up a little bit: I’m skipping to the “finished” state of the help desk API code in this post, so there are some Marten and Wolverine concepts sneaking in that haven’t formally been introduced yet.
First, let’s start our new testing project with:
dotnet new xunit
Then add some additional NuGet references:
dotnet add package Shouldly
dotnet add package Alba
That gives us a skeleton of the testing project. Before going on, we need to add a project reference from our new testing project to the entry point project of our help desk API. As we are focused on integration testing right now, we want the testing project to be able to start the system under test by calling the normal Program.Main() entry point so that we’re running the application the way the system is normally configured — give or take a few overrides.
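From the testing project directory, that project reference is just this (your relative path to the API project will differ):
dotnet add reference ../Helpdesk.Api/Helpdesk.Api.csproj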
Let’s stop and talk about this a little bit because I think this is an important point. I think integration tests are more “valid” (i.e. less prone to false positives or false negatives) the more closely they reflect the actual system. I don’t want completely separate bootstrapping for the test harness that may or may not reflect the application’s production bootstrapping (don’t blow that point off, I’ve seen countless teams do partial IoC configuration for testing that varies quite a bit from the application’s real configuration).
So if you’ll accept my argument that we should be bootstrapping the system under test with its own Program.Main() entry point, our next step is to add this code to the main service to enable the test project to access that entry point:
using System.Runtime.CompilerServices;
// You have to do this in order to reference the Program
// entry point in the test harness
[assembly:InternalsVisibleTo("Helpdesk.Api.Tests")]
Switching finally to our testing project, I like to create a class I usually call AppFixture that manages the lifetime of the system under test running in our test project like so:
public class AppFixture : IAsyncLifetime
{
public IAlbaHost Host { get; private set; }
// This is a one time initialization of the
// system under test before the first usage
public async Task InitializeAsync()
{
// Sorry folks, but this is absolutely necessary if you
// use Oakton for command line processing and want to
// use WebApplicationFactory and/or Alba for integration testing
OaktonEnvironment.AutoStartHost = true;
// This is bootstrapping the actual application using
// its implied Program.Main() set up
// This is using a library named "Alba". See https://jasperfx.github.io/alba for more information
Host = await AlbaHost.For<Program>(x =>
{
x.ConfigureServices(services =>
{
// We'll be using Rabbit MQ messaging later...
services.DisableAllExternalWolverineTransports();
// We're going to establish some baseline data
// for testing
services.InitializeMartenWith<BaselineData>();
});
}, new AuthenticationStub());
}
public Task DisposeAsync()
{
if (Host != null)
{
return Host.DisposeAsync().AsTask();
}
return Task.CompletedTask;
}
}
A few notes about the code above:
Alba is using the WebApplicationFactory under the covers to bootstrap our help desk API service using the in memory TestServer in place of Kestrel. WebApplicationFactory does allow us to modify the IoC service registrations for our system and override parts of the system’s normal configuration
In this case, I’m telling Wolverine to effectively stub out all external transports. In later posts we’ll use Rabbit MQ, for example, to publish messages to an external process, but in this test harness we’re going to turn that off and simply have Wolverine “catch” the outgoing messages in our tests. See Wolverine’s test automation support documentation for more information about this.
The DisposeAsync() method is very important. If you want to make your integration tests be repeatable and run smoothly as you iterate, you need the tests to clean up after themselves and not leave locks on resources like ports or files that could stop the next test run from functioning correctly
Pay attention to the `OaktonEnvironment.AutoStartHost = true;` call, that’s 100% necessary if your application is using Oakton for command parsing. Sorry.
As will inevitably be necessary, I’m using Alba’s facility for stubbing out web authentication, which allows us to sidestep pesky authentication infrastructure in functional testing while also happily letting us pass along user claims as test inputs in individual tests
Bootstrapping the IHost for your application can be expensive, so I prefer to share that host across tests whenever possible, and I generally rely on having individual tests establish their inputs at the beginning of each test. See the xUnit.Net documentation on sharing fixtures between tests for more context about the xUnit mechanics.
For the Marten baseline data, right now I’m just making sure there’s at least one valid Customer document that we’ll need later:
public class BaselineData : IInitialData
{
public static Guid Customer1Id { get; } = Guid.NewGuid();
public async Task Populate(IDocumentStore store, CancellationToken cancellation)
{
await using var session = store.LightweightSession();
session.Store(new Customer
{
Id = Customer1Id,
Region = "West Cost",
Duration = new ContractDuration(DateOnly.FromDateTime(DateTime.Today.Subtract(100.Days())), DateOnly.FromDateTime(DateTime.Today.Add(100.Days())))
});
await session.SaveChangesAsync(cancellation);
}
}
To simplify the usage a little bit, I like to have a base class for integration tests that I like to call IntegrationContext:
[Collection("integration")]
public abstract class IntegrationContext : IAsyncLifetime
{
private readonly AppFixture _fixture;
protected IntegrationContext(AppFixture fixture)
{
_fixture = fixture;
}
// more....
public IAlbaHost Host => _fixture.Host;
public IDocumentStore Store => _fixture.Host.Services.GetRequiredService<IDocumentStore>();
async Task IAsyncLifetime.InitializeAsync()
{
// Using Marten, wipe out all data and reset the state
// back to exactly what we described in BaselineData
await Store.Advanced.ResetAllData();
}
// This is required because of the IAsyncLifetime
// interface. Note that I do *not* tear down database
// state after the test. That's purposeful
public Task DisposeAsync()
{
return Task.CompletedTask;
}
// This is just delegating to Alba to run HTTP requests
// end to end
public async Task<IScenarioResult> Scenario(Action<Scenario> configure)
{
return await Host.Scenario(configure);
}
// This method allows us to make HTTP calls into our system
// in memory with Alba, but do so within Wolverine's test support
// for message tracking to both record outgoing messages and to ensure
// that any cascaded work spawned by the initial command is completed
// before passing control back to the calling test
protected async Task<(ITrackedSession, IScenarioResult)> TrackedHttpCall(Action<Scenario> configuration)
{
IScenarioResult result = null;
// The outer part is tying into Wolverine's test support
// to "wait" for all detected message activity to complete
var tracked = await Host.ExecuteAndWaitAsync(async () =>
{
// The inner part here is actually making an HTTP request
// to the system under test with Alba
result = await Host.Scenario(configuration);
});
return (tracked, result);
}
}
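One small piece that’s easy to miss: xUnit.Net needs a collection definition class to connect the [Collection("integration")] attribute above to the shared AppFixture. Mine is nothing but a marker class roughly like this (the class name is arbitrary):
// xUnit uses this marker class to tie the "integration" collection
// to the shared AppFixture instance
[CollectionDefinition("integration")]
public class IntegrationCollection : ICollectionFixture<AppFixture>
{
}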
The first thing I want to draw your attention to is the call to await Store.Advanced.ResetAllData(); in the InitializeAsync() method that will be called before each of our integration tests executes. In my approach, I strongly prefer to reset the state of the database before each test in order to start from a known system state. I’m also assuming that each test, if necessary, will add additional state to the system’s Marten database. Philosophically, this is what I’ve long called “Self-Contained Tests.” I also think it’s important to have the tests leave the database state alone after a test run so that if you are running tests one at a time, you can use the leftover database state to help troubleshoot why a test might have failed.
Other folks will try to spin up a separate database (maybe with TestContainers) per test or even a completely separate IHost per test, but I think that the cost of doing it that way is just too slow. I’d rather reset the system between tests and not incur the cost of recycling database containers and/or the system’s IHost. This comes at the cost of forcing your test suite to run tests in serial order, but I also think that xUnit.Net is not the best possible tool for parallel test runs anyway, so I’m not sure you lose out on anything there.
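If you do want to be explicit about forcing serial execution, xUnit.Net supports an assembly-level attribute along these lines (funneling everything through the single "integration" collection shown above has much the same effect):
// Keeps xUnit.Net from running test collections in parallel so tests
// don't trip over the shared database state
[assembly: CollectionBehavior(DisableTestParallelization = true)]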
And now for an actual test. We have an HTTP endpoint in our system we built early on that can process a LogIncident command, and create a new event stream for this new Incident with a single IncidentLogged event. I’ve skipped ahead a little bit and added a requirement that we capture a user id from an expected Claim on the ClaimsPrincipal for the current request that you’ll see reflected in the test below:
public class log_incident : IntegrationContext
{
public log_incident(AppFixture fixture) : base(fixture)
{
}
[Fact]
public async Task create_a_new_incident()
{
// We'll need a user
var user = new User(Guid.NewGuid());
// Log a new incident by calling the HTTP
// endpoint in our system
var initial = await Scenario(x =>
{
var contact = new Contact(ContactChannel.Email);
x.Post.Json(new LogIncident(BaselineData.Customer1Id, contact, "It's broken")).ToUrl("/api/incidents");
x.StatusCodeShouldBe(201);
x.WithClaim(new Claim("user-id", user.Id.ToString()));
});
var incidentId = initial.ReadAsJson<NewIncidentResponse>().IncidentId;
using var session = Store.LightweightSession();
var events = await session.Events.FetchStreamAsync(incidentId);
var logged = events.First().ShouldBeOfType<IncidentLogged>();
// This deserves more assertions, but you get the point...
logged.CustomerId.ShouldBe(BaselineData.Customer1Id);
}
}
Summary and What’s Next
The “Critter Stack” core team and our community care very deeply about effective testing, so we’ve invested from the very beginning in making integration testing as easy as possible with both Marten and Wolverine.
Alba is another little library from the JasperFx family that just makes it easier to write integration tests at the HTTP layer. Alba is perfect for doing integration testing of your web services. I definitely find it advantageous to be able to quickly bootstrap a web service project and run tests completely in memory on demand. That’s a much easier and quicker feedback cycle than trying to deploy the service and write tests that remotely interact with the web service through HTTP. And I shouldn’t even have to mention how absurdly slow it is in comparison to try to test the same web service functionality through the actual user interface with something like Selenium.
From the Marten side of things, PostgreSQL has a pretty small Docker image size, so it’s pretty painless to spin up on development boxes. Especially contrasted with situations where development teams share a centralized development database (shudder, hope not many folks still do that), having an isolated database for each developer that they can also tear down and rebuild at will certainly helps make it a lot easier to succeed with automated integration testing.
I think that document databases in general are a lot easier to deal with in automated testing than using a relational database with an ORM as the persistence tooling as it’s much less friction in setting up database schemas or to tear down database state. Marten goes a step farther than most persistence tools by having built in APIs to tear down database state or reset to baseline data sets in between tests.
We’ll dig deeper into Wolverine’s integration testing support later in this series with message handler testing, testing handlers that in turn spawn other messages, and dealing with external messaging in tests.
I think the next post is just going to be a quick survey of “Marten as Document Database” before I get back to Wolverine’s HTTP endpoint model.
I’m helping a JasperFx Software client get a new system off the ground that’s using both Hot Chocolate for GraphQL and Marten for event sourcing and general persistence. That’s led to a couple blog posts so far:
Today though, I want to talk about some early ideas for automating integration testing of GraphQL endpoints. Before I show my intended approach, here’s a video from ChiliCream (the company behind Hot Chocolate) showing their recommendations for testing:
Now, to be honest, I don’t agree with their recommended approach. I played a lot of sports growing up in a small town, and one of my coaches’ favorite sayings actually applies here:
If you want to be good, practice like you play
every basketball coach I ever played for
That saying really just meant to try to do things well in practice so that it would carry right through into the real games. In the case of integration testing, I want to be testing against the “real” application configuration including the full ASP.Net Core middleware stack and the exact Marten and Hot Chocolate configuration for the application instead of against a separately constructed IoC and Hot Chocolate configuration. In this particular case, the application is using multi-tenancy through a separate database per tenant strategy with the tenant selection at runtime being ultimately dependent upon expected claims on the ClaimsPrincipal for the request.
All that being said, I’m unsurprisingly opting to use the Alba library within xUnit specifications to test through the entire application stack with just a few overrides of the application. My usual approach with xUnit.Net and Alba is to create a shared context that manages the lifecycle of the bootstrapped application in memory like so:
public class AppFixture : IAsyncLifetime
{
public IAlbaHost Host { get; private set; }
public async Task InitializeAsync()
{
// This is bootstrapping the actual application using
// its implied Program.Main() set up
Host = await AlbaHost.For<Program>(x => { });
}
// Clean up the host after the test run, just like the earlier AppFixture
public async Task DisposeAsync()
{
if (Host != null) await Host.DisposeAsync();
}
}
Moving on to the GraphQL mechanics, what I’ve come up with so far is to put a GraphQL query and/or mutation in a flat file within the test project. I hate not having the test inputs in the same code file as the test, but I’m trying to offset that by spitting out the GraphQL query text into the test output to make it a little easier to troubleshoot failing tests. The Alba mechanics — so far — look like this (simplified a bit from the real code):
public Task<IScenarioResult> PostGraphqlQueryFile(string filename)
{
// This ugly code is just loading up the GraphQL query from
// a named file
var path = AppContext
.BaseDirectory
.ParentDirectory()
.ParentDirectory()
.ParentDirectory()
.AppendPath("GraphQL")
.AppendPath(filename);
var queryText = File.ReadAllText(path);
// Building up the right JSON to POST to the /graphql
// endpoint
var dictionary = new Dictionary<string, string>();
dictionary["query"] = queryText;
var json = JsonConvert.SerializeObject(dictionary);
// Write the GraphQL query being used to the test output
// just as information for troubleshooting
this.output.WriteLine(queryText);
// Using Alba to run a GraphQL request end to end
// in memory. This would throw an exception if the
// HTTP status code is not 200
return Host.Scenario(x =>
{
// I'm omitting some code here that we're using to mimic
// the tenant detection in the real code
x.Post.Url("/graphql").ContentType("application/json");
// Dirty hackery.
x.ConfigureHttpContext(c =>
{
var stream = c.Request.Body;
// This encoding turned out to be necessary
// Thank you Stackoverflow!
stream.WriteAsync(Encoding.UTF8.GetBytes(json));
stream.Position = 0;
});
});
}
That’s the basics of running the GraphQL request through, but part of the value of Alba in testing more traditional “JSON over HTTP” endpoints is being able to easily read the HTTP outputs with Alba’s built in helpers that use the application’s JSON serialization setup. I was missing that initially with the GraphQL usage, so I added this extra helper for testing a single GraphQL query or mutation at a time where there is a return body from the mutation:
public async Task<T> PostGraphqlQueryFile<T>(string filename)
{
// Delegating to the previous method
var result = await PostGraphqlQueryFile(filename);
// Get the raw HTTP response
var text = await result.ReadAsTextAsync();
// I'm using Newtonsoft.Json to get into the raw JSON
// a little bit
var json = (JObject)JsonConvert.DeserializeObject(text);
// Make the test fail if the GraphQL response had any errors
json.ContainsKey("errors").ShouldBeFalse($"GraphQL response had errors:\n{text}");
// Find the *actual* response within the larger GraphQL response
// wrapper structure
var data = json["data"].First().First().First().First();
// This would vary a bit in your application
var serializer = JsonSerializer.Create(new JsonSerializerSettings
{
ContractResolver = new CamelCasePropertyNamesContractResolver()
});
// Deserialize the raw JSON into the response type for
// easier access in tests because "strong typing for the win!"
return serializer.Deserialize<T>(new JTokenReader(data));
}
And after all that, that leads to integration tests in test fixture classes subclassing our IntegrationContext base type like this:
public class SomeTestFixture : IntegrationContext
{
public SomeTestFixture(ITestOutputHelper output, AppFixture fixture) : base(output, fixture)
{
}
[Fact]
public async Task perform_mutation()
{
var response = await this.PostGraphqlQueryFile<SomeResponseType>("someGraphQLMutation.txt");
// Use the strong typed response object in the
// "assert" part of your test
}
}
Summary
We’ll see how it goes, but this harness has already helped me have a repeatable way to tweak transaction management and multi-tenancy without breaking the actual code. With the custom harness around it, I think we’ve made the GraphQL endpoint testing reasonably declarative.
Hey, JasperFx Software is more than just some silly named open source frameworks. We’re also deeply experienced in test driven development, designing for testability, and making test automation work without driving into the ditch with over dependence on slow, brittle Selenium testing. Hit us up about what we could do to help you be more successful in your own test automation or TDD efforts.
I have been working furiously on getting an incremental Wolverine release out this week, with one of the new shiny features being end to end support for multi-tenancy (the work in progress GitHub issue is here) through Wolverine.Http endpoints. I hit a point today where I have to admit that I can’t finish that work today, but did see the potential for a blog post on the Alba library (also part of JasperFx’s OSS offerings) and how I was using Alba today to write integration tests for this new functionality, show how the sausage is being made, and even work in a test-first manner.
To put the desired functionality in context, let’s say that we’re building a “Todo” web service using Marten for persistence. Moreover, we’re expecting this system to have a massive number of users and want to be sure to isolate data between customers, so we plan on using Marten’s support for using a separate database for each tenant (think user organization in this case). Within that “Todo” system, let’s say that we’ve got a very simple web service endpoint to just serve up all the completed Todo documents for the current tenant like this one:
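Such an endpoint looks roughly like this sketch (the route, the Todo type, and its IsComplete property are assumptions on my part):
// Roughly the shape of the endpoint described above. Note that the
// "tenant" route argument is not consumed by the method itself
[WolverineGet("/todoitems/{tenant}")]
public static Task<IReadOnlyList<Todo>> Get(IQuerySession session)
=> session.Query<Todo>().Where(x => x.IsComplete).ToListAsync();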
Now, you’ll notice that there is a route argument named “tenant” that isn’t consumed at all by this web api endpoint. What I want Wolverine to do in this case is to infer that the value of that “tenant” value within the route is the current tenant id for the request, and quietly select the correct Marten tenant database for me without me having to write a lot of repetitive code.
Just a note, all of this is work in progress and I haven’t even pushed the code at the time of writing this post. Soon. Maybe tomorrow.
Stepping into the bootstrapping for this web service, I’m going to add these new lines of code to the Todo web service’s Program file to teach Wolverine.HTTP how to handle multi-tenancy detection for me:
// Let's add in Wolverine HTTP endpoints to the routing tree
app.MapWolverineEndpoints(opts =>
{
// Letting Wolverine HTTP automatically detect the tenant id!
opts.TenantId.IsRouteArgumentNamed("tenant");
// Assert that the tenant id was successfully detected,
// or pull the rip cord on the request and return a
// 400 w/ ProblemDetails
opts.TenantId.AssertExists();
});
So those are some of the desired, built-in multi-tenancy features going into Wolverine.HTTP 1.7 sometime soon. Back to the actual construction of these new features and how I used Alba this morning to drive the coding.
I started by asking around on social media about what other folks used as strategies to detect the tenant id in ASP.Net Core multi-tenancy, and came up with this list (plus a few other options):
Use a custom request header
Use a named route argument
Use a named query string value (I hate using the query string myself, but like cockroaches or scorpions in our Central Texas house, they always sneak in somehow)
Use an expected Claim on the ClaimsPrincipal
Mix and match the strategies above because you’re inevitably retrofitting this to an existing system
Use sub domain names (I’m arbitrarily skipping this one for now just because it was going to be harder to test and I’m pressed for time this week)
Once I saw a little bit of consensus on the most common strategies (and thank you to everyone who responded to me today), I jotted down some tasks in GitHub-flavored markdown (I *love* this feature) on what the configuration API would look like and my guesses for development tasks:
- [x] `WolverineHttpOptions.TenantId.IsRouteArgumentNamed("foo")` -- creates a policy
- [ ] `[TenantId("route arg")]`, or make `[TenantId]` on a route parameter for one offs. Will need to throw if not a route argument
- [x] `WolverineHttpOptions.TenantId.IsQueryStringValue("key")` -- creates policy
- [x] `WolverineHttpOptions.TenantId.IsRequestHeaderValue("key")` -- creates policy
- [x] `WolverineHttpOptions.TenantId.IsClaimNamed("key")` -- creates policy
- [ ] New way to add custom middleware that's first inline
- [ ] Documentation on custom strategies
- [ ] Way to register the "preprocess context" middleware methods
- [x] Middleware or policy that blows it up with no tenant id detected. Use ProblemDetails
- [ ] Need an attribute to opt into tenant id is required, or tenant id is NOT required on certain endpoints
Knowing that I was going to need to quickly stand up different configurations of a test web service’s IHost, I started with this skeleton that I hoped would make the test setup relatively easy:
public class multi_tenancy_detection_and_integration : IAsyncDisposable, IDisposable
{
private IAlbaHost theHost;
public void Dispose()
{
theHost.Dispose();
}
// The configuration of the Wolverine.HTTP endpoints is the only variable
// part of the test, so isolate all this test setup noise here so
// each test can more clearly communicate the relationship between
// Wolverine configuration and the desired behavior
protected async Task configure(Action<WolverineHttpOptions> configure)
{
var builder = WebApplication.CreateBuilder(Array.Empty<string>());
builder.Services.AddScoped<IUserService, UserService>();
// Haven't gotten around to it yet, but there'll be some end to
// end tests in a bit from the ASP.Net request all the way down
// to the underlying tenant databases
builder.Services.AddMarten(Servers.PostgresConnectionString)
.IntegrateWithWolverine();
// Defaults are good enough here
builder.Host.UseWolverine();
// Setting up Alba stubbed authentication so that we can fake
// out ClaimsPrincipal data on requests later
var securityStub = new AuthenticationStub()
.With("foo", "bar")
.With(JwtRegisteredClaimNames.Email, "guy@company.com")
.WithName("jeremy");
// Spinning up a test application using Alba
theHost = await AlbaHost.For(builder, app =>
{
app.MapWolverineEndpoints(configure);
}, securityStub);
}
public async ValueTask DisposeAsync()
{
// Hey, this is important!
// Make sure you clean up after your tests
// to make the subsequent tests run cleanly
await theHost.StopAsync();
}
Now, the intermediate step of tenant detection even before Marten itself gets involved is to analyze the HttpContext for the current request, try to derive the tenant id, then set the MessageContext.TenantId in Wolverine for this current request — which Wolverine’s Marten integration will use a little later to create a Marten session pointing at the correct database for that tenant.
Just to measure the tenant id detection — because that’s what I want to build and test first before even trying to put everything together with a real database too — I built these two simple GET endpoints with Wolverine.HTTP:
public static class TenantedEndpoints
{
[WolverineGet("/tenant/route/{tenant}")]
public static string GetTenantIdFromRoute(IMessageBus bus)
{
return bus.TenantId;
}
[WolverineGet("/tenant")]
public static string GetTenantIdFromWhatever(IMessageBus bus)
{
return bus.TenantId;
}
}
That folks is the scintillating code that brings droves of readership to my blog!
Alright, so now I’ve got some support code for the “Arrange” and “Assert” part of my Arrange/Act/Assert workflow. To finally jump into a real test, I started with detecting the tenant id with a named route pattern using Alba with this code:
[Fact]
public async Task get_the_tenant_id_from_route_value()
{
// Set up a new application with the desired configuration
await configure(opts => opts.TenantId.IsRouteArgumentNamed("tenant"));
// Run a web request end to end in memory
var result = await theHost.Scenario(x => x.Get.Url("/tenant/route/chartreuse"));
// Make sure it worked!
// ZZ Top FTW! https://www.youtube.com/watch?v=uTjgZEapJb8
result.ReadAsText().ShouldBe("chartreuse");
}
The code itself is a little wonky, but I had that quickly working end to end. I next proceeded to the query string strategy like this:
[Fact]
public async Task get_the_tenant_id_from_the_query_string()
{
await configure(opts => opts.TenantId.IsQueryStringValue("t"));
var result = await theHost.Scenario(x => x.Get.Url("/tenant?t=bar"));
result.ReadAsText().ShouldBe("bar");
}
Hopefully you can see from the two tests above how that configure() method already helped me quickly write the next test. Sometimes — but not always, so be careful with this — the best thing you can do is to first invest in a test harness that makes subsequent tests more declarative, quicker to write mechanically, and easier to read later.
Next, let’s go to the request header strategy test:
[Fact]
public async Task get_the_tenant_id_from_request_header()
{
await configure(opts => opts.TenantId.IsRequestHeaderValue("tenant"));
var result = await theHost.Scenario(x =>
{
x.Get.Url("/tenant");
// Alba is helping set up the request header
// for me here
x.WithRequestHeader("tenant", "green");
});
result.ReadAsText().ShouldBe("green");
}
Easy enough, and hopefully you see how Alba helped me get the preconditions into the request quickly in that test. Now, let’s go for a slightly more complicated test with the Claim strategy, where I first ran into a little trouble:
[Fact]
public async Task get_the_tenant_id_from_a_claim()
{
await configure(opts => opts.TenantId.IsClaimTypeNamed("tenant"));
var result = await theHost.Scenario(x =>
{
x.Get.Url("/tenant");
// Add a Claim to *only* this request
x.WithClaim(new Claim("tenant", "blue"));
});
result.ReadAsText().ShouldBe("blue");
}
I hit a little friction here because I didn’t have Alba set up exactly right at first, but since Alba runs your application code completely within process, it was very quick to step right into the code and figure out why it wasn’t working (I’d forgotten to set up the AuthenticationStub shown above). After refreshing my memory on how Alba’s security extensions work, I was able to get going again. Arguably, Alba’s ability to fake out or even work with your application’s security in tests is one of its best features.
That’s been a lot of “happy path” testing, so now let’s break things by specifying Wolverine’s new behavior to validate that a request has a valid tenant id with these two new tests. First, a happy path:
[Fact]
public async Task require_tenant_id_happy_path()
{
await configure(opts =>
{
opts.TenantId.IsQueryStringValue("tenant");
opts.TenantId.AssertExists();
});
// Got a 200? All good!
await theHost.Scenario(x =>
{
x.Get.Url("/tenant?tenant=green");
});
}
Note that Alba would cause a test failure if the web request did not return a 200 status code.
And to lock down the binary behavior, here’s the “sad path” where Wolverine should be returning a 400 status code with ProblemDetails data:
[Fact]
public async Task require_tenant_id_sad_path()
{
await configure(opts =>
{
opts.TenantId.IsQueryStringValue("tenant");
opts.TenantId.AssertExists();
});
var results = await theHost.Scenario(x =>
{
x.Get.Url("/tenant");
// Tell Alba we expect a non-200 response
x.StatusCodeShouldBe(400);
});
// Alba's helpers to deserialize JSON responses
// to a strong typed object for easy
// assertions
var details = results.ReadAsJson<ProblemDetails>();
// I like to refer to constants in test assertions sometimes
// so that you can tweak error messages later w/o breaking
// automated tests. And inevitably regret it when I
// don't do this
details.Detail.ShouldBe(TenantIdDetection
.NoMandatoryTenantIdCouldBeDetectedForThisHttpRequest);
}
To be honest, it took me a few minutes to get the test above to pass because of some internal middleware mechanics I didn’t expect. As usual. All the same though, Alba helped me drive the code through “outside in” tests that ran quickly so I could iterate rapidly.
Alba itself is a descendant of some very old test helper code in FubuMVC, then was ported to OWIN (RIP, but I don’t miss you), then to early ASP.Net Core, and finally rebuilt as a helper around ASP.Net Core’s built in TestServer and WebApplicationFactory. Alba has been continuously used for well over a decade now. If you’re looking for selling points for Alba, I’d say:
Alba makes your integration tests more declarative
There are quite a few helpers for common repetitive tasks in integration tests like reading JSON data with the application’s built in serialization
It simplifies test setup
It runs completely in memory where you can quickly spin up your application and jump right into debugging when necessary
Testing web services with Alba is much more efficient and faster than trying to do the same thing through inevitably slow, brittle, and laborious Selenium/Playwright/Cypress testing
As long term Agile practitioners, the folks behind the whole JasperFx / “Critter Stack” ecosystem explicitly design our tools around the quality of “testability.” Case in point, Wolverine has quite a few integration test helpers for testing through message handler execution.
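For instance, a handler-level integration test using that support might look roughly like this sketch, assuming Wolverine’s InvokeMessageAndWaitAsync() helper from its test automation support and a hypothetical RecordOrder message with a matching OrderRecorded cascading message:
[Fact]
public async Task invoke_a_handler_and_wait_for_cascading_work()
{
    // "Host" is the application's shared IHost for the test suite.
    // InvokeMessageAndWaitAsync() executes the message handler inline
    // and waits for any cascaded messages to be handled as well
    var tracked = await Host.InvokeMessageAndWaitAsync(new RecordOrder("Widget"));

    // The ITrackedSession result lets you assert on exactly what
    // messages were published while the handler ran
    tracked.Sent.SingleMessage<OrderRecorded>()
        .Name.ShouldBe("Widget");
}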
However, while helping a Wolverine user last week, they told me that they were bypassing those built in tools because they wanted to do an integration test of an HTTP service call that publishes a message to Wolverine. That’s certainly going to be a common scenario, so let’s talk about a strategy for reliably writing integration tests that both invoke an HTTP request and can observe the ongoing Wolverine activity to “know” when the “act” part of a typical “arrange, act, assert” test is complete.
In the Wolverine codebase itself, there’s a couple projects that we use to test the Wolverine.Http library:
WolverineWebApi — a web api project that has a lot of fake endpoints that tries to cover the whole gamut of usage scenarios for Wolverine.Http, including a couple use cases of publishing messages directly from HTTP endpoint handlers to asynchronous message handling inside of Wolverine core
Wolverine.Http.Tests — an xUnit.Net project that contains a mix of unit tests and integration tests through WolverineWebApi and Wolverine.Http itself
Back to the need to write integration tests that span work from HTTP service invocations through to Wolverine message processing, Wolverine.Http uses the Alba library (another JasperFx project!) to execute and run assertions against HTTP services. At least at the moment, xUnit.Net is my go-to test runner library, so Wolverine.Http.Tests has this fixture that is shared between test classes:
public class AppFixture : IAsyncLifetime
{
public IAlbaHost Host { get; private set; }
public async Task InitializeAsync()
{
// Sorry folks, but this is absolutely necessary if you
// use Oakton for command line processing and want to
// use WebApplicationFactory and/or Alba for integration testing
OaktonEnvironment.AutoStartHost = true;
// This is bootstrapping the actual application using
// its implied Program.Main() set up
Host = await AlbaHost.For<Program>(x => { });
}
A couple notes on this approach:
I think it’s very important to use the actual application bootstrapping for the integration testing rather than trying to have a parallel IoC container configuration for test automation as I frequently see out in the wild. That doesn’t preclude customizing that bootstrapping a little bit to substitute in fake, stand in services for problematic external infrastructure.
The approach I’m showing here with xUnit.Net does have the effect of making the tests execute serially, which might not be what you want in very large test suites
I think the xUnit.Net shared fixture approach is somewhat confusing and I always have to review the documentation on it when I try to use it
There’s also a shared base class for integrated HTTP tests called IntegrationContext, with a little bit of that shown below:
[CollectionDefinition("integration")]
public class IntegrationCollection : ICollectionFixture<AppFixture>
{
}
[Collection("integration")]
public abstract class IntegrationContext : IAsyncLifetime
{
private readonly AppFixture _fixture;
protected IntegrationContext(AppFixture fixture)
{
_fixture = fixture;
}
// more....
More germane to this particular post, here’s a helper method inside of IntegrationContext I wrote specifically to do integration testing that has to span an HTTP request through to asynchronous Wolverine message handling:
// This method allows us to make HTTP calls into our system
// in memory with Alba, but do so within Wolverine's test support
// for message tracking to both record outgoing messages and to ensure
// that any cascaded work spawned by the initial command is completed
// before passing control back to the calling test
protected async Task<(ITrackedSession, IScenarioResult)> TrackedHttpCall(Action<Scenario> configuration)
{
IScenarioResult result = null;
// The outer part is tying into Wolverine's test support
// to "wait" for all detected message activity to complete
var tracked = await Host.ExecuteAndWaitAsync(async () =>
{
// The inner part here is actually making an HTTP request
// to the system under test with Alba
result = await Host.Scenario(configuration);
});
return (tracked, result);
}
Now, for a sample usage of that test helper, here’s a fake endpoint from WolverineWebApi that I used to prove that Wolverine.Http endpoints can publish messages through Wolverine’s cascading message approach:
// This would have a string response and a 200 status code
[WolverinePost("/spawn")]
public static (string, OutgoingMessages) Post(SpawnInput input)
{
var messages = new OutgoingMessages
{
new HttpMessage1(input.Name),
new HttpMessage2(input.Name),
new HttpMessage3(input.Name),
new HttpMessage4(input.Name)
};
return ("got it", messages);
}
Psst. Notice how the endpoint method’s signature up above is a synchronous pure function, which is cleaner and easier to unit test than the equivalent functionality in other .NET frameworks, where you would have been required to call asynchronous methods on some kind of IMessageBus interface.
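To make that concrete, a unit test against the endpoint method above could be as small as this sketch (the SpawnEndpoint class name is just a guess for the enclosing static class):
[Fact]
public void spawn_endpoint_is_a_pure_function()
{
    // No IHost, no HTTP, no mock objects -- just call the method
    var (text, messages) = SpawnEndpoint.Post(new SpawnInput("Chris Jones"));

    text.ShouldBe("got it");

    // OutgoingMessages is just a collection, so plain LINQ works fine here
    messages.OfType<HttpMessage1>().Single().Name.ShouldBe("Chris Jones");
    messages.OfType<HttpMessage4>().Single().Name.ShouldBe("Chris Jones");
}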
To test this thing, I want to run an HTTP POST to the “/spawn” Url in our application, then prove that there were four matching messages published through Wolverine. Here’s the test for that functionality using our earlier TrackedHttpCall() testing helper:
[Fact]
public async Task send_cascaded_messages_from_tuple_response()
{
// This would fail if the status code != 200 btw
// This method waits until *all* detectable Wolverine message
// processing has completed
var (tracked, result) = await TrackedHttpCall(x =>
{
x.Post.Json(new SpawnInput("Chris Jones")).ToUrl("/spawn");
});
result.ReadAsText().ShouldBe("got it");
// "tracked" is a Wolverine ITrackedSession object that lets us interrogate
// what messages were published, sent, and handled during the testing period
tracked.Sent.SingleMessage<HttpMessage1>().Name.ShouldBe("Chris Jones");
tracked.Sent.SingleMessage<HttpMessage2>().Name.ShouldBe("Chris Jones");
tracked.Sent.SingleMessage<HttpMessage3>().Name.ShouldBe("Chris Jones");
tracked.Sent.SingleMessage<HttpMessage4>().Name.ShouldBe("Chris Jones");
}
There you go. In one fell swoop, we’ve got a reliable way to do integration testing against asynchronous behavior in our system that’s triggered by an HTTP service call — including any and all configured ASP.Net Core or Wolverine.Http middleware that’s part of the execution pipeline.
By “reliable” here in regards to integration testing, I want you to think about any reasonably complicated Selenium test suite and how infuriatingly often you get “blinking” tests that are caused by race conditions between some kind of asynchronous behavior and the test harness trying to make test assertions against the browser state. Wolverine’s built in integration test support can eliminate that kind of inconsistent test behavior by removing the race condition as it tracks all ongoing work for completion.
Oh, and here’s Chris Jones sacking Joe Burrow in the AFC Championship game to seal the Chiefs win, which was fresh in my mind when I originally wrote that code above.
Just a short one for today, mostly to answer a question that came in earlier this week.
When using Wolverine.Http to expose HTTP endpoint services that end up capturing Marten events, you might have an endpoint coded like this one from the Wolverine tests that takes in a command message and tries to start a new Marten event stream for the Order aggregate:
[Transactional] // This can be omitted if you use auto-transactions
[WolverinePost("/orders/create4")]
public static (OrderStatus, IStartStream) StartOrder4(StartOrderWithId command)
{
var items = command.Items.Select(x => new Item { Name = x }).ToArray();
// This is unique to Wolverine (we think)
var startStream = MartenOps
.StartStream<Order>(command.Id,new OrderCreated(items));
return (
new OrderStatus(startStream.StreamId, false),
startStream
);
}
Where the command looks like this:
public record StartOrderWithId(Guid Id, string[] Items);
In the HTTP endpoint above, we’re:
Creating a new event stream for Order that uses the stream/order id sent in the command
Returning a response body of type OrderStatus to the caller
Using Wolverine’s Marten integration to also return an IStartStream object that integrated middleware will apply to Marten’s IDocumentSession (more on this in my next post because we think this is a big deal by itself).
Great, easy enough, right? Just to add some complexity, if the caller happens to send up the same “new” order id additional times, then Marten will throw an `ExistingStreamIdCollisionException` just noting that no, you can’t create a new stream with that id because one already exists.
Marten’s behavior helps protect the data from duplication, but what about trying to make the HTTP response a little nicer by catching that exception automatically, and returning a ProblemDetails body with a 400 Bad Request status code to denote exactly what happened?
While you actually could do that globally with a bit of ASP.Net Core middleware, that middleware applies everywhere at runtime and not just on the specific routes that could throw that exception. I’m not sure how big a deal this is to many of you, but ASP.Net Core middleware also has no way to influence the OpenAPI descriptions of your endpoints, so it would be up to you to explicitly add attributes to your endpoints to document the error handling response.
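Just for comparison’s sake, that global approach would be something like this sketch of inline ASP.Net Core middleware, which is not what I’m recommending here:
// Runs for *every* request in the application, whether or not the
// route could ever throw ExistingStreamIdCollisionException
app.Use(async (context, next) =>
{
    try
    {
        await next(context);
    }
    catch (ExistingStreamIdCollisionException e)
    {
        // Write a ProblemDetails body with a 400 status code
        await Results.Problem(
            detail: $"Duplicated id '{e.Id}'",
            statusCode: 400).ExecuteAsync(context);
    }
});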
Fortunately, Wolverine’s middleware strategy will allow you to specifically target only the relevant routes and also add OpenAPI descriptions to your API’s generated documentation. And do so in a way that is arguably more efficient than the ASP.Net Core middleware approach at runtime anyway.
Jumping right into the deep end of the pool (I’m helping take my little ones swimming this afternoon and maybe thinking ahead), I’m going to build that policy like so:
public class StreamCollisionExceptionPolicy : IHttpPolicy
{
private bool shouldApply(HttpChain chain)
{
// TODO -- and Wolverine needs a utility method on IChain to make this declarative
// for future middleware construction
return chain
.HandlerCalls()
.SelectMany(x => x.Creates)
.Any(x => x.VariableType.CanBeCastTo<IStartStream>());
}
public void Apply(IReadOnlyList<HttpChain> chains, GenerationRules rules, IContainer container)
{
// Find *only* the HTTP routes where the route tries to create new Marten event streams
foreach (var chain in chains.Where(shouldApply))
{
// Add the middleware on the outside
chain.Middleware.Insert(0, new CatchStreamCollisionFrame());
// Alter the OpenAPI metadata to register the ProblemDetails
// path
chain.Metadata.ProducesProblem(400);
}
}
// Make the codegen easier by doing most of the work in this one method
public static Task RespondWithProblemDetails(ExistingStreamIdCollisionException e, HttpContext context)
{
var problems = new ProblemDetails
{
Detail = $"Duplicated id '{e.Id}'",
Extensions =
{
["Id"] = e.Id
},
Status = 400 // The default is 500, so watch this
};
return Results.Problem(problems).ExecuteAsync(context);
}
}
// This is the actual middleware that's injecting some code
// into the runtime code generation
internal class CatchStreamCollisionFrame : AsyncFrame
{
public override void GenerateCode(GeneratedMethod method, ISourceWriter writer)
{
writer.Write("BLOCK:try");
// Write the inner code here
Next?.GenerateCode(method, writer);
writer.FinishBlock();
writer.Write($@"
BLOCK:catch({typeof(ExistingStreamIdCollisionException).FullNameInCode()} e)
await {typeof(StreamCollisionExceptionPolicy).FullNameInCode()}.{nameof(StreamCollisionExceptionPolicy.RespondWithProblemDetails)}(e, httpContext);
return;
END
");
}
}
And apply the middleware to the application like so:
app.MapWolverineEndpoints(opts =>
{
// more configuration for HTTP...
opts.AddPolicy<StreamCollisionExceptionPolicy>();
});
And lastly, here’s a test using Alba that just verifies the behavior end to end by trying to create a new event stream with the same id multiple times:
[Fact]
public async Task use_stream_collision_policy()
{
var id = Guid.NewGuid();
// First time should be fine
await Scenario(x =>
{
x.Post.Json(new StartOrderWithId(id, new[] { "Socks", "Shoes", "Shirt" })).ToUrl("/orders/create4");
});
// Second time hits an exception from stream id collision
var result2 = await Scenario(x =>
{
x.Post.Json(new StartOrderWithId(id, new[] { "Socks", "Shoes", "Shirt" })).ToUrl("/orders/create4");
x.StatusCodeShouldBe(400);
});
// And let's verify that we got what we expected for the ProblemDetails
// in the HTTP response body of the 2nd request
var details = result2.ReadAsJson<ProblemDetails>();
Guid.Parse(details.Extensions["Id"].ToString()).ShouldBe(id);
details.Detail.ShouldBe($"Duplicated id '{id}'");
}
To make what’s going on here a little clearer, Wolverine can always show you the generated code it uses for your HTTP endpoints, like this (I reformatted the code for legibility with Rider):
public class POST_orders_create4 : HttpHandler
{
private readonly WolverineHttpOptions _options;
private readonly ISessionFactory _sessionFactory;
public POST_orders_create4(WolverineHttpOptions options, ISessionFactory sessionFactory) : base(options)
{
_options = options;
_sessionFactory = sessionFactory;
}
public override async Task Handle(HttpContext httpContext)
{
await using var documentSession = _sessionFactory.OpenSession();
try
{
var (command, jsonContinue) = await ReadJsonAsync<StartOrderWithId>(httpContext);
if (jsonContinue == HandlerContinuation.Stop)
{
return;
}
var (orderStatus, startStream) = MarkItemEndpoint.StartOrder4(command);
// Placed by Wolverine's ISideEffect policy
startStream.Execute(documentSession);
// Commit any outstanding Marten changes
await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);
await WriteJsonAsync(httpContext, orderStatus);
}
catch (ExistingStreamIdCollisionException e)
{
await StreamCollisionExceptionPolicy.RespondWithProblemDetails(e, httpContext);
}
}
}
This builds on the previous blog posts in this list:
Wolverine on the JetBrains Webinar series — but watch out, the ICommandBus interface shown in that webinar was consolidated and changed to IMessageBus in the latest release. The rest of the syntax and the concepts are all unchanged though.
Wolverine on DotNetRocks — a conversation about Wolverine with a bonus rant from me about prescriptive, hexagonal architectures
Sometime over the holidays, Jim Shore released an updated version of his excellent paper Testing Without Mocks: A Pattern Language. He also posted this truly massive thread with some provocative opinions about test automation strategies:
Testing Without Mocks: A 🧵.
So a few days ago I released this massive update to my article, "Testing Without Mocks: A Pattern Language." It's 40 pages long if you print it. (Which you absolutely should. I have a fantastic print stylesheet.)
I think it’s a great thread overall, and the paper is chock full of provocative thoughts about designing for testability. Moreover, some of the older content in that paper is influencing the direction of my own work with Wolverine. I’ve also made it recommended reading for the developers in my own company.
All that being said, I strongly disagree with the approach he describes for integration testing with “nullable infrastructure” and eschewing DI/IoC for composition in favor of just willy nilly hard coding things because “DI is scary” or whatever. My strong preference, and also where I’ve had the most success, is to purposely rely on development technologies that lend themselves to low friction, reliable, and productive integration testing.
And as it just so happens, the “critter stack” tools (Marten and Wolverine) that I work on are purposely designed for testability and include several features specifically to make integration testing more effective for applications using these tools.
Integration Testing with the Critter Stack
From my previous blog posts linked up above, I’ve been showing a very simplistic banking system to demonstrate the usage of Wolverine with Marten. For a testing scenario, let’s go back to part of this message handler for a WithdrawFromAccount message that will effect changes on an Account document entity and potentially send out other messages to perform other actions:
[Transactional]
public static async Task Handle(
WithdrawFromAccount command,
Account account,
IDocumentSession session,
IMessageContext messaging)
{
account.Balance -= command.Amount;
// This just marks the account as changed, but
// doesn't actually commit changes to the database
// yet. That actually matters as I hopefully explain
session.Store(account);
// Conditionally trigger other, cascading messages
if (account.Balance > 0 && account.Balance < account.MinimumThreshold)
{
await messaging.SendAsync(new LowBalanceDetected(account.Id));
}
else if (account.Balance < 0)
{
await messaging.SendAsync(new AccountOverdrawn(account.Id), new DeliveryOptions{DeliverWithin = 1.Hours()});
// Give the customer 10 days to deal with the overdrawn account
await messaging.ScheduleAsync(new EnforceAccountOverdrawnDeadline(account.Id), 10.Days());
}
// "messaging" is a Wolverine IMessageContext or IMessageBus service
// Do the deliver within rule on individual messages
await messaging.SendAsync(new AccountUpdated(account.Id, account.Balance),
new DeliveryOptions { DeliverWithin = 5.Seconds() });
}
For a little more context, I’ve set up a Minimal API endpoint to delegate to this command like so:
// One Minimal API endpoint that just delegates directly to Wolverine
app.MapPost("/accounts/withdraw", (WithdrawFromAccount command, IMessageBus bus) => bus.InvokeAsync(command));
In the end here, I want a set of integration tests that work through the /accounts/withdraw endpoint, through all ASP.NET Core middleware, and through all configured Wolverine middleware or policies that wrap around that handler above, and that verify the expected state changes in the underlying Marten Postgresql database as well as any messages that I would expect to go out. And oh, yeah, I’d like those tests to be completely deterministic.
First, a Shared Test Harness
I’m interested in moving back to NUnit for the first time in years, strictly for integration testing, because I suspect it would give you more control over the test fixture lifecycle in ways that are frequently valuable in integration testing.
Now, before writing the actual tests, I’m going to build an integration test harness for this system. I prefer to use xUnit.Net these days as my test runner, so we’re going to start with building what will be a shared fixture to run our application within integration tests. To be able to test through HTTP endpoints, I’m also going to add another JasperFx project named Alba to the testing project (See Alba for Effective ASP.Net Core Integration Testing for more information):
public class AppFixture : IAsyncLifetime
{
public async Task InitializeAsync()
{
// Workaround for Oakton with WebApplicationBuilder
// lifecycle issues. Doesn't matter to you w/o Oakton
OaktonEnvironment.AutoStartHost = true;
// This is bootstrapping the actual application using
// its implied Program.Main() set up
Host = await AlbaHost.For<Program>(x =>
{
// I'm overriding the application's service registrations a little bit just for testing
x.ConfigureServices(services =>
{
// Let's just take any pesky message brokers out of
// our integration tests for now so we can work in
// isolation
services.DisableAllExternalWolverineTransports();
// Just putting in some baseline data for our database
// There's usually *some* sort of reference data in
// enterprise-y systems
services.InitializeMartenWith<InitialAccountData>();
});
});
}
public IAlbaHost Host { get; private set; }
public Task DisposeAsync()
{
return Host.DisposeAsync().AsTask();
}
}
There’s a bit to unpack in that class above, so let’s start:
A .NET IHost can be expensive to set up in memory, so in any kind of sizable system I will try to share one single instance of that between integration tests.
The AlbaHost mechanism is using WebApplicationFactory to bootstrap our application. This mechanism allows us to make some modifications to the application’s normal bootstrapping for test specific setup, and I’m exploiting that here.
The `DisableAllExternalWolverineTransports()` method is a built in extension method in Wolverine that will disable all external sending or listening to external transport options like Rabbit MQ. That’s not to say that Rabbit MQ itself is necessarily impossible to use within automated tests — and Wolverine even comes with some help for that in testing as well — but it’s certainly easier to create our tests without having to worry about messages coming and going from outside. Don’t worry though, because we’ll still be able to verify the messages that should be sent out later.
I’m using Marten’s “initial data” functionality, which is a way of establishing baseline data (usually reference data, but for testing you might also include a baseline set of test user data). For more context, `InitialAccountData` is shown below:
public class InitialAccountData : IInitialData
{
public static Guid Account1 = Guid.NewGuid();
public static Guid Account2 = Guid.NewGuid();
public static Guid Account3 = Guid.NewGuid();
public Task Populate(IDocumentStore store, CancellationToken cancellation)
{
return store.BulkInsertAsync(accounts().ToArray());
}
private IEnumerable<Account> accounts()
{
yield return new Account
{
Id = Account1,
Balance = 1000,
MinimumThreshold = 500
};
yield return new Account
{
Id = Account2,
Balance = 1200
};
yield return new Account
{
Id = Account3,
Balance = 2500,
MinimumThreshold = 100
};
}
}
[CollectionDefinition("integration")]
public class ScenarioCollection : ICollectionFixture<AppFixture>
{
}
I have to look this up every single time I use this functionality.
For integration testing, I like to have a slim base class that I tend to, quite originally, call “IntegrationContext” like this one:
public abstract class IntegrationContext : IAsyncLifetime
{
public IntegrationContext(AppFixture fixture)
{
Host = fixture.Host;
Store = Host.Services.GetRequiredService<IDocumentStore>();
}
public IAlbaHost Host { get; }
public IDocumentStore Store { get; }
public async Task InitializeAsync()
{
// Using Marten, wipe out all data and reset the state
// back to exactly what we described in InitialAccountData
await Store.Advanced.ResetAllData();
}
// This is required because of the IAsyncLifetime
// interface. Note that I do *not* tear down database
// state after the test. That's purposeful
public Task DisposeAsync()
{
return Task.CompletedTask;
}
}
Other than simply connecting real test fixtures to the ASP.Net Core system under test (the IAlbaHost), this IntegrationContext utilizes another bit of Marten functionality to completely reset the database state back to only the data defined by the InitialAccountData so that we always have known data in the database before tests execute.
By and large, I find NoSQL databases to be more easily usable in automated testing than purely relational databases because it’s generally easier to tear down and rebuild databases with NoSQL. When I’m having to use a relational database in tests, I opt for Jimmy Bogard’s Respawn library to do the same kind of reset, but it’s substantially more work to use than Marten’s built in functionality.
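For reference, a Respawn-based reset looks roughly like this sketch, using Respawn’s newer Respawner API with its SQL Server defaults (for Postgresql you’d hand it an open connection and the Postgres adapter), and the ignored table list here is just an example:
private static Respawner _respawner;

// A rough equivalent of Marten's ResetAllData() for a relational database
public static async Task ResetDatabaseAsync(string connectionString)
{
    // Build the Respawner once; it analyzes foreign keys to figure out
    // a safe delete order across tables
    _respawner ??= await Respawner.CreateAsync(connectionString, new RespawnerOptions
    {
        // Keep migration bookkeeping (or other reference data tables) intact
        TablesToIgnore = new Table[] { "__EFMigrationsHistory" }
    });

    // Wipe the data from every other table
    await _respawner.ResetAsync(connectionString);
}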
In the case of Marten, we very purposely designed in the ability to reset the database state for integration testing scenarios from the very beginning. Add this functionality to the easy ability to run the underlying Postgresql database in a local Docker container for isolated testing, and I’ll claim that Marten is very usable within test automation scenarios with no real need to try to stub out the database or use some kind of low fidelity fake in memory database in testing.
See My Opinions on Data Setup for Functional Tests for more explanation of why I’m doing the database state reset before all tests, but never immediately afterward. And also why I think it’s important to place test data setup directly into tests rather than trying to rely on any kind of external, expected data set (when possible).
From my first pass at writing the sample test that’s coming in the next section, I discovered the need for one more helper method on IntegrationContext to make HTTP calls to the system while also tracking background Wolverine activity as shown below:
// This method allows us to make HTTP calls into our system
// in memory with Alba, but do so within Wolverine's test support
// for message tracking to both record outgoing messages and to ensure
// that any cascaded work spawned by the initial command is completed
// before passing control back to the calling test
protected async Task<(ITrackedSession, IScenarioResult)> TrackedHttpCall(Action<Scenario> configuration)
{
IScenarioResult result = null;
// The outer part is tying into Wolverine's test support
// to "wait" for all detected message activity to complete
var tracked = await Host.ExecuteAndWaitAsync(async () =>
{
// The inner part here is actually making an HTTP request
// to the system under test with Alba
result = await Host.Scenario(configuration);
});
return (tracked, result);
}
The method above gives me access to the complete history of Wolverine messages during the activity including all outgoing messages spawned by the HTTP call. It also delegates to Alba to run HTTP requests in memory and gives me access to the Alba wrapped response for easy interrogation of the response later (which I don’t need in the following test, but would frequently in other tests).
See Test Automation Support from the Wolverine documentation for more information on the integration testing support baked into Wolverine.
Writing the first integration test
The first “happy path” test verifies that calling the web service through to the Wolverine message handler withdraws from an account without tripping any of the low balance conditions. It might look like this:
public class when_debiting_an_account : IntegrationContext
{
public when_debiting_an_account(AppFixture fixture) : base(fixture)
{
}
[Fact]
public async Task should_decrease_the_account_balance_happy_path()
{
// Drive in a known data, so the "Arrange"
var account = new Account
{
Balance = 2500,
MinimumThreshold = 200
};
await using (var session = Store.LightweightSession())
{
session.Store(account);
await session.SaveChangesAsync();
}
// The "Act" part of the test.
var (tracked, _) = await TrackedHttpCall(x =>
{
// Send a JSON post with the WithdrawFromAccount command through the HTTP endpoint
// BUT, it's all running in process
x.Post.Json(new WithdrawFromAccount(account.Id, 1300)).ToUrl("/accounts/withdraw");
// This is the default behavior anyway, but still good to show it here
x.StatusCodeShouldBeOk();
});
// Finally, let's do the "assert"
await using (var session = Store.LightweightSession())
{
// Load the newly persisted copy of the data from Marten
var persisted = await session.LoadAsync<Account>(account.Id);
persisted.Balance.ShouldBe(1200); // Started with 2500, withdrew 1300
}
// And also assert that an AccountUpdated message was published as well
var updated = tracked.Sent.SingleMessage<AccountUpdated>();
updated.AccountId.ShouldBe(account.Id);
updated.Balance.ShouldBe(1200);
}
}
The test above follows the basic “arrange, act, assert” model. In order, the test:
Writes a brand new Account document to the Marten database
Makes an HTTP call to POST a WithdrawFromAccount command to our system using the TrackedHttpCall method that also tracks Wolverine activity during the HTTP call
Verifies that the Account data was changed in the database the way we expected
Verifies that an expected outgoing message was published as part of the activity
It was a lot of initial set up to get to the point where we could write tests, but I’m going to argue in the next section that we’ve done a lot to reduce the friction in writing additional integration tests for our system in a reliable way.
Avoiding the Selenium as Golden Hammer Anti-Pattern
Playwright or Cypress.io may prove to be better options than Selenium over time (I’m bullish on Playwright myself), but the main point is really that only depending on end to end tests through the browser can easily be problematic and inefficient.
Before I go back to defending why I think the testing approach and tooling shown in this post is very effective, let’s build up an all too real strawman of inefficient and maybe even ineffective test automation:
All your integration tests are blackbox, end to end tests that use Selenium to drive a web browser
These tests can only be executed externally to the application when the application is deployed to a development or testing environment. In the worst case scenario — which is also unfortunately common — the Selenium tests cannot be easily executed locally on demand
The tests are prone to failures due to UI changes
The tests are prone to intermittent “blinking” failures due to asynchronous behavior in the UI where test assertions happen before actions are completed in the application. This is a source of major friction and poor results in large scale Selenium testing that has been endemic in every single shop or project where I’ve used or seen Selenium used over the past decade — including in my current role.
The end to end tests are slow compared to finer grained unit tests or smaller whitebox integration tests that do not have to use the browser
Test failures are often difficult to diagnose since the tests are running out of process without direct access to the actual application. Some folks try to alleviate this issue with screenshots of the browser or in more advanced usages, trying to correlate the application logs to the test runs
Test failures often happen because related test databases are not in the expected state
I’m laying it on pretty thick here, but I think that I’m getting my point across that only relying on Selenium based browser testing is potentially very inefficient and sometimes ineffective. Now, let’s consider how the “critter stack” tools and the testing approach I used up above solve some of the issues I raised just above:
Postgresql itself is very easy to run in Docker containers or if you have to, to deploy locally. That makes it friendly for automated testing where you really, really want to have isolated testing infrastructure and avoid sharing any kind of stateful resource between testing processes
Marten in particular has built in support for setting up known database states going into automated tests. This is invaluable for integration testing
Executing directly against HTTP API endpoints is much faster than browser testing with something like Selenium. Faster executing tests == faster feedback cycles == better development throughput and delivery period
Running the tests completely in process with the application such as we did with Alba makes debugging test failures much easier for developers than trying to solve Selenium failures in a CI environment
Using the Alba + xUnit.Net (or NUnit etc) approach means that the integration tests can live with the application code and can be executed on demand whenever. That shifts the testing “left” in the development cycle compared to the slower Selenium running on CI only cycle. It also helps developers quickly spot check potential issues.
By embedding the integration tests directly in the codebase, you’re much less likely to get the drift between the application itself and automated tests that frequently arises from Selenium centric approaches.
This approach makes developers be involved with the test automation efforts. I strongly believe that it’s impossible for large scale test automation to work whatsoever without developer involvement
Whitebox tests are simply much more efficient than the blackbox model. This statement is likely to get me yelled at by real testing professionals, but it’s still true
This post took way, way too long to write compared to how I thought it would go. I’m going to make a little bonus follow-up on using Lamar of all things for other random test state resets.