With the release of Wolverine 3.0 last week, we snuck in a small feature at the last minute that was a request from a JasperFx Software customer. Specifically, they had a couple instances of a logical message type that needed to be handled both from Wolverine’s Rabbit MQ message transport, and also from the request body of an HTTP endpoint inside their BFF application.
You can certainly attack this problem a couple of different ways:
Use the Wolverine message handler as a mediator from within an HTTP endpoint. I’m not a fan of this approach because of the extra complexity, but it’s very common in the .NET world of course.
Just delegate from an HTTP endpoint in Wolverine directly to the (in this case) static method message handler. Simpler mechanically, and we’ve done that a few times, but there’s a wrinkle coming of course.
One of the things that Wolverine’s HTTP endpoint model does is let you quickly write little one off validation rules using the ProblemDetails specification, which is great for validations that don’t fit cleanly into Fluent Validation usage (Fluent Validation is also supported by Wolverine for both message handlers and HTTP endpoints). Our client was using that pattern on HTTP endpoints, but wanted to expose the same handling and validation logic as a message handler while still retaining the validation rules and ProblemDetails response for HTTP.
As of the Wolverine 3.0 release last week, you can now use the ProblemDetails logic with message handlers as a one off validation step if you are using Wolverine.Http as well as Wolverine core. Let’s jump right into an example of a class that both handles a message as a message handler in Wolverine and handles the same message body as an HTTP web service, with a custom validation rule using ProblemDetails for the results:
public record NumberMessage(int Number);

public static class NumberMessageHandler
{
    // More likely, these one off validation rules do some kind of database
    // lookup or use other services, otherwise you'd just use Fluent Validation
    public static ProblemDetails Validate(NumberMessage message)
    {
        // Hey, this is contrived, but this is directly from
        // Wolverine.Http test suite code :)
        if (message.Number > 5)
        {
            return new ProblemDetails
            {
                Detail = "Number is bigger than 5",
                Status = 400
            };
        }

        // All good, keep on going!
        return WolverineContinue.NoProblems;
    }

    // Look at this! You can use this as an HTTP endpoint too!
    [WolverinePost("/problems2")]
    public static void Handle(NumberMessage message)
    {
        Debug.WriteLine("Handled " + message);
        Handled = true;
    }

    public static bool Handled { get; set; }
}
What’s significant about this class is that it’s a perfectly valid message handler that will be discovered by Wolverine as such. Because of the presence of the [WolverinePost] attribute, Wolverine.Http will discover this method as well and independently create an ASP.NET Core Endpoint route for it.
If the Validate method returns a non-“No problems” response:
As a message handler, Wolverine will log a JSON serialized value of the ProblemDetails and stop all further processing
As an HTTP endpoint, Wolverine.HTTP will write the ProblemDetails out to the HTTP response, set the status code and content-type headers appropriately, and stop all further processing
Arguably, Wolverine’s entire schtick and raison d’être is to provide a much lower code ceremony development experience than other .NET server side development tools, and I think the code above is a great example of how Wolverine really does this. Wolverine.Http is even able to glean and enhance the OpenAPI metadata created for the endpoint above to reflect the possible 400 status code and application/problem+json content type response. With that in mind, compare the Wolverine approach above to a more typical .NET “vertical slice architecture” approach that is probably using MVC Core controllers or Minimal API registrations, with plenty of OpenAPI-related code noise, to delegate to MediatR message handlers with all of their attendant code ceremony.
Besides lower code ceremony, I’d also point out that the functions you write for Wolverine up above are much more likely to be pure functions and/or synchronous, which makes for much easier unit testing than you get with other tools. Lastly, and I’ll try to show this in a follow up blog post about Wolverine’s middleware strategy, Wolverine’s execution pipeline results in fewer object allocations at runtime than IoC-centric tools like MediatR, MassTransit, or MVC Core / Minimal APIs.
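As a quick illustration of that testability claim, a state-based unit test of the Validate method above needs no infrastructure at all. This is just a sketch, assuming xUnit and the NumberMessageHandler class exactly as shown:

```csharp
using Microsoft.AspNetCore.Mvc;
using Wolverine.Http;
using Xunit;

public class NumberMessageHandlerTests
{
    [Fact]
    public void rejects_numbers_bigger_than_5()
    {
        // Pure function in, ProblemDetails out: no HTTP, no broker
        var problems = NumberMessageHandler.Validate(new NumberMessage(6));

        Assert.Equal(400, problems.Status);
        Assert.Equal("Number is bigger than 5", problems.Detail);
    }

    [Fact]
    public void continues_when_the_number_is_small_enough()
    {
        var problems = NumberMessageHandler.Validate(new NumberMessage(3));

        Assert.Same(WolverineContinue.NoProblems, problems);
    }
}
```

The same two tests cover both the HTTP endpoint and message handler usage, because the validation logic lives in one pure function.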
Just as the title says, Wolverine 3.0 is live and published to Nuget! I believe that this release addresses some of Wolverine’s prior weaknesses and adds some powerful new features requested by our users. The journey for Wolverine right now is to be the single most effective set of tooling for building robust, maintainable, and testable server side code in the .NET ecosystem. If you’re wondering about the value proposition of Wolverine as any combination of mediator, in process message bus, asynchronous messaging framework, or alternative HTTP web service framework, it’s that Wolverine will help you be successful with substantially less code, because Wolverine does much more than other comparable .NET tooling to simplify the code inside of your message handlers or HTTP endpoint methods.
Enough of the salesmanship, before I go any farther, let me thank quite a few folks for their contributions to Wolverine:
Babu Annamalai
JT for all his work on Rabbit MQ for this release and a whole host of other contributions to the “Critter Stack” including leveling us up on Discord usage
Jesse for making quite a few suggestions that wound up being usability improvements
Haefele for his contributions
Erik Shafer for helping with project communications
JasperFx Software‘s clients across the globe for making it possible for me to work on the “Critter Stack” and push it forward (a lot of features and functionality in this release were built at the behest of JasperFx clients)
And finally, even though this doesn’t show up in GitHub contributor numbers sometimes, everyone who has taken the time to write up actionable bug reports or feature requests. That is an absolutely invaluable element of successful OSS community projects
The major new features or changes in this release are:
Wolverine is no longer directly coupled to Lamar and can now be used with at least ServiceProvider and theoretically any other IoC tool that conforms to the .NET DI standards — but I’d highly recommend that you stick to the well lit paths of ServiceProvider or Lamar. Not that many people cared, but the ones who did cared about this a lot
You can now bootstrap Wolverine with HostApplicationBuilder or any .NET bootstrapper that supports IServiceCollection somehow, some way. Wolverine is no longer limited to only IHostBuilder
Wolverine’s leadership election and node assignment subsystem got a pretty substantial overhaul. The result is much simpler code and far, far better behavior and reliability. This was arguably the biggest weakness of Wolverine < 3.0
“Sticky” message handling when you need to handle a single message type in multiple handlers with “sticky” assignments to particular queues or listeners.
An option for RavenDb persistence, including the transactional inbox/outbox, scheduled messaging, and saga persistence
Additions to the Rabbit MQ support including the ability to use header exchanges
Lightweight saga storage for either PostgreSQL or SQL Server that works without either Marten or EF Core
And plenty of small “reduce paper cuts and repetitive code” changes here and there. The documentation website also got some review and refinement as well.
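To show what that new bootstrapping flexibility looks like, here’s a minimal sketch of standing up Wolverine against HostApplicationBuilder rather than IHostBuilder (treat the exact extension method overload as an assumption on my part):

```csharp
using Microsoft.Extensions.Hosting;
using Wolverine;

var builder = Host.CreateApplicationBuilder(args);

// As of Wolverine 3.0, this no longer requires IHostBuilder
builder.UseWolverine(opts =>
{
    // Wolverine configuration goes here...
});

using var host = builder.Build();
await host.RunAsync();
```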
What’s next, because there’s always a next…
There will be bug reports, and we’ll try to deal with them as quickly as possible. There’s a GCP PubSub transport option brewing in the community that may land soon. It’s somewhat likely there will be a CosmosDb integration for Wolverine message storage, sagas, and scheduled messages this year. There were some last minute scope cuts for productivity’s sake that may be addressed with follow up releases to Wolverine 3.0, but more likely in 4.0.
Mostly though, Wolverine 3.0 might be somewhat short lived, as Wolverine 4.0 work (and Marten 8) will hopefully start as early as next week as the “Critter Stack” community and JasperFx Software try to implement what I’ve been calling the “Critter Stack 2025” goals heading into the 1st quarter of 2025.
I’m logging off for the rest of the night (at least from work), and I know there’ll be a list of questions or problems in the morning (the joy of being 5-7 hours behind most of your users and clients), but for now:
I’m working with a JasperFx Software client who is in the beginning stages of building a pretty complex, multi-step file import process that is going to involve several different services. For the sake of example code in this post, let’s say that we have the (much simplified from my client’s actual logical workflow) workflow from the diagram above:
External partners (or customers) are sending us an Excel sheet with records that our system will need to process and utilize within our downstream systems (invoices? payments? people? transactions?)
For the sake of improved throughput, the incoming file is broken into batches of records so the smaller batches can be processed in parallel
Each batch needs to be validated by the “Validation Service”
When each batch has been completely validated:
If there are any errors, send a rejection summary about the entire file to the original external partner
If there are no errors, try to send each record batch to “Downstream System #1”
When each batch has been completely accepted or rejected by “Downstream System #1”:
If there are any rejections, send a rejection summary about the entire file to the original external partner
If all batches are accepted by “Downstream System #1”, try to send each record batch to “Downstream System #2”
When each batch has been completely accepted or rejected by “Downstream System #2”:
If there are any rejections, send a rejection summary about the entire file to the original external partner and a message to “Downstream System #1” to reverse the previously accepted records in the file
If all batches are accepted by “Downstream System #2”, send a successful receipt message to the original external partner and archive the intermediate state
Right off the bat, I think we can identify a couple needs and challenges:
We need some way to track the current, in process state of an individual file and where all the various batches are in that process
At every point, make decisions about what to do next in the workflow based on the current state of the file and its incremental progress. And to make this as clear as possible, I think it’s extremely valuable to be able to clearly write, read, unit test, and reason about this workflow code without any significant coupling to the surrounding infrastructure.
The whole system should be resilient in the face of expected transient hiccups like a database getting overwhelmed or a downstream system being temporarily down, and “work” should never get lost, and ideally should never require human intervention at runtime
Especially for large files, we absolutely better be prepared for some challenging concurrency issues when lots of incoming messages attempt to update that central file import processing state
Make it all performant too, of course!
Alright, so we’re definitely using both Marten for persistence and Wolverine for the workflow and messaging between services for all of this. The first basic approach for the state management is to use Wolverine’s stateful saga support with Marten. In that case we might have a saga type in Marten something like this:
// Again, express the stages in terms of your
// business domain instead of technical terms,
// but you'll do better than me on this front!
public enum FileImportStage
{
    Validating,
    Downstream1,
    Downstream2,
    Completed
}

// As long as it's JSON serialization friendly, you can happily
// tighten up the access here all you want, but I went for quick and simple
public class FileImportSaga :
    // Only necessary marker type for Wolverine here
    Saga,
    // Opts into tracked version concurrency for Marten
    // that we probably want in this case
    IRevisioned
{
    // Identity for this saga within our system
    public Guid Id { get; set; }

    public string FileName { get; set; }
    public string PartnerTrackingNumber { get; set; }
    public DateTimeOffset Created { get; set; } = DateTimeOffset.UtcNow;

    public List<RecordBatchTracker> RecordBatches { get; set; } = new();

    public FileImportStage Stage { get; set; } = FileImportStage.Validating;

    // Much more in just a bit...
}
Inside our system, we can start a new FileImportSaga and launch the first set of messages to validate each batch of records with this handler that reacts to a request to import a new file:
public record ImportFile(string fileName);

// This could have been done inside the FileImportSaga as well,
// but I think I'd rather keep that focused on the state machine
// and workflow logic
public static class FileImportHandler
{
    public static async Task<(FileImportSaga, OutgoingMessages)> Handle(
        ImportFile command,
        IFileImporter importer,
        CancellationToken token)
    {
        var saga = await importer.ReadAsync(command.fileName, token);

        var messages = new OutgoingMessages();
        messages.AddRange(saga.CreateValidationMessages());

        return (saga, messages);
    }
}

public interface IFileImporter
{
    Task<FileImportSaga> ReadAsync(string fileName, CancellationToken token);
}
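Because the handler returns the new saga and its outgoing messages as a tuple instead of calling into infrastructure directly, it can be unit tested with nothing but a stubbed IFileImporter. A hypothetical sketch, assuming xUnit (the stub and file name are made up for illustration):

```csharp
using Xunit;

public class FileImportHandlerTests
{
    // Hand-rolled stub; a mocking library would work just as well
    private class StubImporter : IFileImporter
    {
        public FileImportSaga Saga { get; } = new();

        public Task<FileImportSaga> ReadAsync(string fileName, CancellationToken token)
            => Task.FromResult(Saga);
    }

    [Fact]
    public async Task returns_the_new_saga_and_its_validation_messages()
    {
        var importer = new StubImporter();

        var (saga, messages) = await FileImportHandler.Handle(
            new ImportFile("incoming.xlsx"), importer, CancellationToken.None);

        // Wolverine itself will persist the saga and send the messages,
        // but this test never touches a database or a broker
        Assert.Same(importer.Saga, saga);
        Assert.NotNull(messages);
    }
}
```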
Let’s say that we’re receiving messages back from the Validation Service like this:
public record ValidationResult(Guid Id, Guid BatchId, ValidationMessage[] Messages);
public record ValidationMessage(int RecordNumber, string Message);
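The RecordBatchTracker type used inside the saga isn’t shown in this post. Purely for illustration, here’s a hypothetical sketch that’s consistent with how the saga code below uses it (the exact shape, the Records type, and the rule that any validation message rejects the whole batch are all assumptions on my part):

```csharp
public enum RecordStatus
{
    Pending,
    Accepted,
    Rejected
}

public class RecordBatchTracker
{
    public Guid Id { get; set; } = Guid.NewGuid();

    public RecordStatus ValidationStatus { get; set; } = RecordStatus.Pending;

    // The raw records for this batch, passed along to downstream systems
    public object[] Records { get; set; } = Array.Empty<object>();

    public void ReadValidationResult(ValidationResult result)
    {
        // Assumption: any validation message at all rejects the whole batch
        ValidationStatus = result.Messages.Any()
            ? RecordStatus.Rejected
            : RecordStatus.Accepted;
    }
}
```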
Quick note: if Wolverine is handling the messaging in the downstream systems, it helps make this easier by tracking the saga id in message metadata from upstream to downstream and back to the upstream through response messages. Otherwise you’d have to track the saga id on the incoming messages yourself.
We could process the validation results in our saga one at a time like so:
// Use Wolverine's cascading message feature here for the next steps
public IEnumerable<object> Handle(ValidationResult validationResult)
{
    var currentBatch = RecordBatches
        .FirstOrDefault(x => x.Id == validationResult.BatchId);

    // We'd probably rig up Wolverine error handling so that it either discards
    // a message in this case or immediately moves it to the dead letter queue
    // because there's no sense in trying to retry a message that can never be
    // processed successfully
    if (currentBatch == null) throw new UnknownBatchException(Id, validationResult.BatchId);

    currentBatch.ReadValidationResult(validationResult);

    var currentValidationStatus = determineValidationStatus();
    switch (currentValidationStatus)
    {
        case RecordStatus.Pending:
            yield break;

        case RecordStatus.Accepted:
            Stage = FileImportStage.Downstream1;
            foreach (var batch in RecordBatches)
            {
                yield return new RequestDownstream1Processing(Id, batch.Id, batch.Records);
            }

            break;

        case RecordStatus.Rejected:
            // This saga is complete
            MarkCompleted();

            // Tell the original sender that this file is rejected
            // I'm assuming that Wolverine will get the right information
            // back to the original sender somehow
            yield return BuildRejectionMessage();
            break;
    }
}
private RecordStatus determineValidationStatus()
{
    if (RecordBatches.Any(x => x.ValidationStatus == RecordStatus.Pending))
    {
        return RecordStatus.Pending;
    }

    if (RecordBatches.Any(x => x.ValidationStatus == RecordStatus.Rejected))
    {
        return RecordStatus.Rejected;
    }

    return RecordStatus.Accepted;
}
First off, I’m going to argue that the way Wolverine supports its stateful sagas and its cascading message feature makes the workflow logic pretty easy to unit test in isolation from all the infrastructure. That part is good, right? But what’s maybe not great is that we could easily be getting a bunch of those ValidationResult messages back for the same file at the same time, because they’re handled in parallel, so we really need to be prepared for that.
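To make that testability argument concrete, here’s the kind of state-based test I mean. It’s a sketch assuming xUnit, that RecordBatchTracker exposes a settable ValidationStatus, and that a ValidationResult carrying no validation messages marks its batch as Accepted (all assumptions on my part):

```csharp
using Xunit;

public class FileImportSagaTests
{
    [Fact]
    public void kicks_off_downstream1_when_the_last_batch_validates_cleanly()
    {
        var saga = new FileImportSaga
        {
            RecordBatches =
            {
                new RecordBatchTracker { ValidationStatus = RecordStatus.Accepted },
                new RecordBatchTracker { ValidationStatus = RecordStatus.Pending }
            }
        };

        var lastBatch = saga.RecordBatches[1];

        // Pass a message in, then check the saga state and the cascading
        // messages coming out. No database or broker anywhere in sight.
        var messages = saga
            .Handle(new ValidationResult(saga.Id, lastBatch.Id, Array.Empty<ValidationMessage>()))
            .ToList();

        Assert.Equal(FileImportStage.Downstream1, saga.Stage);
        Assert.NotEmpty(messages);
        Assert.All(messages, m => Assert.IsType<RequestDownstream1Processing>(m));
    }
}
```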
We could rely on the Wolverine/Marten combination’s support for optimistic concurrency and just retry ValidationResult messages that fail with a ConcurrencyException, but that’s potentially thrashing the database and the application pretty hard. We could also solve this problem in a “sledgehammer to crack a nut” kind of way by using Wolverine’s strictly ordered listener approach, which would force the file import status messages to be processed in order on a single running node:
builder.Host.UseWolverine(opts =>
{
    opts.UseRabbitMq(builder.Configuration.GetConnectionString("rabbitmq"));

    opts.ListenToRabbitQueue("file-import-updates")
        // Single file, serialized access across the
        // entire running application cluster!
        .ListenWithStrictOrdering();
});
That solves the concurrency issue in a pretty hardcore way, but it’s not going to be terribly performant, because you’ve eliminated all concurrency between different files and you’re making the system constantly load and then save the FileImportSaga data for intermediate steps. Let’s adjust this and incorporate Wolverine’s new message batching feature.
First off, let’s add a new validation batch message like so:
public record ValidationResultBatch(Guid Id, ValidationResult[] Results);
And a new message handler on our saga type for that new message type:
public IEnumerable<object> Handle(ValidationResultBatch batch)
{
    var groups = batch.Results.GroupBy(x => x.BatchId);
    foreach (var group in groups)
    {
        var currentBatch = RecordBatches
            .FirstOrDefault(x => x.Id == group.Key);

        // Same story as the single message handler: an unknown batch id
        // is a message that can never be processed successfully
        if (currentBatch == null) throw new UnknownBatchException(Id, group.Key);

        foreach (var result in group)
        {
            currentBatch.ReadValidationResult(result);
        }
    }

    return DetermineNextStepsAfterValidation();
}

// I pulled this out as a helper, but also, it's something
// you probably want to unit test in isolation on just the FileImportSaga
// class to nail down the workflow logic w/o having to do an integration
// test
public IEnumerable<object> DetermineNextStepsAfterValidation()
{
    var currentValidationStatus = determineValidationStatus();
    switch (currentValidationStatus)
    {
        case RecordStatus.Pending:
            yield break;

        case RecordStatus.Accepted:
            Stage = FileImportStage.Downstream1;
            foreach (var batch in RecordBatches)
            {
                yield return new RequestDownstream1Processing(Id, batch.Id, batch.Records);
            }

            break;

        case RecordStatus.Rejected:
            // This saga is complete
            MarkCompleted();

            // Tell the original sender that this file is rejected
            // I'm assuming that Wolverine will get the right information
            // back to the original sender somehow
            yield return BuildRejectionMessage();
            break;
    }
}
And lastly, we need to tell Wolverine how to do the message batching, which I’ll do first with this code:
public class ValidationResultBatcher : IMessageBatcher
{
    public IEnumerable<Envelope> Group(IReadOnlyList<Envelope> envelopes)
    {
        var groups = envelopes
            .GroupBy(x => x.Message.As<ValidationResult>().Id)
            .ToArray();

        foreach (var group in groups)
        {
            // Note that we have to pull the message out of each Envelope
            // before casting down to ValidationResult
            var message = new ValidationResultBatch(group.Key,
                group.Select(x => x.Message).OfType<ValidationResult>().ToArray());

            // It's important here to pass along the group of envelopes that make up
            // this batched message for Wolverine's transactional inbox/outbox
            // tracking
            yield return new Envelope(message, group);
        }
    }

    public Type BatchMessageType => typeof(ValidationResultBatch);
}
Then lastly, in your Wolverine configuration in your Program file (or a helper method that’s called from Program), you’d tell Wolverine about the batching strategy like so:
builder.Host.UseWolverine(opts =>
{
    // Other Wolverine configuration...

    opts.BatchMessagesOf<ValidationResult>(x =>
    {
        x.Batcher = new ValidationResultBatcher();
        x.BatchSize = 100;
    });
});
With the message batching, you’re potentially putting less load on the database and improving performance by simply making fewer reads and writes overall. You might still have some concurrency concerns, so you have more options to control the parallelization of the ValidationResultBatch messages running locally, like this in your UseWolverine() configuration:
opts.LocalQueueFor<ValidationResultBatch>()
    // You *could* do this to completely prevent
    // concurrency issues
    .Sequential()

    // Or depend on some level of retries on concurrency
    // exceptions and let it parallelize work by file
    .MaximumParallelMessages(5);
We could choose to accept some risk of concurrent access to an individual FileImportSaga (unlikely after the batching, but still), so let’s add some better optimistic concurrency checking with our friend Marten. For any given Saga type that’s persisted with Marten, just implement the IRevisioned interface to let Wolverine know to opt into Marten’s concurrency protection like so:
public class FileImportSaga :
    // Only necessary marker type for Wolverine here
    Saga,
    // Opts into tracked version concurrency for Marten
    // that we probably want in this case
    IRevisioned
That’s it, that’s all you need to do. What this does for you is create a check by Wolverine &amp; Marten together that, during the processing of any message on a FileImportSaga, no other message was successfully processed against that FileImportSaga between loading the initial copy of the saga and the time the transaction is committed. If Marten detects a concurrency violation upon the commit, it rejects the transaction and throws a ConcurrencyException. We can handle that with a series of retries to just have Wolverine retry the message from the new state, using this error handling policy that I’m going to make specific to our FileImportSaga like so:
public class FileImportSaga :
    // Only necessary marker type for Wolverine here
    Saga,
    // Opts into tracked version concurrency for Marten
    // that we probably want in this case
    IRevisioned
{
    public static void Configure(HandlerChain chain)
    {
        // Retry the message up to 3 more times
        // with the specified wait times
        chain.OnException<ConcurrencyException>()
            .RetryWithCooldown(100.Milliseconds(), 250.Milliseconds(), 250.Milliseconds());
    }

    // ... the rest of FileImportSaga
}
So now we’ve got the beginnings of a multi-step process using Wolverine’s stateful saga support. We’ve also taken some care to protect our file import process against concurrency concerns. And we’ve done all of this in a way where we can quite handily test the workflow logic by just doing state-based tests against the FileImportSaga with no database or message broker infrastructure in sight before we waste any time trying to debug the whole shebang.
Summary
The key takeaway I hope you get from this is that the full Critter Stack has some significant tooling to help you build complex, multi-step workflows. Pair that with the easy getting started stories that both tools have, and I think you have a toolset that allows you to quickly start while also scaling up to more complex needs when you need that.
As so very often happens, this blog post got bigger than I thought it would be, and I’m breaking it up into a series of follow ups. In the next post in this series, we’ll take the same logical FileImportSaga and do the logical workflow tracking with Marten event sourcing to track the state, using some cool new Marten functionality for the workflow logic inside of Marten projections.
This might take a bit to get to, but I’ll also revisit this original implementation and talk about some extra Marten functionality to further optimize performance by baking in archiving through Marten soft-deletes and its support for PostgreSQL table partitioning.
So historically I’m actually pretty persnickety about being precise about technical terms and design pattern names, but I’m admittedly sloppy about calling something a “Saga” when maybe it’s technically a “Process Manager” and I got jumped online about that by a celebrity programmer. Sorry, not sorry?
The feature set shown in this post was built earlier this year at the behest of a JasperFx Software client who has some unusually high data throughput and wanted to have some significant ability to scale up Marten and Wolverine‘s ability to handle a huge number of incoming events. We originally put this into what was meant to be a paid add on product, but after consultation with the rest of the Critter Stack core team and other big users, we’ve decided that it would be best for this functionality to be in the OSS core of Wolverine.
JasperFx Software is currently working with a client who has a system with around 75 million events in their database and the expectation that that database could double in size soon. At the same time, they need around 15-20 different event projections running continuously and asynchronously to build read side views. To put it mildly, they’re going to want some serious ability for Marten (with a possible helping hand from Wolverine) to handle that data in a performant manner.
Before Marten 7.0, Marten could only run projections with a “hot/cold” ownership mode that resulted in every possible projection running on a single application node within the cluster. So, not that awesome for scalability to say the least. With 7.0, Marten can do some load distribution of different projections, but it’s not terribly predictable and has no guarantee of spreading the load out.
opts.Services.AddMarten(m =>
    {
        m.DisableNpgsqlLogging = true;

        m.Connection(Servers.PostgresConnectionString);
        m.DatabaseSchemaName = "csp";

        // This was taken from Wolverine test code
        // Imagine there being far more projections and
        // subscriptions
        m.Projections.Add<TripProjection>(ProjectionLifecycle.Async);
        m.Projections.Add<DayProjection>(ProjectionLifecycle.Async);
        m.Projections.Add<DistanceProjection>(ProjectionLifecycle.Async);
    })
    .IntegrateWithWolverine(m =>
    {
        // This makes Wolverine distribute the registered projections
        // and event subscriptions evenly across a running application
        // cluster
        m.UseWolverineManagedEventSubscriptionDistribution = true;
    });
Using the UseWolverineManagedEventSubscriptionDistribution() option in place of Marten’s own async daemon management will give you a load distribution more like this:
Using this model, Wolverine can spread the asynchronous load to more running nodes so you can hopefully get a lot more throughput in your asynchronous projections without overloading any one node.
With this option, Wolverine is going to ensure that every single known asynchronous event projection and every event subscription is running on exactly one node within your application cluster. Moreover, Wolverine will purposely stop and restart projections or subscriptions in order to spread the running load across your entire cluster of nodes.
In the case of using multi-tenancy through separate databases per tenant with Marten, this Wolverine “agent distribution” will assign the work by tenant databases, meaning that all the running projections and subscriptions for a single tenant database will always be running on a single application node. This was done with the theory that this affinity would hopefully reduce the number of used database connections over all.
If a node is taken offline, Wolverine will detect that the node is no longer accessible and try to start the missing projection/subscription agents on another active node.
If you run your application on only a single server, Wolverine will of course run all projections and subscriptions on just that one server.
Some other facts about this integration:
Wolverine’s agent distribution does indeed work with per-tenant database multi-tenancy
Wolverine does automatic health checking at the running node level so that it can fail over assigned agents
Wolverine can detect when new nodes come online and redistribute work
Wolverine is able to support blue/green deployment and only run projections or subscriptions on active nodes where the required capability is present. This just means that you can add all new projections or subscriptions, or even just new versions of a projection or subscription, on some application nodes in order to try “blue/green deployment.”
This capability does depend on Wolverine’s built-in leadership election — which fortunately got a lot better in Wolverine 3.0
Future Plans
While this functionality will be in the OSS core of Wolverine 3.0, we plan to add quite a bit of support to further monitor and control this feature with the planned “Critter Watch” management console tool we (JasperFx) are building. We’re planning to allow users to:
Visualize and monitor which projections and/or subscriptions are running on which application node
See a correlation to performance metrics being emitted to the Open Telemetry tool of your choice — with Prometheus PromQL compatible tools being supported first
Be able to create affinity groups between projections or subscriptions that might be using the same event data as a possible optimization
Allow individual projections or subscriptions to be paused or restarted
Trigger manual projection rebuilds at runtime
Trigger “rewinds” of subscriptions at runtime
We’re also early in planning to port the Marten event sourcing support to additional database engines. The above functionality will be available for those other database engines when we get there.
This functionality was originally conceived something like 5-6 years ago, and it’s personally very exciting to me to finally see it out in the wild!
I’ve been helping a JasperFx Software client with their test automation strategy on a new web application and surrounding suite of services. That makes this a perfectly good time to reevaluate how I think teams can succeed with automated testing as an update to what I thought a decade ago. I think you can justifiably describe this post as a stream of consciousness brain dump with just a modicum of editing.
Psych, it took me about four months to actually finish this post, but they’re doing fine as is!
First off, let’s talk about the desirable qualities of a successful test automation strategy.
The backing automated test suite gives you enough confidence to know when your code can be shipped. Mind you, this isn’t about 100% test coverage, because that’s rarely practical or cost effective. Instead, this is the feeling that there is an acceptably low risk of problems when we deploy if the automated tests are all currently passing. And sorry, I don’t have a hard and fast number to put on that “feeling,” but hopefully you could arrive at one over time by tracking the actual rate of defects from releases.
It’s mechanically easy enough to write the automated tests for your system that the effort in doing so pays off. To some degree you can improve this equation by purposely choosing development tools that lend themselves to automated testing (like Marten and PostgreSQL!). Otherwise, you can also improve the value of the automated tests through some judicious usage of custom testing harnesses or possibly using BDD tools (like Gherkin, but I’ve also had success from time to time with old FIT/FitNesse style testing or even just some one off internal DSL tools) that might make the tests be more declarative.
The automated tests run fast enough to give us an effective feedback cycle — but that’s admittedly 100% subjective. If the tests are too slow, folks won’t run them often enough for the tests to be truly helpful, and the tests will tend to drift apart from the code. In an ideal world, the tests are running often enough that regression test failures are caught at nearly the same time as the code change that introduced the regression, so your teams have an easier time diagnosing the regression problems.
The automated tests are reliable, just meaning that there’s little to no flakiness and you can generally trust the test results as really being a success or failure. User interface testing or any testing involving asynchronous processes are notoriously hard to do reliably, and the flakiness can be a very real problem. Given a choice between having technically more test coverage of a system and the existing test suites being more reliable, I will purposely choose to delete flaky tests as a compromise if it’s not feasible to improve or rewrite the flaky tests first.
Now let’s talk about how to get to the qualities above by covering both some squishy people oriented process stuff and hard technical approaches that I think help lead to better results.
The test automation engineers should ideally be just part of the development team. It takes a lot of close collaboration between developers and test automation engineers to make a test automation strategy actually work. Most of the, let’s nicely say, less successful test automation efforts I’ve seen over time have been at least partially caused by insufficient collaboration between developers and test automation engineers.
There’s always been a sizable backlash and general weariness in regards to Agile Software methods (and was from the very beginning as I recall), but one thing early Agile methods like Extreme Programming got absolutely right was an emphasis on self-contained teams where everybody’s goal is to ship software rather than being narrow specialists on separate teams who only worried about writing code or testing or designing. Or as the Lean Development folks told us, look to optimize the whole process of shipping software rather than any one intermediate deliverable or artifact.
In practice, this “optimize the whole” probably means that developers are full participants in the automated testing, whether that’s simply adjusting the system to make testing easier (especially if your shop is going to make any investment into automating tests through the user interface) or getting their hands dirty helping write “sociable” integration tests. “Optimize the whole” to me means that it’s absolutely worth developers’ time to help with test automation efforts and to even purposely make changes in the system architecture to facilitate easier testing if that extra work still results in shipping software faster through quicker testing.
Use the fastest feedback cycle that adequately tests whatever it is you’re trying to test. I’m sure many of you have seen some form of the test automation pyramid:
We could have a debate about exactly what mix of “solitary” unit tests to “sociable” integration tests to end to end, or truly black box end to end tests is ideal in any given situation, but I think the guiding rule is what I referred to years ago as Jeremy’s Only Rule of Testing:
Test with the finest grained mechanism that tells you something important
Let’s make this rule more concrete by considering a few cases and how we might go about automating testing.
First, let’s say that we have a business rule stating that an attempt to overdraft a banking account should be rejected when the account doesn’t allow overdrafts. That’s worth an integration test of some sort too, but I’d absolutely vote first for pretty isolated unit tests against just the business logic that doesn’t involve any kind of database or user interface.
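To make that concrete, here’s a minimal sketch of what such a solitary unit test might look like. The Account class, its overdraft rule, and the test itself are all hypothetical stand-ins for real domain logic:

```csharp
using System;
using Xunit;

// Hypothetical domain types, purely to illustrate the testing approach
public record WithdrawalResult(bool Accepted, string? Reason);

public class Account
{
    public decimal Balance { get; set; }
    public bool AllowsOverdraft { get; set; }

    public WithdrawalResult Withdraw(decimal amount)
    {
        // The business rule under test: no overdrafts unless allowed
        if (amount > Balance && !AllowsOverdraft)
        {
            return new WithdrawalResult(false, "Overdraft not allowed");
        }

        Balance -= amount;
        return new WithdrawalResult(true, null);
    }
}

// A "solitary" unit test -- no database, no HTTP, just the rule itself
public class AccountTests
{
    [Fact]
    public void reject_overdraft_when_not_allowed()
    {
        var account = new Account { Balance = 100m, AllowsOverdraft = false };

        var result = account.Withdraw(150m);

        Assert.False(result.Accepted);
        Assert.Equal(100m, account.Balance);
    }
}
```

Because the rule lives in plain code, a test like this runs in microseconds and fails with an obvious message — exactly the kind of feedback cycle the pyramid favors at its base.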
On the other hand, one of my clients is utilizing GraphQL between their front end React.js components and the backend. In that case, you won’t really know for sure that the GraphQL sent from the TypeScript client works correctly with the .NET backend without some end to end tests — which is what they are doing with Playwright. All the same though, we did come up with a recipe for testing the GraphQL endpoints in isolation from the HTTP request level down to the database as a way of testing the database wiring. I’d say that these two types of testing are highly complementary, as is testing business logic elements within their GraphQL mutations without the database. One point I recommended to these clients is to move toward, or at least add, more granular tests of some sort anytime the end to end tests are hard to debug in the case of test failures. In simpler terms, excessive trouble debugging problems is probably an indication that you need more fine-grained tests.
Before I get out of this section, let’s just pick on Selenium overuse here as the absolute scourge of successful test automation in the wild (my client is going with Playwright for browser testing instead which would have been my recommendation anyway). End to end tests using Selenium to drive a web browser are naturally much slower and often more work to write than more focused white box integration tests or isolated unit tests would be — not to mention frequently much less reliable. For that reason, I’m personally a big fan of using white box integration tests much more than end to end, black box tests. Living in server side .NET, that to me means testing a lot more at the message handler level, or at the HTTP endpoint level (which is what Alba does for JasperFx clients and Wolverine.HTTP itself).
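As a sketch of what that white box, HTTP-level testing can look like, here’s a hypothetical Alba scenario test. The endpoint route, request payload, and expected status code are illustrative assumptions, not a real API:

```csharp
using System.Threading.Tasks;
using Alba;
using Xunit;

public class TodoEndpointTests
{
    [Fact]
    public async Task create_todo_happy_path()
    {
        // Boots the real application in-process -- no browser,
        // no Selenium, but still exercising routing, middleware,
        // serialization, and the endpoint code itself
        await using var host = await AlbaHost.For<Program>();

        await host.Scenario(x =>
        {
            x.Post.Json(new { Name = "Mow the lawn" }).ToUrl("/todoitems");
            x.StatusCodeShouldBe(201);
        });
    }
}
```

Tests at this level are typically an order of magnitude faster and more reliable than driving a browser, while still covering most of what an end to end test would.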
The test automation code should be in the same repository as the application code.
I’m not sure why this would be even remotely controversial, but I’ve frequently seen it both together with the system code and in completely separate repositories.
As a default approach, the test automation code should be written in the same language as the application code — with a preference for the server side language. I think this would be the first place I’d compromise though because there are so many testing tools that are coupled to the JavaScript world, so maybe never mind this one:)
It’s very advantageous for any automated integration tests to be easily executed locally by developers on demand. What I mean by this is that developers can easily take their current development branch, and run any part of the automated test suite on demand against their current code. There’s a couple major advantages when you can do this:
When tests are broken, and they will be, being able to run the tests locally is a much faster feedback cycle for investigating why the tests are broken than it would be to only be able to run the tests by deploying to a build or test server
It’s very helpful to be able to use automated tests to jump right into debugger session against the code
Developers will be much more likely to help keep the tests up to date with the system code if they at least occasionally run the tests themselves
It’s helpful to use the big end to end tests as a safety net for bigger restructuring work
I’ve seen multiple shops where the end to end tests were written by test automation engineers in a black box manner where the test suites could basically only be executed on centralized test servers and sometimes even only through CI (Continuous Integration) servers. That situation doesn’t seem to ever lead to successful test automation efforts.
Automated tests should be what old colleagues and I called “self-contained” tests. All I mean by this is that I want automated tests to be responsible for setting up the system state for the test within the expression of the test. You want to do this in my opinion for two reasons:
It will make the tests be much more reliable because you can count on the system being in the exact right state for the test
Having the system state set up by the test itself hopefully makes it easier to reason about the test itself and how the system state, action, and assertions all relate to each other
As an alternative, think about tests that depend on some kind of external script setting up a database through a shared data set. From experience, I can tell you that’s often very hard to reason about a failing test when you can’t easily see the test inputs.
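Here’s a rough sketch of the “self-contained” style with Marten. Everything here — the Todo document, the connection string, the workflow — is illustrative; the point is simply that the test writes the exact state it depends on before acting on it:

```csharp
using System;
using System.Threading.Tasks;
using Marten;
using Xunit;

// Hypothetical document type for this sketch
public class Todo
{
    public Guid Id { get; set; } = Guid.NewGuid();
    public string Name { get; set; } = "";
    public bool IsComplete { get; set; }
}

public class CompleteTodoTests
{
    [Fact]
    public async Task complete_todo_marks_it_as_done()
    {
        using var store = DocumentStore.For("Host=localhost;Database=testing;Username=postgres;Password=postgres");

        // Arrange: the test writes its own inputs instead of relying
        // on a shared, externally scripted data set
        var todo = new Todo { Name = "Write blog post" };
        await using (var session = store.LightweightSession())
        {
            session.Store(todo);
            await session.SaveChangesAsync();
        }

        // Act: in a real test this is where you'd invoke the message
        // handler or HTTP endpoint under test against that state

        // Assert against exactly the state this test created
        await using var query = store.QuerySession();
        var loaded = await query.LoadAsync<Todo>(todo.Id);
        Assert.NotNull(loaded);
    }
}
```

The inputs, the action, and the assertions are all visible in one place, which makes a failing test far easier to reason about.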
No shared databases if you can help it. Again, this isn’t something I think should be controversial in the year 2024. You can easily get some horrendous false positives or false negatives from trying to execute automated tests against a shared database. Given even remotely a choice, I want an isolated database for each developer, tester, or formal testing environment to have isolated test data setup. This does put some onus on teams to have effective database scripting automation — but you want that anyway.
My preference these days is to rely hard on technologies that are friendly to being part of integration tests, which usually means some combination of being easy for developers to run locally and being relatively easy to configure or setup expected state in code within test harnesses. One of the reasons Marten exists in the first place was to have a NoSQL type workflow in development while being able to very easily spin up new databases and to quickly tear down database state between automated test runs.
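As a small illustration of that workflow, Marten’s Advanced.ResetAllData() can wipe document and event data between test runs while leaving the generated schema in place (the connection string is a placeholder):

```csharp
using Marten;

using var store = DocumentStore.For("Host=localhost;Database=testing;Username=postgres;Password=postgres");

// Deletes all user data but keeps the database schema that Marten
// built, so every test run starts quickly from a known, empty state
await store.Advanced.ResetAllData();
```

This kind of fast, code-driven teardown is exactly what makes a database technology friendly to integration testing.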
Given a choice — and you won’t always have that choice, so don’t get too excited here — I strongly prefer to use technologies that have a great local development and testing story over “Cloud only” technologies. If you do need to utilize Cloud only technology (Azure Service Bus being a common example of that in my recent experience), you can ameliorate the problems that causes for testing by somehow letting each developer or testing environment get their own namespace or some other kind of resource isolation like prefixed resource names per environment. The point here is that automated testing always goes better when you have predictable system inputs that you can expect to lead to expected outcomes in tests. Using any kind of shared resource can sometimes lead to untrustworthy test results.
Older Writings
I’ve written a lot about automated testing over the years, and this post admittedly overlaps with a lot of previous writing — but it’s also kind of fun to see what has or hasn’t evolved in my own thinking:
I realize the title sounds a little too similar to somebody else’s 2025 platform proposals, but let’s please just overlook that
This is a “vision board” document I wrote up and shared with our core team (Anne, JT, Babu, and Jeffry) as well as some friendly users and JasperFx Software customers. I dearly want to step foot into January 2025 with the “Critter Stack” as a very compelling choice for any shop about to embark on any kind of Event Driven Architecture — especially with the usage of Event Sourcing as part of a system’s persistence strategy. Moreover, I want to arrive at a point where the “Critter Stack” actually convinces organizations to choose .NET just to take advantage of our tooling. I’d be grateful for any feedback.
As of now, the forthcoming Wolverine 3.0 release is almost to the finish line, Marten 7 is probably just about done growing, and work on “Critter Watch” (JasperFx Software’s envisioned management console tooling for the “Critter Stack”) is ramping up. Now is a good time to detail a technical vision for the “Critter Stack” moving into 2025.
The big goals are:
Simplify the “getting started” story for using the “Critter Stack”. Not just in getting a new codebase up, but going all the way to how a Critter Stack app could be deployed and opting into all the best practices. My concern is that there are getting to be way too many knobs and switches scattered around that have to be addressed to really make performance and deployment robust.
Deliver a usable “Critter Watch” MVP
Expand the “Critter Stack” to more database options, with Sql Server and maybe CosmosDb being the leading contenders and DynamoDb or CockroachDb being later possibilities
Streamline the dependency tree. Find a way to reduce the number of GitHub repositories and Nugets if possible. Both for our maintenance overhead and also to try to simplify user setup
“Critter Watch” and CritterStackPro.Projections (actually scratch the second part, that’s going to roll into the Wolverine OSS core, coming soon)
Ermine 1.0 – the Sql Server port of the Marten event store functionality
Out of the box project templates for Wolverine/Marten/Ermine usages – following the work done already by Jeffry Gonzalez
Future CosmosDb backed event store and Wolverine integration — but I’m getting a lot of mixed feedback about whether Sql Server or CosmosDb should be a higher priority
Opportunities to grow the Critter Stack user base:
Folks who are concerned about DevOps issues. “Critter Watch” and maybe more templates that show how to apply monitoring, deployment steps, and Open Telemetry to existing Critter Stack systems. The key point here is a whole lot of focus on maintainability and sustainability of the event sourcing and messaging infrastructure
Get more interest from mainstream .NET developers. Improve the integration of Wolverine and maybe Marten/Ermine as well with EF Core. This could include reaching parity with Marten for middleware support, side effects, and multi-tenancy models using EF Core. Also, maybe, hear me out, take a heavy drink, there could be an official Marten/Ermine projection integration to write projection data to EF Core? I know of at least one Critter Stack user who would use that. At this point, I’m leaning heavily toward getting Wolverine 3.0 out and mostly tackle this in the Wolverine 4.0 timeframe this fall
Expand to Sql Server for more “pure” Microsoft shops. Adding databases to the general Wolverine / Event Sourcing support (the assumption here is that the document database support in Marten would be too much work to move)
Introduce Marten and Wolverine to more people, period. Moar “DevRel” type activity! More learning videos. I’ll keep trying to do more conferences and podcasts. More sample applications. Some ideas for new samples might be a sample application with variations using each transport, using Wolverine inside of a modular monolith with multiple Marten stores and/or EF DbContexts, HTTP services, background processing. Maybe actually invest in some SEO for the websites.
Ecosystem Realignment
With major releases coming up with both Marten 8.0 and Wolverine 4.0 and the forthcoming Ermine, there’s an “opportunity” to change the organization of the code to streamline the number of GitHub repositories and Nugets floating around while also centralizing more code. There’s also an opportunity to centralize a lot of infrastructure code that could help the Ermine effort go much faster. Lastly, there are some options like code generation settings and application assembly determination that are today independently configured for Marten and Wolverine which repeatedly trips up our users (and flat out annoys me when I build sample apps).
We’re actively working to streamline the configuration code, but in the meantime, the current thinking about some of this is in the GitHub issue for JasperFx Ecosystem Dependency Reorganization. The other half of that is the content in the next section.
Projection Model Reboot
This refers to the “Reboot Projection Model API” in the Marten GitHub issue list. The short tag line is to move toward enabling easier usage of folks just writing explicit code. I also want us to tackle the absurdly confusing API for “multi-stream projections” as well. This projection model will be shared across Marten, Ermine (Sql Server-backed event store), and any future CosmosDb/DynamoDb/CockroachDb event stores.
Wrapping up Marten 7.0
Marten 7 introduced a crazy amount of new functionality on top of the LINQ rewrite, the connection management rewrite, and introduction of Polly into the core. Besides some (important) ongoing work for JasperFx clients, the remainder of Marten 7 is hopefully just:
Mark all synchronous APIs that invoke database access as [Obsolete]
Make a pass over the projection model and see how close to the projection reboot we can get. Make anything that doesn’t conform to the new ideal be [Obsolete] with nudges
Introduce the new standard code generation / application assembly configuration in JasperFx.CodeGeneration today. Mark Marten’s version of that as [Obsolete] with a pointer to using the new standard – which is hopefully very close minus namespaces to where it will be in the end
Wrapping up Wolverine 3.0
Introduce the new standard code generation / application assembly configuration in JasperFx.CodeGeneration today. Mark Wolverine’s version of that as [Obsolete] with a pointer to using the new standard – which is hopefully very close minus namespaces to where it will be in the end
Put a little more error handling in for code generation problems just to make it easier to fix issues later
Maybe, reexamine what work could be done to make modular monoliths easier with Wolverine and/or Marten
Maybe, consider adding back into scope improvements for EF Core with Wolverine – but I’m personally tempted to let that slide to the Wolverine 4 work
Summary
The Critter Stack core team & I plus the JasperFx Software folks have a pretty audaciously ambitious plan for next year. I’m excited for it, and I’ll be talking about it in public as much as y’all will let me get away with it!
I know, command line parsing libraries are about the least exciting tooling in the entire software universe, and there are dozens of perfectly competent ones out there. Oakton though, is heavily used throughout the entire “Critter Stack” (Marten, Weasel, and Wolverine plus other tools) to provide command line utilities directly to any old .NET Core application that happens to be bootstrapped with one of the many ways to arrive at an IHost. Oakton’s key advantage over other command line parsing tools is its ability to easily add extension commands to a .NET application in external assemblies. And of course, as part of the entire JasperFx / Critter Stack philosophy of developer tooling, Oakton’s very concept was originally created to enhance the testability of custom command line tooling. Unlike some other tools *cough* System.CommandLine *cough*.
Oakton also has some direct framework-ish elements for environment checks and the stateful resource model used very heavily all the way through Marten and Wolverine to provide the very best development time experience possible when using our tools.
Today the extended JasperFx / Critter Stack community released Oakton 6.2 with some new, hopefully important use cases. First off, the stateful resource model that we use to setup, teardown, or just check “configured stateful resources” in our system like database schemas or message broker queues just got the concept of dependencies between resources so that you can control which resources are set up first.
Next, Oakton finally got a couple easy to use recipes for utilizing IoC services in Oakton commands (it was possible, just maybe a little higher ceremony than some folks prefer). The first way assumes that you’re running Oakton from one of the many flavors of IHostBuilder or IHost like so:
// This would be the last line in your Program.Main() method
// "app" in this case is a WebApplication object, but there
// are other extension methods for headless services
return await app.RunOaktonCommands(args);
You can build an Oakton command class that uses “setter injection” to get IoC services like so:
public class MyDbCommand : OaktonAsyncCommand<MyInput>
{
    // Just assume maybe that this is an EF Core DbContext
    [InjectService]
    public MyDbContext DbContext { get; set; }

    public override Task<bool> Execute(MyInput input)
    {
        // do stuff with DbContext from up above
        return Task.FromResult(true);
    }
}
Just know that when you do this and execute a command that has decorated properties for services, Oakton is:
Building your system’s IHost
Creating a new IServiceScope from your application’s DI container, or in other words, a scoped container
Building your command object and setting all the dependencies on your command object by resolving each dependency from the scoped container created in the previous step
Executing the command as normal
Disposing the scoped container and the IHost, effectively in a try/finally so that Oakton is always cleaning up after the application
In other words, Oakton is largely taking care of annoying issues like object disposal cleanup, scoping, and actually building the IHost if necessary.
Oakton’s Future
The Critter Stack Core team & I are charting a course for our entire ecosystem I’m calling “Critter Stack 2025” that’s hoping to greatly reduce the technical challenges in adopting our tool set. As part of that, what’s now Oakton is likely to move into a new shared library (I think it’s just going to be called “JasperFx”) between the various critters (and hopefully new critters for 2025!). Oakton itself will probably get a temporary life as a shim to the new location as a way to ease the transition for existing users. There’s a balance between actively improving your toolset for potential new users and not disturbing existing users too much. We’re still working on whatever that balance ends up being.
Building and maintaining a large, hosted system that requires multi-tenancy comes with a fair number of technical challenges. JasperFx Software has helped several of our clients achieve better results with their particular multi-tenancy challenges with Marten and Wolverine, and we’re available to do the same for your shop! Drop us a message on our Discord server or email us at sales@jasperfx.net to start a conversation.
This is continuing a series about multi-tenancy with Marten, Wolverine, and ASP.Net Core:
Using Partitioning for Better Performance with Multi-Tenancy and Marten (future)
Multi-Tenancy in Wolverine with EF Core & Sql Server (future, and honestly, future functionality as part of Wolverine 4.0)
Dynamic Tenant Creation and Retirement in Marten and Wolverine (definitely in the future)
Let’s say that you’re using the Marten + PostgreSQL combination for your system’s persistence needs in a web service application. Let’s also say that you want to keep the customer data within your system in completely different databases per customer company (or whatever makes sense in your system). Lastly, let’s say that you’re using Wolverine for asynchronous messaging and as a local “mediator” tool. Fortunately, Wolverine by itself has some important built in support for multi-tenancy with Marten that’s going to make your system a lot easier to build.
Let’s get started by just showing a way to opt into multi-tenancy with separate databases using Marten and its integration with Wolverine for middleware, saga support, and the all important transactional outbox support:
// Adding Marten for persistence
builder.Services.AddMarten(m =>
{
    // With multi-tenancy through a database per tenant
    m.MultiTenantedDatabases(tenancy =>
    {
        // You would probably be pulling the connection strings out of configuration,
        // but it's late in the afternoon and I'm being lazy building out this sample!
        tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant1;Username=postgres;password=postgres", "tenant1");
        tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant2;Username=postgres;password=postgres", "tenant2");
        tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant3;Username=postgres;password=postgres", "tenant3");
    });

    m.DatabaseSchemaName = "mttodo";
})
.IntegrateWithWolverine(masterDatabaseConnectionString: connectionString);
Just for the sake of completion, here’s some sample Wolverine configuration that pairs up with the above:
// Wolverine usage is required for WolverineFx.Http
builder.Host.UseWolverine(opts =>
{
    // This middleware will apply to the HTTP
    // endpoints as well
    opts.Policies.AutoApplyTransactions();

    // Setting up the outbox on all locally handled
    // background tasks
    opts.Policies.UseDurableLocalQueues();
});
Now that we’ve got that basic setup for Marten and Wolverine, let’s move on to the first issue: how the heck does Wolverine “know” which tenant should be used? In a later post I’ll show how Wolverine.HTTP has built in tenant id detection, but for now, let’s pretend that you’re already taking care of tenant id detection from incoming HTTP requests somehow within your ASP.Net Core pipeline and you just need to pass that into a Wolverine message handler that is being executed from within an MVC Core controller (“Wolverine as Mediator”):
[HttpDelete("/todoitems/{tenant}/longhand")]
public async Task Delete(
    string tenant,
    DeleteTodo command,
    IMessageBus bus)
{
    // Invoke inline for the specified tenant
    await bus.InvokeForTenantAsync(tenant, command);
}
By using the IMessageBus.InvokeForTenantAsync() method, we’re invoking a command inline, but telling Wolverine what the tenant id is. The command handler might look something like this:
// Keep in mind that we set up the automatic
// transactional middleware usage with Marten & Wolverine
// up above, so there's just not much to do here
public static class DeleteTodoHandler
{
    public static void Handle(DeleteTodo command, IDocumentSession session)
    {
        session.Delete<Todo>(command.Id);
    }
}
Not much going on there in our code, but Wolverine is helping us out here by:
Seeing the tenant id value that we passed in, which Wolverine tracks in its own Envelope structure (Wolverine’s version of the Envelope Wrapper from the venerable EIP book)
Creating the Marten IDocumentSession for that tenant id value, which will be reading and writing to the correct tenant database underneath Marten
Now, let’s make this a little more complex by also publishing an event message in that message handler for the DeleteTodo message:
public static class DeleteTodoHandler
{
    public static TodoDeleted Handle(DeleteTodo command, IDocumentSession session)
    {
        session.Delete<Todo>(command.Id);

        // This "cascades" a TodoDeleted event message to any subscribers
        return new TodoDeleted(command.Id);
    }
}
public record TodoDeleted(int TodoId);
Assuming that the TodoDeleted message is being published to a “durable” endpoint, Wolverine is using its transactional outbox integration with Marten to persist the outgoing message in the same tenant database and same transaction as the deletion we’re doing in that command handler. In other words, Wolverine is able to use the tenant databases for its outbox support with no other configuration necessary than what we did up above in the calls to AddMarten() and UseWolverine().
Moreover, Wolverine is even able to use its “durability agent” against all the tenant databases to ensure that any work that is somehow stranded by crashed processes still gets recovered and completed.
Lastly, the TodoDeleted event message cascaded above from our message handler would be tracked throughout Wolverine with the tenant id of the original DeleteTodo command message, so you can build multi-part workflows through Wolverine while it tracks the tenant id and utilizes the correct tenant database through Marten all along the way.
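To illustrate that last point, here’s a hypothetical downstream handler for the cascaded TodoDeleted message. The TodoAuditEntry document is made up for this sketch; the interesting part is that the injected IDocumentSession is already pointed at the original command’s tenant database with no extra plumbing:

```csharp
using System;
using Marten;

// Hypothetical audit document, invented for this sketch
public class TodoAuditEntry
{
    public Guid Id { get; set; } = Guid.NewGuid();
    public int TodoId { get; set; }
    public string Action { get; set; } = "";
}

public static class TodoDeletedHandler
{
    public static void Handle(TodoDeleted message, IDocumentSession session)
    {
        // Because Wolverine carried the tenant id along on the message
        // envelope, this session reads and writes the same tenant
        // database that the original DeleteTodo command executed against
        session.Store(new TodoAuditEntry
        {
            TodoId = message.TodoId,
            Action = "Deleted"
        });
    }
}
```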
Summary
Building solutions with multi-tenancy can be complicated, but the Wolverine + Marten combination can make it a lot easier.
Hey, did you know that JasperFx Software offers both consulting services and support plans for the “Critter Stack” tools? Or for architectural or test automation help with any old server side .NET application. One of the other things we do is to build out custom features that our customers need in the “Critter Stack” — like the Marten-managed table partitioning for improved scaling and performance in this release!
A fairly sizable Marten 7.28 release just went live — or will at least be available on Nuget by the time you read this with a mix of new features and usability improvements. The biggest new feature is “Marten-Managed Table Partitioning by Tenant.” Lots of words! Consider this scenario:
You have a system with a huge number of events
You also need to use Marten’s support for multi-tenancy
For historical reasons and for the ease of deployment and management, you are using Marten’s “conjoined” multi-tenancy model and keeping all of your tenant data in the same database (this might have some very large cloud hosting cost saving benefits as well)
You want to be able to scale the database performance for all the normal reasons
PostgreSQL table partitioning to the rescue! In recent Marten releases, we’ve added support to take advantage of postgres table sharding as a way to improve performance in many operations — with one of the obvious first usages being table sharding per tenant id for Marten’s “conjoined” tenancy model. Great! Just tell Marten exactly what the tenant ids are and the matching partition configuration and go!
But wait, what if you have a very large number of tenants and might need to even add new tenants at runtime and without incurring any kind of system downtime? Marten now has a partitioning feature for multi-tenancy that can dynamically create per-tenant shards at runtime and manage the list of tenants in its own database storage like so:
var builder = Host.CreateApplicationBuilder();
builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));

    // Make all document types use "conjoined" multi-tenancy -- unless explicitly marked with
    // [SingleTenanted] or explicitly configured via the fluent interface
    // to be single-tenanted
    opts.Policies.AllDocumentsAreMultiTenanted();

    // It's required to explicitly tell Marten which database schema to put
    // the mt_tenant_partitions table in
    opts.Policies.PartitionMultiTenantedDocumentsUsingMartenManagement("tenants");
});
With some management helpers of course:
await theStore
    .Advanced
    // This is ensuring that there are tenant id partitions for all multi-tenanted documents
    // with the named tenant ids
    .AddMartenManagedTenantsAsync(CancellationToken.None, "a1", "a2", "a3");
If you’re familiar with the pg_partman tool, this was absolutely meant to fulfill a similar role within Marten for per-tenant table partitioning.
Aggregation Projections with Explicit Code
This is probably long overdue, but the other highlight that’s probably much more globally applicable is the ability to write more Marten event aggregation projections with strictly explicit code for folks who don’t care for the Marten conventional method approaches — or just want a more complicated workflow than what the conventional approaches can do.
You still need to use the CustomProjection&lt;TDoc, TId&gt; base class for your logic, but now there are simpler methods that can be overridden to express explicit “left fold over events to create an aggregated document” logic as shown below:
public class ExplicitCounter: CustomProjection<SimpleAggregate, Guid>
{
    public override SimpleAggregate Apply(SimpleAggregate snapshot, IReadOnlyList<IEvent> events)
    {
        snapshot ??= new SimpleAggregate();
        foreach (var e in events.Select(x => x.Data))
        {
            if (e is AEvent) snapshot.ACount++;
            if (e is BEvent) snapshot.BCount++;
            if (e is CEvent) snapshot.CCount++;
            if (e is DEvent) snapshot.DCount++;
        }

        // You have to explicitly return the new value
        // of the aggregated document no matter what!
        return snapshot;
    }
}
The explicitly coded projections can also be used for live aggregations (AggregateStreamAsync()) and within FetchForWriting() as well. This has been a longstanding request, and will receive even stronger support in Marten 8.
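For instance, the same ExplicitCounter logic above could drive a live aggregation with no stored projection document at all. This is just a sketch assuming an existing document store and stream id:

```csharp
using Marten;

// Assumes "store" is your IDocumentStore and "streamId" identifies
// an existing event stream
await using var session = store.LightweightSession();

// Runs the projection logic on the fly against the raw events
var aggregate = await session.Events.AggregateStreamAsync<SimpleAggregate>(streamId);
```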
LINQ Improvements
Supporting a LINQ provider is the gift that never stops giving. There are some small improvements this time around:
// string.Trim()
var trimmed = await session.Query<SomeDoc>()
    .Where(x => x.Description.Trim() == "something")
    .ToListAsync();

// Select a TimeSpan out of a document
var durations = await session.Query<SomeDoc>()
    .Select(x => x.Duration)
    .ToListAsync();
// Query the raw event data by event types
var raw = await theSession.Events.QueryAllRawEvents()
.Where(x => x.EventTypesAre(typeof(CEvent), typeof(DEvent)))
.ToListAsync();
Hey, did you know that JasperFx Software offers both consulting services and support plans for the “Critter Stack” tools? Or for architectural or test automation help with any old server side .NET application. One of the other things we do is to build out custom features that our customers need in the “Critter Stack” — like the RavenDb integration from this post!
Wolverine will depend on having RavenDb integrated with your application’s DI container, so make sure you’re also using RavenDB.DependencyInjection. With those two dependencies, the code set up is just this:
var builder = Host.CreateApplicationBuilder();

// You'll need a reference to RavenDB.DependencyInjection
// for this one
builder.Services.AddRavenDbDocStore(raven =>
{
    // configure your RavenDb connection here
});

builder.UseWolverine(opts =>
{
    // That's it, nothing more to see here
    opts.UseRavenDbPersistence();

    // The RavenDb integration supports basic transactional
    // middleware just fine
    opts.Policies.AutoApplyTransactions();
});

// continue with your bootstrapping...
And that’s it. Adding that one call to UseRavenDbPersistence() in the Wolverine setup adds in support for Wolverine to use RavenDb as its message persistence — the durable inbox/outbox (envelope) storage and saga storage.
This also includes a RavenDb-specific set of Wolverine “side effects” you can use to build synchronous, pure function handlers using RavenDb like so:
public record RecordTeam(string Team, int Year);

public static class RecordTeamHandler
{
    public static IRavenDbOp Handle(RecordTeam command)
    {
        return RavenOps.Store(new Team { Id = command.Team, YearFounded = command.Year });
    }
}
This code is of course in early stages and will surely be adapted after some load testing and intended production usage by our client, but the RavenDb integration with Wolverine is now “officially” supported.
I can’t speak to any kind of timing, but there will be more options for database integration with Wolverine in the somewhat near future as well. This effort helped us break off some reusable “compliance” tests that should help speed up the development of future database integrations with Wolverine.