Why should you care about this Wolverine tool anyway? I’d say that if you’re building just about any kind of server side .NET application, Wolverine will do more than any other existing server side .NET framework in the mediator, background processing, HTTP, or asynchronous messaging space to simplify your code and maximize its testability, all while still helping you write robust, well performing systems.
There are some additive changes to address some previous limitations of the Wolverine tooling I’ll get to below, but the two big ticket items in 3.0 are:
Wolverine is now completely decoupled from Lamar and happily able to run with the built in ServiceProvider. Before, Wolverine was quietly replacing your IoC container with Lamar because it heavily relied on Lamar’s internal behavior for its runtime code generation. 3.0 ended that particular limitation. Not everyone cared, but the folks who did care were particularly loud about their unhappiness, and Lamar is probably heading into the sunset in the future anyway. I felt like this was a very important limitation of Wolverine to address in this release. It’s also a precursor to further usage of .NET Aspire and enabling Wolverine to play nicely with just about any common recipe for bootstrapping .NET applications (Blazor, WPF, Orleans, you name it).
The leader election subsystem in Wolverine was pretty close to 100% rewritten to a much simpler and, so far as the internal testing shows, far more reliable and performant mechanism. This subsystem has been way too problematic in real usage, and I’m beyond relieved that there are some serious improvements coming for this.
As for smaller things so far, some other highlights are:
The stateful saga support in Wolverine got some necessary optimistic concurrency protection at the behest of a JasperFx Software client
New “lightweight” saga options that utilize either PostgreSQL or SQL Server as JSON storage mechanisms so you don’t have to suffer the pain of EF Core mapping just to persist sagas if you aren’t using Marten (see the sketch after this list)
The Rabbit MQ integration is using the new version of the Rabbit MQ client that is finally async all the way through to prevent deadlock issues. There are also some significant improvements to the Rabbit MQ transport for header exchanges and more control over messaging conventions
There is a new NuGet package of compliance tests for Wolverine to hopefully speed up the construction of new saga persistence providers, messaging transports, or message storage options, which should unlock new functionality in the coming months
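To make the saga bullets above a little more concrete, here’s a minimal sketch of a stateful Wolverine saga. The Saga base class, the static Start() convention, and MarkCompleted() are existing Wolverine concepts, but the message types are invented for this example and the Version property is only my assumption about how the new optimistic concurrency opt-in might be expressed:

using Wolverine;

public record StartOrder(Guid OrderId);
public record ShipOrder(Guid OrderId);

public class Order : Saga
{
    public Guid Id { get; set; }

    // Assumption: a numeric version that the new optimistic concurrency
    // protection can check when updating the persisted saga state
    public int Version { get; set; }

    // Wolverine convention: a static Start() method begins a new saga
    public static Order Start(StartOrder command)
        => new Order { Id = command.OrderId };

    // Later messages for the same saga id are routed to instance methods
    public void Handle(ShipOrder command)
    {
        // Tell Wolverine this saga is finished and its state can be deleted
        MarkCompleted();
    }
}

With the new “lightweight” options, that saga state would be persisted as a JSON document in PostgreSQL or SQL Server with no EF Core mapping involved.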
I’m actually hopeful that the final 3.0 release goes out early next week. I’m not sure how much of the remaining work will make it in, but I’m wanting to tackle:
Message batching, because that comes up fairly often
A round of enhancements to the EF Core integration with Wolverine to try to increase Wolverine utilization for folks who don’t use Marten for some bizarre reason
I had earlier said that “full” support for .NET Aspire would be a key part of the Wolverine 3.0 plans. After kicking the tires more on .NET Aspire and seeing where user priorities and our own road map are, I’m going to back off that statement quite a bit. Here’s what’s definitely in and actually ready to go for 3.0 as it pertains to Wolverine + Aspire:
Wolverine was decoupled from Lamar such that it can run with the built in ServiceProvider. We’ll ship an add-on adapter to still use Lamar as well, so folks don’t have to switch out IoC tools for the new Wolverine (Lamar is much more forgiving and supports a lot of use cases that ServiceProvider does not, so you may not want to switch)
That last point was important because the changes to the internals also made it possible for Wolverine to use any flavor of application bootstrapping like Host.CreateApplicationBuilder(), whereas before Wolverine was limited to IHostBuilder (there were internal reasons for that). Some of the .NET Aspire client libraries depend on different versions of the application builders, so Wolverine needed to adapt. And folks wanted that anyway, so there we go.
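Here’s roughly what that looks like, with the caveat that this sketch assumes the new 3.0 extension method for the newer application builder:

using Microsoft.Extensions.Hosting;
using Wolverine;

var builder = Host.CreateApplicationBuilder();

// With 3.0, Wolverine no longer needs Lamar or IHostBuilder, so it can
// attach to the newer builder models with the built in ServiceProvider
builder.UseWolverine(opts =>
{
    opts.ServiceName = "sample-service"; // hypothetical name
});

using var host = builder.Build();
await host.RunAsync();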
Now, as to what else Wolverine will support, it’s perfectly possible to use Aspire to launch Wolverine systems and Wolverine (and Marten) can happily export their Open Telemetry tracing and metrics to the .NET Aspire dashboard at runtime. You can see an example of that in my earlier post Marten, Metrics, and Open Telemetry Support.
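For reference, here’s a hedged sketch of the kind of OpenTelemetry wiring that entails inside the application’s bootstrapping. The OTLP exporter picks up the endpoint that Aspire injects through the OTEL_EXPORTER_OTLP_ENDPOINT environment variable, and the “Wolverine” and “Marten” source/meter names reflect my understanding of what the tools emit, so verify them against your versions:

using OpenTelemetry;
using OpenTelemetry.Metrics;
using OpenTelemetry.Trace;

builder.Services.AddOpenTelemetry()
    .WithTracing(tracing => tracing
        // Subscribe to the activity sources published by the tools
        .AddSource("Wolverine")
        .AddSource("Marten"))
    .WithMetrics(metrics => metrics
        .AddMeter("Wolverine"))
    // Send traces and metrics to the OTLP endpoint that Aspire configures,
    // which is what feeds the Aspire dashboard
    .UseOtlpExporter();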
Now on to the trickier parts. One of the things that .NET Aspire does is act as a replacement for docker compose for infrastructural concerns like SQL Server, PostgreSQL, Rabbit MQ, or Kafka, and act as a global configuration element for other infrastructure like Azure Service Bus or AWS SQS. Somebody might have to correct me, but more or less, Aspire is launching the various applications and pushing in environment variables the configuration data that Aspire itself is defining and controlling (PostgreSQL connection strings, for example). To make that information easier to consume, the Aspire team and community have built a bunch of client adapter libraries like Aspire.RabbitMQ.Client or Aspire.Npgsql that are meant to hook your application to the resources configured by Aspire by adding service registrations to the application’s underlying IoC container.
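As a concrete sketch of that model (the resource and project names here are mine), the Aspire AppHost defines the infrastructure and flows the connection information into the applications it launches:

// In the Aspire AppHost project
var builder = DistributedApplication.CreateBuilder(args);

// Aspire runs these as local containers and owns their configuration
var postgres = builder.AddPostgres("postgres");
var rabbit = builder.AddRabbitMQ("rabbitmq");

// WithReference() pushes the connection strings into the service's
// configuration through environment variables
builder.AddProject<Projects.MyWolverineService>("service")
    .WithReference(postgres)
    .WithReference(rabbit);

builder.Build().Run();

Inside the service itself, builder.Configuration.GetConnectionString("postgres") then resolves to whatever Aspire provisioned.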
After some research earlier this week as I get to work toward the Wolverine 3.0 release, I think that:
Aspire.Npgsql can already be used as is with Marten at least, and with the Wolverine + Marten integration (see the sketch after this list). A little more work could enable Aspire.Npgsql to be used with PostgreSQL storage within Wolverine by itself or with EF Core. There’s no need for us to take a direct dependency on this library though
Aspire.RabbitMQ.Client creates a version conflict with the RabbitMQ.Client library for us right now, so that’s out. I’m leery of taking on the potential diamond dependency issue anyway, so we’ll probably never take a dependency on it
Aspire.Microsoft.Data.SqlClient registers a scoped dependency for SqlConnection, but doesn’t expose the connection information any other way. This would require quite a few changes to the Wolverine internals that I don’t think would pay off. We won’t use this, again partially because of the fear of diamond dependencies
Aspire.Azure.Messaging.ServiceBus just isn’t usable. It precludes several options for authentication to Azure Service Bus, and using it would knock out Wolverine’s ability to set up or tear down resources on the fly — which I think is a competitive advantage of Wolverine over other alternatives, so I’m not enthusiastic about this one either
Aspire.Confluent.Kafka doesn’t fit Wolverine at all: we want the broker connection information upfront, and Wolverine is completely responsible for setting up its own consumers and producers
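On that first bullet, here’s a sketch of how I’d expect the Aspire.Npgsql wiring to look with Marten, leaning on Marten’s existing ability to resolve an NpgsqlDataSource from the container (“postgres” is an assumed resource name):

var builder = Host.CreateApplicationBuilder();

// From Aspire.Npgsql: registers an NpgsqlDataSource in the container
// using the connection string that Aspire supplies
builder.AddNpgsqlDataSource("postgres");

builder.Services.AddMarten(opts =>
    {
        // other Marten configuration, but no opts.Connection() call here
    })
    // Tell Marten to use the NpgsqlDataSource registered above instead
    // of being handed a raw connection string
    .UseNpgsqlDataSource();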
All told though, I don’t think that any of the other Aspire.* client libraries are usable out of the box. I’m honestly not sure if these libraries were even meant to be consumed directly or are just example code that folks like me should use to build more specific support. In any case, I’m voting to hold off for now on any new, direct Aspire support until someone — hopefully a contributor or JasperFx Software client — directly asks for it.
Hey folks, I’ll be giving the first ever workshop on building an Event Driven Architecture with the full “Critter Stack” at DevUp 2024 in St. Louis next week on Wednesday the 14th bright and early at 8:30 AM.
If you’re planning on coming to the workshop, you’ll want .NET 8, Git, and some kind of Docker Desktop on your box to run the sample code I’ll use during the session. If Docker doesn’t work for you, you may want a local install of PostgreSQL and Rabbit MQ instead.
We’ll be working through a sample backend web service that also communicates with other headless services, using Event Sourcing within a general CQRS architectural approach with the whole “Critter Stack.” We’ll use Marten (over PostgreSQL) for our persistence strategy, using both its event sourcing support and its document database features. We’ll combine that with Wolverine as a server side framework for background processing, asynchronous messaging, and even as an alternative HTTP endpoint framework. Lastly, just for fun, there’ll be guest appearances from other JasperFx tools like Alba and Oakton for automated integration testing and command line execution respectively.
So why would you want to come to this and what might you get out of it? I’m hoping the takeaways — even if you don’t intend to use Marten or Wolverine — will be:
A good introduction to event sourcing as a technical approach and some of the real challenges you’ll face when building a system using event sourcing as a persistence strategy
An understanding of what goes into building a robust CQRS system including dealing with transient errors, observability, concurrency, and how to best segment message processing to achieve self-healing systems
Challenging the industry’s conventional wisdom about how effective Hexagonal/Clean/Onion Architecture approaches really are by showing what a very low ceremony “vertical slice architecture” approach can be like with the Wolverine + Marten combination, while still being robust, observable, highly testable, and still keeping infrastructure concerns out of the business logic
Some exposure to Open Telemetry and the general observability tooling for distributed systems that you absolutely want if you don’t already have it
Techniques for automating integration tests against an Event Driven Architecture
Because I’m absolutely in the business of promoting the “Critter Stack” tools, I’ll try to convince you that:
Marten is already the most robust and feature rich solution for event sourcing in the .NET ecosystem while also being arguably the easiest to get up and going with
The Wolverine + Marten combination makes CQRS with Event Sourcing a much easier architectural pattern to use
Wolverine’s emphasis on low ceremony code approaches can help systems be more successfully maintained over time by simply having much less code noise and layering in your systems while still being robust
The “Critter Stack” has an excellent story for automated integration testing support that can do a lot to make your development efforts more successful
Both Marten & Wolverine can help your teams achieve a low “time to first pull request” by doing a lot to configure necessary infrastructure like databases or message brokers on the fly for a better development experience
I’m excited, because this is my first opportunity to do a workshop on the “Critter Stack” tools, and I think we’ve got a very compelling technical story to tell about the tools! And if nothing else, I’m looking forward to any feedback that might help us improve the tools down the line.
And for any *ahem* older folks from St. Louis at my talk, I personally thought at the time that Jorge Orta was out at first and the Cards should have won that game.
It’s been a little while since I’ve written any kind of update on the unofficial “Critter Stack” roadmap, with the last update coming in February. A ton of new, important strategic features have been added since then, especially to Marten, with plenty of expansion of Wolverine to boot. Before getting into the road map in the next section, let me indulge in a bit of retrospective on the new features and improvements delivered so far in 2024.
More parsimonious usage of database connections for better scalability and improved integration with GraphQL via Hot Chocolate
Some ability to do zero downtime deployments even with asynchronous projections
Blue/green deployment capabilities for “write” model projections (that’s a big deal)
Ability to add new tenanted databases at runtime for both Marten and Wolverine with zero downtime
There’s only one user so far, but CritterStackPro.Projections is the first paid add-on model for Marten & Wolverine, allowing for better load distribution of asynchronous projections and event subscriptions within a clustered application
Marten now has first class support for strongly typed identifiers (shown below) — a long requested feature that got put off because of how much effort I feared it would require (rightly, as it turned out, but it’s all done now)
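As a quick illustration of that last bullet, the strongly typed identifier support understands value types generated by libraries like Vogen. This is a sketch from my memory of the docs, so double check the supported shapes:

using Vogen;

// A strongly typed wrapper around Guid, generated by Vogen
[ValueObject<Guid>]
public readonly partial struct OrderId;

public class Order
{
    // Marten can treat this as the document identity rather than
    // forcing a raw Guid on your domain model
    public OrderId Id { get; set; }
}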
At this point I feel like we’ve crossed off the vast majority of the features I thought we needed to add to Marten this year to be able to stand Marten up against basically any other event store infrastructure tooling on the whole damn planet. What that also means is that Marten development probably slows down to little but bug fixes and community contributions as folks run into things. There are still some features in the backlog that I might personally work on, but that will be in the course of ongoing and potential JasperFx client work.
That being said, let’s talk about the rest of the year!
The Roadmap for the Back Half of 2024
Obviously, this roadmap is just a snapshot in time, and client needs, community requests, and who knows what changes from Microsoft or other related tools could easily shift priorities away from any of this. All that being said, this is the current vision of the next big steps from the Critter Stack core team and me.
RavenDb integration with Wolverine. This is some client sponsored work that I’m hoping will set Wolverine up for easier integration with other database engines in the near future
“Critter Watch” — an ongoing effort to build out a management and monitoring console application for any combination of Marten, Wolverine, and future critters. This will be a paid product. We’ve already had a huge amount of feedback from Marten & Wolverine users, and I’m personally eager to get this moving in the 3rd quarter
Marten 8.0 and Wolverine 4.0 — the goal here is mostly a rearrangement of dependencies underneath both Marten & Wolverine to eliminate duplication and spin out a lot of the functionality around projections and the async daemon. This will also be a significant effort to spin off some new helper libraries for the “Critter Stack” to enable the next bullet point
“Ermine” — a port of Marten’s event store capabilities and a subset of its document database capabilities to SQL Server. My thought is that this will share a ton of guts with Marten. I’m voting that Ermine will have direct integration with Wolverine from the very beginning as well for subscriptions and middleware similar to the existing Wolverine.Marten integration
If Ermine goes halfway well, I’d love to attempt a CosmosDb and maybe a DynamoDb backed event store in 2025
As usual, that list is a guess and unlikely to ever play out exactly that way. All the same though, those are my hopes and dreams for the next 6 months or so.
Did I miss something you were hoping for? Does any of that concern you? Let me and the rest of the Critter Stack community know either here or anywhere in our Discord room!
There’s been a definite theme lately about increasing the performance and scalability of Marten, as evident (I hope) in my post last week describing new optimization options in Marten 7.25. Today I was able to push a follow up feature that got missed in that release: Marten users can now utilize PostgreSQL table partitioning behind the scenes for document storage (7.25 added a specific utilization of table partitioning for the event store). The goal is that, in selected scenarios, this enables PostgreSQL to mostly work with far smaller tables than it would otherwise, and hence perform better in your system.
Think of these common usages of Marten:
You’re using soft deletes in Marten against a document type, and the vast majority of the time Marten is putting a default filter in for you to only query for “not deleted” documents
You are aggressively using the Marten feature to mark event streams as archived when whatever process they model is complete. In this case, Marten is usually querying against the event table with a filter of is_archived = false (there’s a configuration sketch for this below)
You’re using “conjoined” multi-tenancy within a single Marten database, and most of the time your system is naturally querying for data from only one tenant at a time
Maybe you have a table where you’re frequently querying against a certain date property or querying for documents by a range of expected values
In all of those cases, it would be more performant to opt into PostgreSQL table partitioning, where PostgreSQL separates the storage for a single, logical table into separate “partition” tables. Again, in all of the cases above we can enable PostgreSQL + Marten to largely be querying against a much smaller table partition rather than the entire table — and querying against smaller database tables can be hugely more performant than querying against bigger ones.
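For the archived stream case in the list above, the event store side of this shipped in 7.25 as a single opt-in flag:

var store = DocumentStore.For(opts =>
{
    opts.Connection("some connection string");

    // Partition the event storage on the is_archived flag so that active
    // ("hot") events live in a much smaller partition than the archived
    // ("cold") events
    opts.Events.UseArchivedStreamPartitioning = true;
});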
The Marten community has been kicking around the idea of utilizing table partitioning for years (since 2017 by my sleuthing last week through the backlog), but it always got kicked down the road because of the perceived challenges in supporting automatic database migrations for partitions the same way we do today in Marten for every other database schema object (and in Wolverine too for that matter).
Thanks to an engagement with a JasperFx customer who has some pretty extreme scalability needs, I was able to spend the time last week to break through the change management challenges with table partitioning, and finally add table partitioning support for Marten.
As for what’s possible, let’s say that you want to create table partitioning for a certain very large table in your system for a particular document type. Here’s the new option for 7.26:
var store = DocumentStore.For(opts =>
{
    opts.Connection("some connection string");

    // Set up table partitioning for the User document type
    opts.Schema.For<User>()
        .PartitionOn(x => x.Age, x =>
        {
            x.ByRange()
                .AddRange("young", 0, 20)
                .AddRange("twenties", 21, 29)
                .AddRange("thirties", 30, 39);
        });

    // Or use pg_partman to manage partitioning outside of Marten
    opts.Schema.For<User>()
        .PartitionOn(x => x.Age, x =>
        {
            x.ByExternallyManagedRangePartitions();

            // or instead with list
            x.ByExternallyManagedListPartitions();
        });

    // Or use PostgreSQL HASH partitioning and split the users over multiple tables
    opts.Schema.For<User>()
        .PartitionOn(x => x.UserName, x =>
        {
            x.ByHash("one", "two", "three");
        });

    opts.Schema.For<Issue>()
        .PartitionOn(x => x.Status, x =>
        {
            // There is a default partition for anything that doesn't fall into
            // these specific values
            x.ByList()
                .AddPartition("completed", "Completed")
                .AddPartition("new", "New");
        });
});
To use the “hot/cold” storage on soft-deleted documents, you have this new option:
var store = DocumentStore.For(opts =>
{
    opts.Connection("some connection string");

    // Opt into partitioning for one document type
    opts.Schema.For<User>().SoftDeletedWithPartitioning();

    // Opt into partitioning and an index for one document type
    opts.Schema.For<User>().SoftDeletedWithPartitioningAndIndex();

    // Opt into partitioning for all soft-deleted documents
    opts.Policies.AllDocumentsSoftDeletedWithPartitioning();
});
And for “conjoined” multi-tenancy, there are several options for partitioning by tenant id:

storeOptions.Policies.AllDocumentsAreMultiTenantedWithPartitioning(x =>
{
    // Selectively by LIST partitioning
    x.ByList()
        // Adding explicit table partitions for specific tenant ids
        .AddPartition("t1", "T1")
        .AddPartition("t2", "T2");

    // OR use LIST partitioning, but allow the partition tables to be
    // controlled outside of Marten by something like pg_partman
    // https://github.com/pgpartman/pg_partman
    x.ByExternallyManagedListPartitions();

    // OR just spread out the tenant data by tenant id through
    // HASH partitioning
    // This is using three different partitions with the supplied
    // suffix names
    x.ByHash("one", "two", "three");

    // OR partition by tenant id based on ranges of tenant id values
    x.ByRange()
        .AddRange("north_america", "na", "nazzzzzzzzzz")
        .AddRange("asia", "a", "azzzzzzzz");

    // OR use RANGE partitioning with the actual partitions managed
    // externally
    x.ByExternallyManagedRangePartitions();
});
Summary
Your mileage will vary of course depending on how big your database is and how you really query the database, but at least in some common cases, the Marten community is pretty excited for the potential of table partitioning to improve Marten performance and scalability.
Just a reminder, JasperFx Software offers support contracts and consulting services to help you get the most out of the “Critter Stack” tools (Marten and Wolverine). If you’re building server side applications on .NET, the Critter Stack is the most feature rich tool set for Event Sourcing and Event Driven Architectures around.
The theme of the last couple months for the Marten community and me has been a focused effort on improving Marten’s event sourcing feature set to reliably handle very large data loads. To that end, Marten 7.25 was released today with a huge amount of improvements around performance, scalability, and reliability under very heavy loads (we’re talking about databases with hundreds of millions of events).
Before I get into the details, there’s a lot of thanks and credit to go around:
Core team member JT made several changes to reduce the amount of object allocations that Marten does at runtime in SQL generation — and basically every operation it does involves SQL generation
Ben Edwards contributed several ideas, important feedback, and some optimization pull requests toward this release
Babu made some improvements to our CI pipeline that made it a lot easier for me to troubleshoot the work I was doing
a-shtifanov-laya did some important load testing harness work that helped quite a bit to validate the work in this release
Urbancsik Gergely did a lot of performance and load testing with Marten that helped tremendously
And I’ll be giving some personal thanks to a couple JasperFx clients who enabled me to spend so much time on this effort
And now, the highlights for event store performance, scalability, and reliability improvements — most of which are “opt in” configuration items so as to not disturb existing users:
The new “Quick Append” option is completely usable and appears from testing to be about 2X as fast as the V4-V7 “Rich” appending process (there’s a configuration sketch after this list). More than that, opting into the quick append mechanism appears to eliminate the event “skipping” problem with asynchronous projections or event subscriptions that some people have experienced under very heavy loads. Lastly, I originally took on this work because I think it will alleviate issues that some people run into with concurrent operations trying to append events to the same event streams
Marten can create a hot/cold storage mechanism around its event store by leveraging PostgreSQL’s native table partitioning. There’s work on the user’s part to mark event streams as archived for this to matter, but this is potentially a huge win for Marten scalability. A later Marten release will add partitioning support to Marten document tables as well
There are several optimizations inside even the classic “rich” event appending that reduce the number of network round trips happening at runtime — and that’s a good thing, because network round trips are evil!
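For reference, opting into the quick append mechanism from the first bullet is a one line configuration change:

var store = DocumentStore.For(opts =>
{
    opts.Connection("some connection string");

    // Opt into the new, faster event appending instead of the
    // V4-V7 "Rich" mode
    opts.Events.AppendMode = EventAppendMode.Quick;
});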
Outside of the event store improvements, Marten also got a new “Specification” alternative called “query plans” for reusable query logic in cases where Marten’s compiled query feature won’t work. The goal with this feature is to help a JasperFx client migrate off of Clean Architecture style repository wrapper abstractions in a way that doesn’t cause code duplication, while also setting them up to utilize Marten’s batch query feature for much more performant code.
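To give a feel for the idea, a query plan is just a reusable object that executes a query against a Marten session. Treat the exact names in this sketch (IQueryPlan<T> and QueryByPlan()) as my shorthand recollection of the new API, and the Issue fields as hypothetical:

using Marten;

public class OpenIssuesAssignedTo : IQueryPlan<IReadOnlyList<Issue>>
{
    private readonly Guid _assigneeId;

    public OpenIssuesAssignedTo(Guid assigneeId) => _assigneeId = assigneeId;

    // All the query logic lives in one reusable, testable place
    public Task<IReadOnlyList<Issue>> Fetch(IQuerySession session, CancellationToken token)
        => session.Query<Issue>()
            .Where(x => x.AssigneeId == _assigneeId && x.Status == "Open")
            .ToListAsync(token);
}

// Usage, without any repository wrapper in sight:
var issues = await session.QueryByPlan(new OpenIssuesAssignedTo(userId), ct);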
Summary
I’m still digging out from a very good family vacation, but man, getting this stuff out feels really good. The Marten community is very vibrant right now, with a lot of community engagement that’s driving the tool’s capabilities into much more serious system territory. The “hot/cold storage” feature that just went in has been in the Marten backlog since 2017, and I’m thrilled to finally see that make it in.
As Houston gets drenched by Hurricane Beryl as I write this, I’m reminded of a formative set of continuing education courses I took when I was living in Houston in the late 90’s and plotting my formal move into software development. Whatever we learned about VB6 in those MSDN classes is long, long since obsolete, but one pithy saying from one of our instructors (who went on to become a Marten user and contributor!) stuck with me all these years later: “Round trips are evil.”
His point then, and my point now quite frequently working with JasperFx Software clients, is that round trips between browsers and backend web servers, or between application servers and the database, need to be treated as expensive operations, and some level of request, query, or command batching is often a very valuable optimization in systems design.
Consider my family’s current kitchen predicament as diagrammed above. The very expensive, original refrigerator from our 20 year old house finally gave up the ghost, and we’ve had it completely removed while we wait on a different one to be delivered. Fortunately, we have a second refrigerator in the garage. When cooking now though, it’s suddenly a lot more time consuming to go to the refrigerator for an ingredient, since I can no longer just turn around and grab something the way I could when the kitchen refrigerator was a step away. Now that we have to walk across the house from the kitchen to the garage to get anything, it’s become very helpful to grab as many things as you can at one time so you’re not constantly running back and forth.
While this issue certainly arises from user interfaces or browser applications making a series of little requests to a backing server, I’m going to focus on database access for the rest of this post. Using a simple example from Marten usage, consider this code where I’m just creating five little documents and persisting them to a database:
public static async Task storing_many(IDocumentSession session)
{
    var user1 = new User { FirstName = "Magic", LastName = "Johnson" };
    var user2 = new User { FirstName = "James", LastName = "Worthy" };
    var user3 = new User { FirstName = "Michael", LastName = "Cooper" };
    var user4 = new User { FirstName = "Mychal", LastName = "Thompson" };
    var user5 = new User { FirstName = "Kurt", LastName = "Rambis" };

    session.Store(user1);
    session.Store(user2);
    session.Store(user3);
    session.Store(user4);
    session.Store(user5);

    // Marten will *only* make a single database request here that
    // bundles up "upsert" statements for all five users added above
    await session.SaveChangesAsync();
}
In the code above, Marten is only issuing a single batched command to the backing database that performs all five “upsert” operations in one network round trip. We were very performance conscious in the very early days of Marten development and did quite a bit of experimentation with different options for JSON serialization or how exactly to write SQL that queried inside of JSONB or even table structure. Consistently and unsurprisingly though, the biggest jump in performance was when we introduced command batching to reduce the number of network round trips between code using Marten and the backing PostgreSQL database. That early performance testing also led us to early investments in Marten’s batch querying support and the Include() query functionality that allows Marten users to fetch related data with fewer network hops to the database.
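Here’s a small sketch of that batch querying support, with arbitrary queries standing in for real ones:

public static async Task load_in_one_round_trip(IQuerySession session, Guid userId)
{
    // Queue up multiple queries to be executed together
    var batch = session.CreateBatchQuery();

    var single = batch.Load<User>(userId);
    var magics = batch.Query<User>().Where(x => x.FirstName == "Magic").ToList();

    // One network round trip executes everything queued up above
    await batch.Execute();

    // The individual results are now available
    var user = await single;
    var users = await magics;
}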
Just based on my own experience, here are two trends I see about interacting with databases in real world systems:
There’s a huge performance gain to be made by finding ways to batch database queries
It’s very common for systems in the real world to suffer from performance problems that can at least partially be traced to unnecessary chattiness between an application and its backing database(s)
At a guess, I think the underlying reasons for the chattiness problem are something like:
Developers who just aren’t aware of the expense of network round trips or aren’t aware of how to utilize any kind of database query batching to reduce the problems
Wrapper abstractions around the raw database persistence tooling that hide more powerful APIs that might alleviate the chattiness problem
Wrapper abstractions that encourage a pattern of only loading data by keys one row/object/document at a time
Wrapper abstractions around the raw database persistence that discourage developers from learning more about the underlying persistence tooling they’re using. Don’t underestimate how common that problem is. And I’ve absolutely been guilty of causing that issue in the past as a younger “architect” who created exactly those abstractions.
Complicated architectural layering that can make it quite difficult to easily reason about the cause and effect between system inputs and the database queries that those inputs spawn. Big call stacks of a controller calling a mediator tool that calls one service that calls other services that call different repository abstractions that all make database queries is a common source of chattiness because it’s hard to even see where all the chattiness is coming from by reading the code.
As you might know if you’ve stumbled across any of my writings or conference talks from the last couple years, I’m not a big fan of typical Clean/Onion Architecture approaches. These approaches introduce a lot of ceremony code into the mix, and I think that ceremony causes more harm overall than whatever benefits it brings.
Here’s an example that’s somewhat contrived, but also quite typical in terms of the performance issues I do see in real life systems. Let’s say you’ve got a command handler for a ShipOrder command that will need to access data for both a related Invoice and Order entity that could look something like this:
public class ShipOrderHandler
{
    private readonly IInvoiceRepository _invoiceRepository;
    private readonly IOrderRepository _orderRepository;
    private readonly IUnitOfWork _unitOfWork;

    public ShipOrderHandler(
        IInvoiceRepository invoiceRepository,
        IOrderRepository orderRepository,
        IUnitOfWork unitOfWork)
    {
        _invoiceRepository = invoiceRepository;
        _orderRepository = orderRepository;
        _unitOfWork = unitOfWork;
    }

    public async Task Handle(ShipOrder command)
    {
        // Making one round trip to get an Invoice
        var invoice = await _invoiceRepository.LoadAsync(command.InvoiceId);

        // Then a second round trip using the results of the first pass
        // to get follow up data
        var order = await _orderRepository.LoadAsync(invoice.OrderId);

        // do some logic that changes the state of one or both of these entities

        // Commit the transaction that spans the two entities
        await _unitOfWork.SaveChangesAsync();
    }
}
The code is pretty simple in this case, but we’re still making more database round trips than we absolutely have to — and real enterprise systems can get much, much bigger than my little contrived example and incur a lot more overhead because of the chattiness problem that the repository abstractions naturally let in.
Let’s try this functionality again, but this time depending directly on the raw persistence tooling (Marten’s IDocumentSession) and using a Wolverine-style command handler to boot to further reduce the code noise:
public static class ShipOrderHandler
{
    // We're still keeping some separation of concerns to separate the infrastructure from the business
    // logic, but Wolverine lets us do that just through separate functions instead of having to use
    // all the limiting repository abstractions
    public static async Task<(Order, Invoice)> LoadAsync(IDocumentSession session, ShipOrder command)
    {
        // This is important (I think:)), the admittedly complicated
        // Marten usage below fetches both the invoice and its related order in a
        // single network round trip to the database and can lead to substantially
        // better system performance
        Order order = null;
        var invoice = await session
            .Query<Invoice>()
            .Include<Order>(i => i.OrderId, o => order = o)
            .Where(x => x.Id == command.InvoiceId)
            .FirstOrDefaultAsync();

        return (order, invoice);
    }

    public static void Handle(ShipOrder command, Order order, Invoice invoice)
    {
        // do some logic that changes the state of one or both of these entities

        // I'm assuming that Wolverine is handling the transaction boundaries through
        // middleware here
    }
}
In the second code sample, we’ve been able to go right at the Marten tooling and take advantage of its more advanced functionality to batch up data fetching for better performance, something that wasn’t easily possible when we were putting repository abstractions between our command handler and the underlying persistence tooling. Moreover, we can now actually reason about the database operations that happen as a result of our command, which can be somewhat obfuscated by the extra layers and code separation common in Onion/Clean/Ports and Adapters style approaches.
It’s not just repository abstractions that cause problems; sometimes it’s perfectly helpful little extension methods that can be the source of chattiness. Here’s a pair of helper extension methods around Marten’s event store functionality that let you start a new event stream or append a single event to an existing event stream in a single line of code:
public static class DocumentSessionExtensions
{
    public static Task Add<T>(this IDocumentSession documentSession, Guid id, object @event, CancellationToken ct)
        where T : class
    {
        documentSession.Events.StartStream<T>(id, @event);
        return documentSession.SaveChangesAsync(token: ct);
    }

    public static Task GetAndUpdate<T>(
        this IDocumentSession documentSession,
        Guid id,
        int version,

        // If we're being finicky about performance here, these kinds of inline
        // lambdas are NOT cheap at runtime and I'm recommending against
        // continuation passing style APIs in application hot paths for
        // my clients
        Func<T, object> handle,
        CancellationToken ct
    ) where T : class =>
        documentSession.Events.WriteToAggregate<T>(id, version, stream =>
            stream.AppendOne(handle(stream.Aggregate)), ct);
}
Fine, right? These potentially make your code cleaner and simpler, but of course, they’re also potentially harmful. Here’s an example usage of these two extension methods that’s similar to some code I saw in the wild last week:
public static class Handler
{
    public static async Task Handle(Command command, IDocumentSession session, CancellationToken token)
    {
        var id = CombGuidIdGeneration.NewGuid();

        // One round trip
        await session.Add<Aggregate>(id, new FirstEvent(), token);

        if (command.SomeCondition)
        {
            // This actually makes a pair of round trips, one to fetch the current state
            // of the Aggregate compiled from the first event appended above, then
            // a second to append the SecondEvent
            await session.GetAndUpdate<Aggregate>(id, 1, _ => new SecondEvent(), token);
        }
    }
}
I got involved with this code in reaction to some load testing that was resulting in disappointing results. When I was pulled in, I saw the extra round trips that snuck in because of the usage of the convenience extension methods they had been using, and suggested a change to something like this (but with Wolverine’s aggregate handler workflow that simplified the code more than this):
public static class Handler
{
    public static async Task Handle(Command command, IDocumentSession session, CancellationToken token)
    {
        var events = determineEvents(command).ToArray();

        var id = CombGuidIdGeneration.NewGuid();
        session.Events.StartStream<Aggregate>(id, events);

        await session.SaveChangesAsync(token);
    }

    // This was isolated so you can easily unit test the business
    // logic that "decides" what events to append
    public static IEnumerable<object> determineEvents(Command command)
    {
        yield return new FirstEvent();

        if (command.SomeCondition)
        {
            yield return new SecondEvent();
        }
    }
}
The code above cut down the number of network round trips to the database and greatly improved the results of the load testing.
Summary
If performance is a concern in your system (it’s not always), you probably need to be cognizant of how chatty your application is in its communication and interaction with the backing database. Or with any other remote system or infrastructure that your system interacts with at runtime.
Personally, I think that higher ceremony code structures make it much more likely that you’ll incur issues with database chattiness: first by obfuscating your code so that you don’t even easily recognize where the chattiness is, and second by wrapping simplifying abstractions around your database persistence tooling that cut you off from the more advanced functionality for query batching.
And of course, both Wolverine and Marten put a heavy emphasis on reducing code ceremony and generally on code noise in general because I personally think that’s very valuable to help teams succeed over time with software systems in the wild. My theory of the case is that even at the cost of a little bit of “magic”, simply reducing the amount of code you have to wade through in existing systems will make those systems easier to maintain and troubleshoot over time.
And on that note, I’m basically on vacation for the next week, and you can address your complaints about my harsh criticism of Clean/Onion Architectures to the ether:-)
By the way, JasperFx Software offers support contracts or tailored consulting engagements to help your shop maximize your success with Wolverine.
It’s an unfortunate fact of long running software tooling projects like Wolverine that decisions made years earlier won’t hold up perfectly as the usage requirements, underlying platform, and other tools it’s integrated with change over time. In the case of Wolverine, it’s time for a medium sized 3.0 release to make some potentially breaking API changes and changes to its original model to accommodate some use cases that weren’t part of the original vision.
This is all subject to change based on whatever feedback comes in, but right now I think the items that are absolutely in scope are:
De-coupling Wolverine from Lamar. That work has been unfortunately huge so far, but it’s almost complete as I type this. At a minimum, Wolverine is going to be functional with both the built in ServiceProvider container and Lamar itself. To a large degree, this was actually a prerequisite for the next bullet point
Get Wolverine completely on the .NET Aspire train. Really just making sure that all the external infrastructure integrations with Wolverine that also have Aspire integrations like PostgreSQL, Sql Server, Rabbit MQ, Azure Service Bus, Kafka, and AWS SQS are able to correctly use the Aspire configuration. That’s really just a matter of using some IoC container integration and not a huge deal. I think that testing and documentation will be more work on that front than the actual development. I know that there are mixed opinions about whether or not Aspire is valuable, but this can’t be a reason why folks won’t consider using Wolverine in the future, so here we go.
Beyond that, I think there are some additive features I just didn’t want to work on right now until the 3.0 work is complete:
An HTTP messaging transport — which I think we really want inside of Wolverine as a precursor to the long planned “Critter Stack Pro” add on tooling. Which also might help enable:
Some improvements for dynamic multi-tenancy where you want to spin up or down tenants in a running application without downtime
Improving the EF Core integration with Wolverine to bring it a bit up to parity with the existing Marten integrations. That would potentially include multi-tenancy support and more middleware and productivity shortcuts for Wolverine
I’d like to completely reconsider our bootstrapping, especially in an application that combines both Marten & Wolverine. I think there’s some room for a customized CritterStackApplicationBuilder or CritterStackWebApplication model. More on this later.
I don’t really want this to be a huge release that takes very long, and I absolutely don’t want it to require many changes at all for our users to adopt.
I’m a huge fan of alt country or Americana music, definitely appreciate some classic country like Johnny Cash, Lefty Frizzell, or Buck Owens, and I live in Austin so it’s mandatory to be into Willie Nelson, Robert Earl Keen, Hayes Carll, and the late, great Billy Joe Shaver (maybe my favorite musician of all time). Mainstream country though? That’s a totally different animal. I like *some* newer country artists like Maren Morris or Kacey Musgraves (i.e. not really typical country at all), but I was only really into mainstream country during its 90’s boom when I was in or right out of college.
Just for fun and out of curiosity, I’ve been on a couple year mission to reevaluate what out of the country music of that time is still worth listening to and what was “Mighty Morphin Power Rangers” level cheesy. I’m going hard into Comic Book Guy mode here as I make up some categories and lists:
Albums in rotation for me
I have plenty of playlists with 90’s country, but what albums are still worthwhile to me for a full play? I only came up with a few:
This Time by Dwight Yoakam. Such an awesome, timeless album. It’s the “Sharon Stone dumped me and I’m sad” album.
What a Crying Shame by the Mavericks. One of my all time favorite bands, and I even proposed to my wife right before seeing a Raul Malo concert at Gruene Hall. Trampoline from ’98 is my overall favorite Mavericks album but I’m saying they’d moved long past country by that point anyway.
Killin’ Time by Clint Black. Never got into anything else he ever did though
Weirdly enough, I still like Thinkin’ Problem by David Ball. Still holds up for me even though it wasn’t any kind of big hit
Born to Fly by Sara Evans. That might have been the very last mainstream country CD I ever purchased. Think it was in the 2000’s, but still including it here!
What surprisingly holds up for me
I remember liking him at the time, but I will happily pull out Joe Diffie’s (RIP, and an early COVID casualty:( ) greatest hits and play that. The humor still works, and I think he had a genuinely great voice. I’ll see your John Deere Green, but give me Prop Me Up By the Jukebox When I Die.
Mark Chesnutt. I liked him at the time, remember him always being in the background on the radio, but I don’t think I appreciated how good he was until I tried out his Greatest Hits collection for this post. Same kind of humor (Bubba Shot the Jukebox) as Joe Diffie, but his ballads hold up for me too. Awesome voice too.
Sammy Kershaw. I’ll still pull out his greatest hits once in a while. I loved him at the time of course
Um, what about the women?
This post has been a bit of a sausage fest so far, and that’s not fair. Let’s talk about the women too! Just like today, I think the female artists somehow hold up better overall.
Her biggest stuff came later, but I was a Sara Evans fan from the start and I’ll still play her music sometimes
Jo Dee Messina was awfully good in the late 90’s, and Heads Carolina, Tails California still merits cranking up the volume if it ever comes on the radio
Faith Hill was cheesy to me when she first got started, but I like her later, admittedly poppy stuff.
Shania Twain. I absolutely remember why 20-something guys like me were into her, a couple songs were fun, but man, that stuff is cheesy
I still like Suzy Bogguss
I don’t mind Trisha Yearwood or Mary Chapin Carpenter, but I think they were products of their time and they sound really dated to me now
The Chicks were and are the real thing. I liked their first couple albums, but Home from ’02 is their best in my opinion. Doesn’t hurt that there was a strong Austin influence on that album from the song writers
Doug Stone ballads walked right up to the line between “good” and “cheesy”.
What about Garth Brooks or George Strait?
Just like Alan Jackson, the two biggest country guys of the 90’s could easily swerve from “hey, I really like that” to “zomg, that’s so cheesy I can’t believe anybody would ever listen to that on purpose.”
For Garth Brooks, give me Friends in Low Places & Shameless, and keep The Dance or Unanswered Prayers on the sideline. For George Strait, I think I’d throw out most of his 90’s music, but keep his earlier stuff like Amarillo by Morning or Baby Blue as a guilty pleasure.
Not counting Americana, but if I was…
For the sake of this post, I’m only going to consider mainstream artists and not bands that I think of as primarily being Americana or Alt Country. That being said, I think the Alt Country music of that era absolutely holds up:
Every single album that Shaver put out in his 90’s renaissance was fantastic, but I’m calling out Unshaven: Live at Smith’s Olde Bar as one of my all time favorite albums of any era or genre. Check it out, but make sure you play it loud. I cannot overstate how good those guys were live in the 90’s before Eddie Shaver passed away.
Joe Ely put out some great albums in the 90’s too, with Letter to Laredo being my favorite of his
Robert Earl Keen was prolific, and I’ll toss up No. 2 Live Dinner as my favorite of that time, but I would accept A Bigger Piece of Sky, Gringo Honeymoon, or Walking Distance as very strong contenders. With Feelin’ Good Again being one of my favorite songs of all time
The Mavericks are one of my all time favorite bands, and they’ve long since transcended country, but they started as a very good traditional country band
Alan Jackson is across the whole spectrum between genuinely great stuff (Chasing that Neon Rainbow) and oh my gosh, that’s absurdly cheesy and I’m embarrassed for people that like that (Small Town Country Man)
Sawyer Brown, but some of their stuff is too maudlin
I have a soft spot for Sara Evans partially since she’s from Missouri, but also,
Faith Hill’s later music when she admittedly got a little poppier
In previous posts I’ve described first why multi-tenancy might be useful, how to do it with Marten’s “conjoined” tenancy model, and lastly how to do it with separate databases for each tenant.
Marten is more flexible than that though, and it’s sometimes valid to mix and match the two tenancy modes. Offhand, I can think of a couple reasons that JasperFx clients or other Marten users have hit:
Multi-level tenancy. In one case I’m aware of, a Marten system was using a separate database for each country where the system was operational, but used conjoined tenancy by company doing business within that country. That was beneficial for scaling, but also to be able to easily query across all businesses within that country and to keep reference data centralized by country database as well
Being able to reduce hosting costs by centralizing one larger customer on their own database while keeping other customers on a second database. Or simply spreading customer data around to more equitably spread database load rather than incurring the hosting cost of having so many separate databases
The simplest model for this is the static multi-tenancy recipe in Marten that simply makes you specify the tenant ids and databases upfront like so:
var builder = Host.CreateApplicationBuilder();
builder.Services.AddMarten(opts =>
{
    opts.MultiTenantedDatabases(tenancy =>
    {
        // Put data for these three tenants in the "main" database
        // The 2nd argument is just an identifier for logs and diagnostics
        tenancy.AddMultipleTenantDatabase(builder.Configuration.GetConnectionString("main"), "main")
            .ForTenants("company1", "company2", "company3");

        // Put the single "big" tenant in the "group2" database
        tenancy.AddSingleTenantDatabase(builder.Configuration.GetConnectionString("group2"), "big");
    });
});

await builder.Build().StartAsync();
When fetching data, you still use the same mechanism to create a session for the tenant id like:
var session = store.LightweightSession("company1");
Marten will select the proper database for the “company1” tenant id above, and also apply a default tenant_id = 'company1' filter to all LINQ queries on that session unless you explicitly tell Marten to query against all tenants or a different, specific tenant id.
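Here’s a hedged sketch of what that looks like in practice, including Marten’s LINQ escape hatches for deliberately crossing tenant boundaries:

// All queries through this session are filtered to "company1" by default
var session = store.LightweightSession("company1");
var ours = await session.Query<User>().ToListAsync();

// Deliberately query across every tenant...
var everyone = await session.Query<User>()
    .Where(x => x.AnyTenant())
    .ToListAsync();

// ...or against a specific set of tenants
var some = await session.Query<User>()
    .Where(x => x.TenantIsOneOf("company2", "company3"))
    .ToListAsync();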
Summary and What’s Next
You’ve got options with Marten! I think for the next post in this series I’ll move to Wolverine instead and how it can track work for a tenant across asynchronous message handling.