I know, command line parsing libraries are about the least exciting tooling in the entire software universe, and there are dozens of perfectly competent ones out there. Oakton, though, is heavily used throughout the entire “Critter Stack” (Marten, Weasel, and Wolverine plus other tools) to provide command line utilities directly to any old .NET Core application that happens to be bootstrapped with one of the many ways to arrive at an IHost. Oakton’s key advantage over other command line parsing tools is its ability to easily add extension commands to a .NET application from external assemblies. And of course, as part of the entire JasperFx / Critter Stack philosophy of developer tooling, Oakton’s very concept was originally created to enhance the testability of custom command line tooling, unlike some other tools *cough* System.CommandLine *cough*.
Oakton also has some direct framework-ish elements for environment checks and the stateful resource model used very heavily all the way through Marten and Wolverine to provide the very best development time experience possible when using our tools.
Today the extended JasperFx / Critter Stack community released Oakton 6.2 with some new, hopefully important use cases. First off, the stateful resource model that we use to set up, tear down, or just check “configured stateful resources” in our system like database schemas or message broker queues just got the concept of dependencies between resources, such that you can control which resources are set up first.
Next, Oakton finally got a couple easy to use recipes for utilizing IoC services in Oakton commands (it was possible before, just maybe with a little higher ceremony than some folks prefer). The first way assumes that you’re running Oakton from one of the many flavors of IHostBuilder or IHost like so:
// This would be the last line in your Program.Main() method
// "app" in this case is a WebApplication object, but there
// are other extension methods for headless services
return await app.RunOaktonCommands(args);
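For a little more context, a minimal Program might look something like this (just a sketch; the endpoint is a placeholder):

using Oakton;

var builder = WebApplication.CreateBuilder(args);

// Register Marten, Wolverine, and your own services here...

var app = builder.Build();
app.MapGet("/", () => "Hello!");

// Oakton takes the place of the usual app.Run()
return await app.RunOaktonCommands(args);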
You can build an Oakton command class that uses “setter injection” to get IoC services like so:
public class MyDbCommand : OaktonAsyncCommand<MyInput>
{
    // Just assume maybe that this is an EF Core DbContext
    [InjectService]
    public MyDbContext DbContext { get; set; }

    public override Task<bool> Execute(MyInput input)
    {
        // do stuff with DbContext from up above
        return Task.FromResult(true);
    }
}
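For completeness, the MyInput type referenced above isn’t shown here; an Oakton input class is just a plain class with optional attributes. A hypothetical version might be (the properties are made up for illustration):

public class MyInput
{
    // A positional argument for the command
    [Description("The identity of the record to process")]
    public Guid Id { get; set; }

    // By Oakton convention, a property suffixed with "Flag"
    // becomes an optional --verbose style flag
    public bool VerboseFlag { get; set; }
}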
Just know that when you do this and execute a command that has decorated properties for services, Oakton is:
Building your system’s IHost
Creating a new IServiceScope from your application’s DI container, or in other words, a scoped container
Building your command object and setting all the dependencies on your command object by resolving each dependency from the scoped container created in the previous step
Executing the command as normal
Disposing the scoped container and the IHost, effectively in a try/finally so that Oakton is always cleaning up after the application
In other words, Oakton is largely taking care of annoying issues like object disposal cleanup, scoping, and actually building the IHost if necessary.
Oakton’s Future
The Critter Stack Core team & I are charting a course for our entire ecosystem I’m calling “Critter Stack 2025” that’s hoping to greatly reduce the technical challenges in adopting our tool set. As part of that, what’s now Oakton is likely to move into a new shared library (I think it’s just going to be called “JasperFx”) between the various critters (and hopefully new critters for 2025!). Oakton itself will probably get a temporary life as a shim to the new location as a way to ease the transition for existing users. There’s a balance between actively improving your toolset for potential new users and not disturbing existing users too much. We’re still working on whatever that balance ends up being.
Building and maintaining a large, hosted system that requires multi-tenancy comes with a fair number of technical challenges. JasperFx Software has helped several of our clients achieve better results with their particular multi-tenancy challenges with Marten and Wolverine, and we’re available to do the same for your shop! Drop us a message on our Discord server or email us at sales@jasperfx.net to start a conversation.
This is continuing a series about multi-tenancy with Marten, Wolverine, and ASP.Net Core:
Using Partitioning for Better Performance with Multi-Tenancy and Marten (future)
Multi-Tenancy in Wolverine with EF Core & Sql Server (future, and honestly, future functionality as part of Wolverine 4.0)
Dynamic Tenant Creation and Retirement in Marten and Wolverine (definitely in the future)
Let’s say that you’re using the Marten + PostgreSQL combination for your system’s persistence needs in a web service application. Let’s also say that you want to keep the customer data within your system in completely different databases per customer company (or whatever makes sense in your system). Lastly, let’s say that you’re using Wolverine for asynchronous messaging and as a local “mediator” tool. Fortunately, Wolverine by itself has some important built in support for multi-tenancy with Marten that’s going to make your system a lot easier to build.
Let’s get started by just showing a way to opt into multi-tenancy with separate databases using Marten and its integration with Wolverine for middleware, saga support, and the all important transactional outbox support:
// Adding Marten for persistence
builder.Services.AddMarten(m =>
{
    // With multi-tenancy through a database per tenant
    m.MultiTenantedDatabases(tenancy =>
    {
        // You would probably be pulling the connection strings out of configuration,
        // but it's late in the afternoon and I'm being lazy building out this sample!
        tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant1;Username=postgres;password=postgres", "tenant1");
        tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant2;Username=postgres;password=postgres", "tenant2");
        tenancy.AddSingleTenantDatabase("Host=localhost;Port=5433;Database=tenant3;Username=postgres;password=postgres", "tenant3");
    });

    m.DatabaseSchemaName = "mttodo";
})
.IntegrateWithWolverine(masterDatabaseConnectionString: connectionString);
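As the comment above says, you’d more realistically pull those connection strings out of configuration rather than hard code them. A quick sketch of what that could look like inside the MultiTenantedDatabases() callback, assuming a connection string entry named after each tenant id in your appsettings.json file:

m.MultiTenantedDatabases(tenancy =>
{
    foreach (var tenantId in new[] { "tenant1", "tenant2", "tenant3" })
    {
        // Assumes a connection string entry per tenant id
        var connectionString = builder.Configuration.GetConnectionString(tenantId);
        tenancy.AddSingleTenantDatabase(connectionString, tenantId);
    }
});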
Just for the sake of completion, here’s some sample Wolverine configuration that pairs up with the above:
// Wolverine usage is required for WolverineFx.Http
builder.Host.UseWolverine(opts =>
{
    // This middleware will apply to the HTTP
    // endpoints as well
    opts.Policies.AutoApplyTransactions();

    // Setting up the outbox on all locally handled
    // background tasks
    opts.Policies.UseDurableLocalQueues();
});
Now that we’ve got that basic setup for Marten and Wolverine, let’s move on to the first issue: how the heck does Wolverine “know” which tenant should be used? In a later post I’ll show how Wolverine.HTTP has built in tenant id detection, but for now, let’s pretend that you’re already taking care of tenant id detection from incoming HTTP requests somehow within your ASP.Net Core pipeline, and you just need to pass that into a Wolverine message handler that is being executed from within an MVC Core controller (“Wolverine as Mediator”):
[HttpDelete("/todoitems/{tenant}/longhand")]
public async Task Delete(
    string tenant,
    DeleteTodo command,
    IMessageBus bus)
{
    // Invoke inline for the specified tenant
    await bus.InvokeForTenantAsync(tenant, command);
}
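The DeleteTodo command itself isn’t shown in this post, so just assume it’s a simple record carrying the Todo identity, something like:

public record DeleteTodo(int Id);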
By using the IMessageBus.InvokeForTenantAsync() method, we’re invoking a command inline, but telling Wolverine what the tenant id is. The command handler might look something like this:
// Keep in mind that we set up the automatic
// transactional middleware usage with Marten & Wolverine
// up above, so there's just not much to do here
public static class DeleteTodoHandler
{
    public static void Handle(DeleteTodo command, IDocumentSession session)
    {
        session.Delete<Todo>(command.Id);
    }
}
Not much going on there in our code, but Wolverine is helping us out here by:
Seeing the tenant id value that we passed in, which Wolverine tracks in its own Envelope structure (Wolverine’s version of the Envelope Wrapper from the venerable EIP book)
Creating the Marten IDocumentSession for that tenant id value, which will be reading and writing to the correct tenant database underneath Marten
Now, let’s make this a little more complex by also publishing an event message in that message handler for the DeleteTodo message:
public static class DeleteTodoHandler
{
    public static TodoDeleted Handle(DeleteTodo command, IDocumentSession session)
    {
        session.Delete<Todo>(command.Id);

        // This cascaded event message will be published by Wolverine
        return new TodoDeleted(command.Id);
    }
}

public record TodoDeleted(int TodoId);
public record TodoDeleted(int TodoId);
Assuming that the TodoDeleted message is being published to a “durable” endpoint, Wolverine is using its transactional outbox integration with Marten to persist the outgoing message in the same tenant database and same transaction as the deletion we’re doing in that command handler. In other words, Wolverine is able to use the tenant databases for its outbox support with no other configuration necessary than what we did up above in the calls to AddMarten() and UseWolverine().
Moreover, Wolverine is even able to use its “durability agent” against all the tenant databases to ensure that any work that is somehow stranded by crashed processes is eventually recovered and completed.
Lastly, the TodoDeleted event message cascaded above from our message handler would be tracked throughout Wolverine with the tenant id of the original DeleteTodo command message, so that you can do multi-part workflows through Wolverine while it tracks the tenant id and utilizes the correct tenant database through Marten all along the way.
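To make that concrete, here’s a hypothetical downstream handler for TodoDeleted. Because Wolverine carries the tenant id along on the message envelope, the IDocumentSession injected below would already be pointed at the correct tenant database (the DeletionAudit document is made up just for this sketch):

public static class TodoDeletedHandler
{
    public static void Handle(TodoDeleted message, IDocumentSession session)
    {
        // This writes to the same tenant database that the original
        // DeleteTodo command executed against
        session.Store(new DeletionAudit(Guid.NewGuid(), message.TodoId, DateTimeOffset.UtcNow));
    }
}

public record DeletionAudit(Guid Id, int TodoId, DateTimeOffset Timestamp);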
Summary
Building solutions with multi-tenancy can be complicated, but the Wolverine + Marten combination can make it a lot easier.
Hey, did you know that JasperFx Software offers both consulting services and support plans for the “Critter Stack” tools? Or for architectural or test automation help with any old server side .NET application. One of the other things we do is to build out custom features that our customers need in the “Critter Stack” — like the Marten-managed table partitioning for improved scaling and performance in this release!
A fairly sizable Marten 7.28 release just went live (or will at least be available on Nuget by the time you read this) with a mix of new features and usability improvements. The biggest new feature is “Marten-Managed Table Partitioning by Tenant.” Lots of words! Consider this scenario:
You have a system with a huge number of events
You also need to use Marten’s support for multi-tenancy
For historical reasons and for ease of deployment and management, you are using Marten’s “conjoined” multi-tenancy model and keeping all of your tenant data in the same database (this might have some very large cloud hosting cost saving benefits as well)
You want to be able to scale the database performance for all the normal reasons
PostgreSQL table partitioning to the rescue! In recent Marten releases, we’ve added support to take advantage of postgres table sharding as a way to improve performance in many operations — with one of the obvious first usages using table sharding per tenant id for Marten’s “conjoined” tenancy model. Great! Just tell Marten exactly what the tenant ids are and the matching partition configuration and go!
But wait, what if you have a very large number of tenants and might need to even add new tenants at runtime and without incurring any kind of system downtime? Marten now has a partitioning feature for multi-tenancy that can dynamically create per-tenant shards at runtime and manage the list of tenants in its own database storage like so:
var builder = Host.CreateApplicationBuilder();
builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));

    // Make all document types use "conjoined" multi-tenancy -- unless explicitly marked with
    // [SingleTenanted] or explicitly configured via the fluent interface
    // to be single-tenanted
    opts.Policies.AllDocumentsAreMultiTenanted();

    // It's required to explicitly tell Marten which database schema to put
    // the mt_tenant_partitions table in
    opts.Policies.PartitionMultiTenantedDocumentsUsingMartenManagement("tenants");
});
With some management helpers of course:
await theStore
    .Advanced
    // This is ensuring that there are tenant id partitions for all multi-tenanted documents
    // with the named tenant ids
    .AddMartenManagedTenantsAsync(CancellationToken.None, "a1", "a2", "a3");
If you’re familiar with the pg_partman tool, this was absolutely meant to fulfill a similar role within Marten for per-tenant table partitioning.
Aggregation Projections with Explicit Code
This is probably long overdue, but the other highlight that’s probably much more globally applicable is the ability to write Marten event aggregation projections with strictly explicit code, for folks who don’t care for Marten’s conventional method approaches — or who just want a more complicated workflow than what the conventional approaches can support.
You still need to use the CustomProjection<TDoc, TId> base class for your logic, but now there are simpler methods that can be overridden to express explicit “left fold over events to create an aggregated document” logic as shown below:
public class ExplicitCounter: CustomProjection<SimpleAggregate, Guid>
{
    public override SimpleAggregate Apply(SimpleAggregate snapshot, IReadOnlyList<IEvent> events)
    {
        snapshot ??= new SimpleAggregate();

        foreach (var e in events.Select(x => x.Data))
        {
            if (e is AEvent) snapshot.ACount++;
            if (e is BEvent) snapshot.BCount++;
            if (e is CEvent) snapshot.CCount++;
            if (e is DEvent) snapshot.DCount++;
        }

        // You have to explicitly return the new value
        // of the aggregated document no matter what!
        return snapshot;
    }
}
The explicitly coded projections can also be used for live aggregations (AggregateStreamAsync()) and within FetchForWriting() as well. This has been a longstanding request, and will receive even stronger support in Marten 8.
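As a sketch of that usage, assuming the ExplicitCounter projection is registered with the Live lifecycle (opts is your StoreOptions, streamId an existing event stream id):

// Registration inside your AddMarten() or DocumentStore setup
opts.Projections.Add(new ExplicitCounter(), ProjectionLifecycle.Live);

// And later, aggregating a single stream on the fly with the explicit code
var aggregate = await session.Events
    .AggregateStreamAsync<SimpleAggregate>(streamId);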
LINQ Improvements
Supporting a LINQ provider is the gift that never stops giving. There’s some small improvements this time around for some minor things:
// string.Trim()
var docs = await session.Query<SomeDoc>()
    .Where(x => x.Description.Trim() == "something")
    .ToListAsync();

// Select to TimeSpan out of a document
var durations = await session.Query<SomeDoc>()
    .Select(x => x.Duration)
    .ToListAsync();

// Query the raw event data by event types
var raw = await theSession.Events.QueryAllRawEvents()
    .Where(x => x.EventTypesAre(typeof(CEvent), typeof(DEvent)))
    .ToListAsync();
Hey, did you know that JasperFx Software offers both consulting services and support plans for the “Critter Stack” tools? One of the common areas where we’ve helped our clients is in using Marten or Wolverine when the usage involves quite a bit of potential concerns about concurrency. As I write this, I’m currently working with a JasperFx client to implement the FetchForWriting API shown in this post as a way of improving their system’s resiliency to concurrency problems.
You’ve decided to use event sourcing as your persistence strategy, so that your persisted state of record is the actual business events, segregated by streams that represent changes in state to some kind of logical business entity (an invoice? an order? an incident? a project?). Of course there will have to be some way of resolving or “projecting” the raw events into a usable view of the system state, but we’ll get to that.
You’ve also decided to organize your system around a CQRS architectural style (Command Query Responsibility Segregation). With a CQRS approach, the backend code is mostly organized around the “verbs” of your system, meaning the “command” messages (this could be HTTP services, and I’m not implying that there automatically has to be any asynchronous messaging) that are handled to capture changes to the system (events in our case), and “query” endpoints or APIs that strictly serve up information about your system.
While it’s certainly possible to do either Event Sourcing or CQRS without the other, the two things do go together, as Forrest Gump would say, like peas and carrots. Marten is certainly valuable as part of a CQRS with Event Sourcing approach within a range of .NET messaging or web frameworks, but there is quite a bit of synergy between Marten and its “Critter Stack” stable mate Wolverine (see the details about the integration here).
And lastly of course, you’ve quite logically decided to use Marten as the persistence mechanism for the events. Marten is also a strong fit because it comes with some important functionality that we’ll need for CQRS command handlers:
Marten’s event projection support can give us a representation of the current state of the raw event data in a usable way that we’ll need within our command handlers to both validate requested actions and to “decide” what additional events should be persisted to our system
The FetchForWriting API in Marten will not only give us access to the projected event data, but it provides an easy mechanism for both optimistic and pessimistic concurrency protections in our system
Marten allows for a couple different options of projection lifecycle that can be valuable for performance optimization with differing system needs
As a sample application problem domain, I got to be part of a successful effort during the worst of the pandemic to stand up a new “telehealth” web portal using event sourcing. One of the concepts we needed to track in that system was the activity of a health care provider (nurse, doctor, nurse practitioner), with events for when they were available and what they were doing at any particular time during the day for later decision making:
public record ProviderAssigned(Guid AppointmentId);
public record ProviderJoined(Guid BoardId, Guid ProviderId);
public record ProviderReady;
public record ProviderPaused;
public record ProviderSignedOff;
// "Charting" is basically just whatever
// paperwork they need to do after
// an appointment, and it was important
// for us to track that time as part
// of their availability and future
// planning
public record ChartingFinished;
public record ChartingStarted;
public enum ProviderStatus
{
    Ready,
    Assigned,
    Charting,
    Paused
}
But of course, at several points, you do actually need to know what the actual state of the provider’s current shift is to be able to make decisions within the command handlers, so we had a “write” model something like this:
// I'm sticking the Marten "projection" logic for updating
// state from the events directly into this "write" model,
// but you could separate that into a different class if you
// prefer
public class ProviderShift
{
    public Guid Id { get; set; }

    // This is important, this would be set by Marten to the
    // current event number or revision of the ProviderShift
    // aggregate. This is going to be important later for
    // concurrency protections
    public int Version { get; set; }

    public Guid BoardId { get; private set; }
    public Guid ProviderId { get; init; }
    public ProviderStatus Status { get; private set; }
    public string Name { get; init; }
    public Guid? AppointmentId { get; set; }

    // The Create & Apply methods are conventional targets
    // for Marten's "projection" capabilities
    // But don't worry, you would never *have* to take a reference
    // to Marten itself like I did below just out of laziness
    public static ProviderShift Create(ProviderJoined joined)
    {
        return new ProviderShift
        {
            Status = ProviderStatus.Ready,
            ProviderId = joined.ProviderId,
            BoardId = joined.BoardId
        };
    }

    public void Apply(ProviderReady ready)
    {
        AppointmentId = null;
        Status = ProviderStatus.Ready;
    }

    public void Apply(ProviderAssigned assigned)
    {
        Status = ProviderStatus.Assigned;
        AppointmentId = assigned.AppointmentId;
    }

    public void Apply(ProviderPaused paused)
    {
        Status = ProviderStatus.Paused;
        AppointmentId = null;
    }

    // This is kind of a catch all for any paperwork the
    // provider has to do after an appointment has ended
    // for the just concluded appointment
    public void Apply(ChartingStarted charting)
    {
        Status = ProviderStatus.Charting;
    }
}
The whole purpose of the ProviderShift type above is to be a “write” model that contains enough information for the command handlers to “decide” what should be done — as opposed to a “read” model that might have richer information like the provider’s name that would be more suitable for use within a user interface. “Write” or “read” in this case is just a role within the system; at different times it might be valuable to have separate models for different consumers of the information, and at other times you can happily get by with a single model.
Alright, so let’s finally look at a very simple command handler related to providers that tries to mark the provider as being finished charting:
// Since we're focusing on Marten, I'm using an MVC Core
// controller just because it's commonly used and understood
public class CompleteChartingController : ControllerBase
{
    [HttpPost("/provider/charting/complete")]
    public async Task Post(
        [FromBody] CompleteCharting charting,
        [FromServices] IDocumentSession session)
    {
        // We're looking up the current state of the ProviderShift aggregate
        // for the designated provider
        var stream = await session
            .Events
            .FetchForWriting<ProviderShift>(charting.ProviderShiftId, HttpContext.RequestAborted);

        // The current state
        var shift = stream.Aggregate;

        if (shift.Status != ProviderStatus.Charting)
        {
            // Obviously do something smarter in your app, but you
            // get the point
            throw new Exception("The shift is not currently charting");
        }

        // Append a single new event just to say
        // "charting is finished". I'm relying on
        // Marten's automatic metadata to capture
        // the timestamp of this event for me
        stream.AppendOne(new ChartingFinished());

        // Commit the transaction
        await session.SaveChangesAsync();
    }
}
I’m using the Marten FetchForWriting() API to get at the current state of the event stream for the designated provider shift (a provider’s activity during a single day). I’m also using this API to capture a new event marking the provider as being finished with charting. FetchForWriting() is doing two important things for us:
Executes or finds the projected data for ProviderShift from the raw events. More on this a little later
Provides a little bit of optimistic concurrency protection for our provider shift stream
Building on the theme of concurrency first, the command above will “remember” the current state of the ProviderShift at the point that FetchForWriting() is called. Upon SaveChangesAsync(), Marten will reject the transaction and throw a ConcurrencyException if somehow, some way, some other request magically came through and committed changes against that same ProviderShift stream between the call to FetchForWriting() and SaveChangesAsync().
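If you’d rather handle that failure gracefully than let the exception bubble up, a minimal sketch would be (ConcurrencyException lives in the Marten.Exceptions namespace):

try
{
    await session.SaveChangesAsync();
}
catch (ConcurrencyException)
{
    // Somebody else won the race. Surface an HTTP 409 to the caller,
    // or retry the whole command from the top with fresh state
}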
That level of concurrency is baked in, but we can do a little bit better. Remember that the ProviderShift has this property:
// This is important, this would be set by Marten to the
// current event number or revision of the ProviderShift
// aggregate. This is going to be important later for
// concurrency protections
public int Version { get; set; }
The projection capability of Marten makes it easy for us to “know” and track the current version of the ProviderShift stream so that we can feed it back to command handlers later. Here’s the full definition of the CompleteCharting command:
public record CompleteCharting(
    Guid ProviderShiftId,

    // This version is meant to mean "I was issued
    // assuming that the ProviderShift is currently
    // at this version in the server, and if the version
    // has shifted since, then this command is now invalid"
    int Version
);
Let’s tighten up the optimistic concurrency protection such that Marten will shut down the command handling faster before we waste system resources doing unnecessary work by passing the command version right into this overload of FetchForWriting():
// Since we're focusing on Marten, I'm using an MVC Core
// controller just because it's commonly used and understood
public class CompleteChartingController : ControllerBase
{
    [HttpPost("/provider/charting/complete")]
    public async Task Post(
        [FromBody] CompleteCharting charting,
        [FromServices] IDocumentSession session)
    {
        // We're looking up the current state of the ProviderShift aggregate
        // for the designated provider
        var stream = await session
            .Events
            .FetchForWriting<ProviderShift>(
                charting.ProviderShiftId,

                // Passing the expected, starting version of ProviderShift
                // into Marten
                charting.Version,
                HttpContext.RequestAborted);

        // And the rest of the controller stays the same as
        // before....
    }
}
In the usage above, Marten will do a version check both at the point of FetchForWriting() using the version we passed in, and again during the call to SaveChangesAsync() to reject any changes made if there was a concurrent update to that same stream.
Lastly, Marten gives you the ability to opt into heavier, exclusive access to the ProviderShift with this option:
// We're looking up the current state of the ProviderShift aggregate
// for the designated provider
var stream = await session
    .Events
    .FetchForExclusiveWriting<ProviderShift>(
        charting.ProviderShiftId,
        HttpContext.RequestAborted);
In that last usage, we’re relying on the underlying PostgreSQL database to get us an exclusive row lock on the ProviderShift event stream such that only our current operation is allowed to write to that event stream while we have the lock. This is heavier and comes with some risk of database locking problems, but solves the concurrency issue.
So that’s concurrency protection in FetchForWriting(), but I mostly skipped over when and how that API will execute the projection logic to go from the raw events like ProviderJoined, ProviderReady, or ChartingStarted to the projected ProviderShift.
Any projection in Marten can be calculated or executed with three different “projection lifecycles”:
Live — in this case, a projection is calculated on the fly by loading the raw events in memory and calculating the current state right then and there. In the absence of any other configuration, this is the default lifecycle for the ProviderShift per stream aggregation.
Inline — a projection is updated at the time any events are appended by Marten and persisted by Marten as a document in the PostgreSQL database.
Async — a projection is updated in a background process as events are captured by Marten across the system. The projected state is persisted as a Marten document to the underlying PostgreSQL database
The first two options give you strong consistency models where the projection will always reflect the current state of the events captured to the database. Live is probably a little more optimal in the case where you have many writes, but few reads, and you want to optimize the “write” side. Inline is optimal for cases where you have few writes, but many reads, and you want to optimize the “read” side (or need to query against the projected data rather than just load by id). The Async model gives you the ability to take the work of projecting events into the aggregated state out of both the “write” and “read” side of things. This might easily be advantageous for performance and very frequently necessary for ordering or concurrency concerns.
In the case of the FetchForWriting() API, you will always have a strongly consistent view of the raw events because that API is happily wallpapering over the lifecycle for you. Live aggregation works as you’d expect, Inline aggregation works by just loading the expected document directly from the database, and Async aggregation is a hybrid model that starts from the last known persisted value for the aggregate and applies any missing events right on top of that (the async behavior was a big feature added in Marten 7).
By hiding the actual lifecycle behavior behind the FetchForWriting() signature, teams are able to experiment with different approaches to optimize their application without breaking existing code.
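As an example, switching ProviderShift from the default Live lifecycle to Inline snapshots would be a one line configuration change, and none of the FetchForWriting() usage has to know about it (a sketch using Marten’s projection registration):

builder.Services.AddMarten(opts =>
{
    // opts.Connection(...) and the rest of your configuration...

    // Persist the ProviderShift aggregate as a document on every
    // event append instead of computing it live on each read
    opts.Projections.Snapshot<ProviderShift>(SnapshotLifecycle.Inline);
});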
Summary
FetchForWriting() was built specifically to ease the usage of Marten within CQRS command handlers after seeing how much boilerplate code teams were having to use before with Marten. At this point, this is our strongly recommended approach for command handlers. Also note that this API is utilized within the Wolverine + Marten “aggregate handler workflow” usage that does even more to remove code ceremony from CQRS command handler code. To some degree, what is now Wolverine was purposely rebooted and saved from the scrap heap specifically because of that combination with Marten and the FetchForWriting API.
Personally, I’m opposed to any kind of IAggregateRepository or approach where the “write” model itself tracks the events that are applied or uncommitted. I’m trying hard to steer folks using Marten away from this still somewhat popular old approach in favor of a more Functional Programming-ish approach.
FetchForWriting could be used as part of a homegrown “Decider” pattern usage if that’s something you prefer (I think the “decider” pattern in real life usage ends up devolving into brute force procedural code through massive switch statements personally).
The “telehealth” system I mentioned before was built in real life with a hand-rolled Node.js event sourcing implementation, but that experience has had plenty of influence over later Marten work including a feature that just went into Marten over the weekend for a JasperFx client to be able to emit “side effect” actions and messages during projection updates.
I was deeply unimpressed with the existing Node.js tooling for event sourcing at that time (~2020), but I would hope it’s much better now. Marten has absolutely grown in capability in the past couple years.
Just a reminder, JasperFx Software offers support contracts and consulting services to help you get the most out of the “Critter Stack” tools (Marten and Wolverine). If you’re building server side applications on .NET, the Critter Stack is the most feature rich tool set for Event Sourcing and Event Driven Architectures around. And as I hope to prove to you in this post, Marten is a great option as a document database too!
Marten as a project started as an ultimately successful attempt to replace my then company’s usage of an early commercial “document database” with the open source PostgreSQL database — but with a small, nascent event store functionality bolted onto the side. With the exception of LINQ provider related issues, most of my attention these days is focused on the event sourcing side of things with the document database features in Marten just being a perfect complement for event projections.
This week and last though, I’ve had cause to work with a different document database option and it served to remind me that hey, Marten has a very strong technical story as a document database option. With that being said, let me get on with lionizing Marten by starting with a quick start.
Let’s say that you are building a server side .NET application with some kind of customer data and you at least start by modeling that data like so:
public class Customer
{
    public Guid Id { get; set; }

    // We'll use this later for some "logic" about how incidents
    // can be automatically prioritized
    public Dictionary<IncidentCategory, IncidentPriority> Priorities { get; set; }
        = new();

    public string? Region { get; set; }
    public ContractDuration Duration { get; set; }
}

public record ContractDuration(DateOnly Start, DateOnly End);

public enum IncidentCategory
{
    Software,
    Hardware,
    Network,
    Database
}

public enum IncidentPriority
{
    Critical,
    High,
    Medium,
    Low
}
And once you have those types, you’d like to have that customer data saved to a database in a way that makes it easy to persist, query, and load that data with minimal developmental cost while still being as robust as need be. Assuming that you have access to a running instance of PostgreSQL (it’s very Docker friendly and I tend to use that as a development default), bring in Marten by first adding a reference to the “Marten” Nuget. Next, write the following code in a simple console application that also contains the C# code from above:
using Marten;
using Newtonsoft.Json;

// Bootstrap Marten itself with default behaviors
await using var store = DocumentStore
    .For("Host=localhost;Port=5432;Database=marten_testing;Username=postgres;password=postgres");

// Build a Customer object to save
var customer = new Customer
{
    Duration = new ContractDuration(new DateOnly(2023, 12, 1), new DateOnly(2024, 12, 1)),
    Region = "West Coast",
    Priorities = new Dictionary<IncidentCategory, IncidentPriority>
    {
        { IncidentCategory.Database, IncidentPriority.High }
    }
};

// IDocumentSession is Marten's unit of work
await using var session = store.LightweightSession();
session.Store(customer);
await session.SaveChangesAsync();

// Marten assigned an identity for us on Store(), so
// we'll use that to load another copy of what was
// just saved
var customer2 = await session.LoadAsync<Customer>(customer.Id);

// Just making a pretty JSON printout
Console.WriteLine(JsonConvert.SerializeObject(customer2, Formatting.Indented));
And that’s that, we’ve got a working usage of Marten to save, then load Customer data to the underlying PostgreSQL database. Right off the bat I’d like to point out a couple things about the code samples above:
We didn’t have to do any kind of mapping from our Customer type to a database structure. Marten is using JSON serialization to persist the data to the database, and as long as the Customer type can be bi-directionally serialized to and from JSON, Marten is going to be able to persist and load the type.
We didn’t specify or do anything about the actual database structure. In its default “just get things done” settings, Marten is able to happily detect that the necessary database objects for Customer are missing in the database, and build those out for us on demand
So that’s the easiest possible quick start, but what about integrating Marten into a real .NET application? Assuming you have a reference to the Marten nuget package, it’s just an IServiceCollection.AddMarten() call as shown below from a sample web application:
builder.Services.AddMarten(opts =>
{
    // You always have to tell Marten what the connection string to the underlying
    // PostgreSQL database is, but this is the only mandatory piece of
    // configuration
    var connectionString = builder.Configuration.GetConnectionString("postgres");
    opts.Connection(connectionString);
})
// This is a mild performance optimization
.UseLightweightSessions();
At this point in the .NET ecosystem, it’s more or less idiomatic to use an Add[Tool]() method to integrate tools with your application’s IHost, and Marten tries to play within the typical .NET rules here.
I think this idiom and the generic host builder tooling has been a huge boon to OSS tool development in the .NET space compared to the old wild, wild west days. I do wish it would stop changing from .NET version to version though.
So that’s all a bunch of simple stuff, so let’s dive into something that shows off how Marten — really PostgreSQL — has a much stronger transactional model than many document databases that only support eventual consistency:
public static async Task manipulate_customer_data(IDocumentSession session)
{
    // (Assume for this contrived sample that Customer also has
    // Name and Class properties)
    var customer = new Customer
    {
        Name = "Acme",
        Region = "North America",
        Class = "first"
    };

    // Marten has "upsert", insert, and update semantics
    session.Insert(customer);

    // Partial updates to a range of Customer documents
    // by a LINQ filter
    session.Patch<Customer>(x => x.Region == "EMEA")
        .Set(x => x.Class, "First");

    // Both the above operations happen in one
    // ACID transaction
    await session.SaveChangesAsync();

    // Because Marten is ACID compliant, this query would
    // immediately work as expected even though we made that
    // broad patch up above and inserted a new document.
    var customers = await session.Query<Customer>()
        .Where(x => x.Class == "First")
        .Take(100)
        .ToListAsync();
}
That’s a completely contrived example, but the point is, because Marten is completely ACID-compliant, you can make a range of operations within transactional boundaries and not have to worry about eventual consistency issues in immediate queries that other document databases suffer from.
So what else does Marten do? Here’s a bit of a rundown because Marten has a significantly richer built in feature set than many other low level document databases:
Fine-grained control over identity map or automatic dirty checking to run lighter for better performance — or to opt into the heavier automatic dirty checking for convenience. The point here is that you have control
And quite a bit more than that, including some test automation support I really need to document better :/
And on top of everything else, because Marten is really just a fancy library on top of PostgreSQL — the most widely used database engine in the world — Marten instantly comes with a wide array of solid cloud hosting options as well as being deployable to local infrastructure on premise. PostgreSQL is also very Docker-friendly, making it a great technical choice for local development.
What’s a Document Database?
If you’re not familiar with the term “document database,” it refers to a type of NoSQL database where data is almost inevitably stored as JSON data, where the database allows you to quickly marshal objects in code to the database, then query that data later right back into the same object structures. The huge benefit of document databases at development time is being able to code much more productively because you just don’t have nearly as much friction as you do when dealing with any kind of object-relational mapping with either an ORM tool or by writing SQL and object mapping code by hand.
Wolverine puts a very high emphasis on reducing code ceremony and tries really hard to keep itself out of your application code. Wolverine is also built with testability in mind. If you’d be interested in learning more about how Wolverine could simplify your existing application code or set you up with a solid foundation for sustainable productive development for new systems, JasperFx Software is happy to work with you!
Before I get into the nuts and bolts of Wolverine sagas, let me come right out and say that I think that compared to other .NET frameworks, the Wolverine implementation of sagas requires much less code ceremony and therefore produces code that is easier to reason about. Wolverine also requires less configuration and explicit code to integrate your custom saga with Wolverine’s saga persistence. Lastly, Wolverine makes the development experience better by building in so much support for automatically configuring development environment resources like database schema objects or message broker objects. I do not believe that any other .NET tooling comes close to the developer experience that Wolverine and its “Critter Stack” buddy Marten can provide.
Let’s say that you have some kind of multi-step process in your application that might have some mix of:
Callouts to 3rd party services
Some logical steps that can be parallelized
Possibly some conditional workflow based on the results of some of the steps
A need to enforce “timeout” conditions if the workflow is taking too long — think maybe of some kind of service level agreement for your workflow
This kind of workflow might be a great opportunity to use Wolverine’s version of Sagas. Conceptually speaking, a “saga” in Wolverine is just a special message handler that needs to inherit from Wolverine’s Saga class and modify itself to track state between messages that impact the saga.
Below is a simple version from the documentation called Order:
public record StartOrder(string OrderId);

public record CompleteOrder(string Id);

public class Order : Saga
{
    // You do need this for the identity
    public string? Id { get; set; }

    // This method would be called when a StartOrder message arrives
    // to start a new Order
    public static (Order, OrderTimeout) Start(StartOrder order, ILogger<Order> logger)
    {
        logger.LogInformation("Got a new order with id {Id}", order.OrderId);

        // creating a timeout message for the saga
        return (new Order{Id = order.OrderId}, new OrderTimeout(order.OrderId));
    }

    // Apply the CompleteOrder to the saga
    public void Handle(CompleteOrder complete, ILogger logger)
    {
        logger.LogInformation("Completing order {Id}", complete.Id);

        // That's it, we're done. Delete the saga state after the message is done.
        MarkCompleted();
    }

    // Delete this order if it has not already been deleted to enforce a "timeout"
    // condition
    public void Handle(OrderTimeout timeout, ILogger<Order> logger)
    {
        logger.LogInformation("Applying timeout to order {Id}", timeout.Id);

        // That's it, we're done. Delete the saga state after the message is done.
        MarkCompleted();
    }

    public static void NotFound(CompleteOrder complete, ILogger logger)
    {
        logger.LogInformation("Tried to complete order {Id}, but it cannot be found", complete.Id);
    }
}
Order is really meant to just be a state machine where it modifies its own state in response to incoming messages and returns cascading messages (you could also use IMessageBus directly as a method argument if you prefer, but my advice is to use simple pure functions) that tell Wolverine what to do next in the multi-step process.
A new Order saga can be created by any old message handler by simply returning a type that inherits from the Saga type in Wolverine. Wolverine is going to automatically discover any public types inheriting from Saga and utilize any public instance methods following certain naming conventions (or static Create() methods) as message handlers that are assumed to modify the state of the saga objects. Wolverine itself is handling everything to do with loading and persisting the Order saga object between commands around the call to the message handler methods on the saga types.
If you’ll notice the Handle(CompleteOrder) method above, the Order is calling MarkCompleted() on itself. That will tell Wolverine that the saga is now complete, and direct Wolverine to delete the current Order saga from the underlying persistence.
As for tracking the saga id between message calls, there are naming conventions about the messages that Wolverine can use to pluck the identity of the saga, but if you’re strictly exchanging messages between a Wolverine saga and other Wolverine message handlers, Wolverine will automatically track metadata about the active saga back and forth.
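Just to illustrate those conventions, a hypothetical message like the one below could correlate back to the Order saga above simply through its property name:

// "OrderId" matches the Order saga type name + "Id" convention,
// similar to how StartOrder worked up above
public record ShipOrder(string OrderId);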
I’d also ask you to notice the OrderTimeout message that the Order saga returns as it starts. That message type is shown below:
// This message will always be scheduled to be delivered after
// a one minute delay because I guess we want our customers to be
// rushed? Goofy example code:)
public record OrderTimeout(string Id) : TimeoutMessage(1.Minutes());
Wolverine’s cascading message support allows you to return an outgoing message with a time delay — or a particular scheduled time or any other number of options — by just returning a message object. Admittedly this ties you into a little more of Wolverine, but the key takeaway I want you to notice here is that every handler method is a “pure function” with no service dependencies. Every bit of the state change and workflow logic can be tested with simple unit tests that merely work on the before and after state of the Order objects as well as the cascaded messages returned by the message handler functions. No mock objects, no fakes, no custom test harnesses, just simple unit tests. No other saga implementation in the .NET ecosystem can do that for you anywhere nearly as cleanly.
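Here’s a quick sketch of the kind of test I mean, assuming xUnit, the NullLogger from Microsoft.Extensions.Logging.Abstractions, and that Saga.IsCompleted() reflects the earlier MarkCompleted() call:

public class OrderTests
{
    [Fact]
    public void completing_an_order_marks_the_saga_as_completed()
    {
        var order = new Order { Id = "42" };

        // A pure function call with zero infrastructure
        order.Handle(new CompleteOrder("42"), NullLogger.Instance);

        Assert.True(order.IsCompleted());
    }
}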
So far I’ve only focused on the logical state machine part of sagas, so let’s jump to persistence. Wolverine has long had a simplistic saga storage mechanism with its integration with Marten, and that’s still one of the easiest and most powerful options. You can also use EF Core for saga persistence, but ick, that means having to use EF Core.
Wolverine 3.0 added a new lightweight saga persistence option for either Sql Server or PostgreSQL (without Marten or EF Core) that just stands up a little table for just a single Saga type and uses JSON serialization to persist the saga. Here’s an example:
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // This isn't actually mandatory, but you'll
        // need to do it just to make Wolverine set up
        // the table storage as part of the resource setup.
        // Otherwise, Wolverine is quite capable of standing
        // up the tables as necessary at runtime if they
        // are missing in its default configuration
        opts.AddSagaType<RedSaga>("red");
        opts.AddSagaType(typeof(BlueSaga), "blue");

        // This part is absolutely necessary just to have the
        // normal transactional inbox/outbox support and the new
        // default, lightweight saga persistence
        opts.PersistMessagesWithSqlServer(Servers.SqlServerConnectionString, "color_sagas");

        opts.Services.AddResourceSetupOnStartup();
    }).StartAsync();
Just as with the integration with Marten, Wolverine’s lightweight saga implementation is able to build the necessary database table storage on the fly at runtime if it’s missing. The “critter stack” philosophy is to optimize the all important “time to first pull request” metric — meaning that you can get a Wolverine application up fast on your local development box because it’s able to take care of quite a bit of environment setup for you.
Lastly, Wolverine 3.0 is adding optimistic concurrency checks for the Marten saga storage and the new lightweight saga persistence. That’s been an important missing piece of the Wolverine saga story.
Just for some comparison, check out some other saga implementations in .NET:
MassTransit’s saga support which somewhat inspired the Wolverine implementation and accounts for a big chunk of Marten usage through MassTransit’s usage of Marten for saga storage
NServiceBus is, to my knowledge, the oldest tool in this space, and they support sagas
I couldn’t find any support for sagas in Brighter, but feel free to correct that
If you’re planning on coming to my workshop, you’ll want .NET 8, Git, and some kind of Docker Desktop on your box to run the sample code I’ll use in the workshop. If Docker doesn’t work for you, you may want a local install of PostgreSQL and Rabbit MQ.
Hey folks, I’ll be giving the first ever workshop on building an Event Driven Architecture with the full “Critter Stack” at DevUp 2024 in St. Louis next week on Wednesday the 14th bright and early at 8:30 AM.
We’ll be working through a sample backend web service that also communicates with other headless services, using Event Sourcing within a general CQRS architectural approach with the whole “Critter Stack.” We’ll use Marten (over PostgreSQL) for our persistence strategy, using both its event sourcing support and as a document database. We’ll combine that with Wolverine as a server side framework for background processing, asynchronous messaging, and even as an alternative HTTP endpoint framework. Lastly, just for fun, there’ll be guest appearances from other JasperFx tools like Alba and Oakton for automated integration testing and command line execution respectively.
So why would you want to come to this and what might you get out of it? I’m hoping the takeaways — even if you don’t intend to use Marten or Wolverine — will be:
A good introduction to event sourcing as a technical approach and some of the real challenges you’ll face when building a system using event sourcing as a persistence strategy
An understanding of what goes into building a robust CQRS system including dealing with transient errors, observability, concurrency, and how to best segment message processing to achieve self-healing systems
Challenging the industry conventional wisdom about the efficacy of Hexagonal/Clean/Onion Architecture approaches by showing what a very low ceremony “vertical slice architecture” approach can be like with the Wolverine + Marten combination while still being robust, observable, highly testable, and still keeping infrastructure concerns out of the business logic
Some exposure to Open Telemetry and the general observability tooling for distributed systems that you absolutely want if you don’t already have it
Techniques for automating integration tests against an Event Driven Architecture
Because I’m absolutely in the business of promoting the “Critter Stack” tools, I’ll try to convince you that:
Marten is already the most robust and feature rich solution for event sourcing in the .NET ecosystem while also being arguably the easiest to get up and going with
How the Wolverine + Marten combination makes CQRS with Event Sourcing a much easier architectural pattern to use
Wolverine’s emphasis on low ceremony code approaches can help systems be more successfully maintained over time by simply having much less noise code and layering in your systems while still being robust
The “Critter Stack” has an excellent story for automated integration testing support that can do a lot to make your development efforts more successful
Both Marten & Wolverine can help your teams achieve a low “time to first pull request” by doing a lot to configure necessary infrastructure like databases or message brokers on the fly for a better development experience
I’m excited, because this is my first opportunity to do a workshop on the “Critter Stack” tools, and I think we’ve got a very compelling technical story to tell about the tools! And if nothing else, I’m looking forward to any feedback that might help us improve the tools down the line.
And for any *ahem* older folks from St. Louis in my talk, I personally thought at the time that Jorge Orta was out at first and the Cards should have won that game.
It’s been a little bit since I’ve written any kind of update on the unofficial “Critter Stack” roadmap, with the last update in February. A ton of new, important strategic features have been added since then, especially to Marten, with plenty of expansion of Wolverine to boot. Before jumping into the road map in the next section, let me indulge in a bit of retrospective about what new features or improvements have been delivered in 2024 so far.
More parsimonious usage of database connections for better scalability and improved integration with GraphQL via Hot Chocolate
Some ability to do zero down time deployments even with asynchronous projections
Blue/green deployment capabilities for “write” model projections (that’s a big deal)
Ability to add new tenanted databases at runtime for both Marten and Wolverine with zero downtime
There’s only one user so far, but CritterStackPro.Projections is the first paid add-on for Marten & Wolverine, allowing for better load distribution of asynchronous projections and event subscriptions within a clustered application
Marten has first class support for strongly typed identifiers now — which was a long requested feature that got put off because of how much effort I feared it would require (rightly, as it turned out, but it’s all done now)
At this point I feel like we’ve crossed off the vast majority of the features I thought we needed to add to Marten this year to be able to stand Marten up against basically any other event store infrastructure tooling on the whole damn planet. What that also means is that I think Marten development probably slows down to nothing but bug fixes and community contributions as folks run into things. There are still some features in the backlog that I might personally work on, but that will be in the course of some ongoing and potential JasperFx client work.
That being said, let’s talk about the rest of the year!
The Roadmap for the Back Half of 2024
Obviously, this roadmap is just a snapshot in time, and client needs, community requests, and who knows what changes from Microsoft or other related tools could easily shift priorities away from any of this. All that being said, this is the current vision of the next big steps from the Critter Stack core team and me.
RavenDb integration with Wolverine. This is some client sponsored work that I’m hoping will set Wolverine up for easier integration with other database engines in the near future
“Critter Watch” — an ongoing effort to build out a management and monitoring console application for any combination of Marten, Wolverine, and future critters. This will be a paid product. We’ve already had a huge amount of feedback from Marten & Wolverine users, and I’m personally eager to get this moving in the 3rd quarter
Marten 8.0 and Wolverine 4.0 — the goal here is mostly a rearrangement of dependencies underneath both Marten & Wolverine to eliminate duplication and spin out a lot of the functionality around projections and the async daemon. This will also be a significant effort to spin off some new helper libraries for the “Critter Stack” to enable the next bullet point
“Ermine” — a port of Marten’s event store capabilities and a subset of its document database capabilities to SQL Server. My thought is that this will share a ton of guts with Marten. I’m voting that Ermine will have direct integration with Wolverine from the very beginning as well for subscriptions and middleware similar to the existing Wolverine.Marten integration
If Ermine goes halfway well, I’d love to attempt a CosmosDb and maybe a DynamoDb backed event store in 2025
As usual, that list is a guess and unlikely to ever play out exactly that way. All the same though, there’s my hopes and dreams for the next 6 months or so.
Did I miss something you were hoping for? Does any of that concern you? Let me and the rest of the Critter Stack community know either here or anywhere in our Discord room!
There’s been a definite theme lately about increasing the performance and scalability of Marten, as evident (I hope) in my post last week describing new optimization options in Marten 7.25. Today I was able to push a follow up feature that got missed in that release that allows Marten users to utilize PostgreSQL table partitioning behind the scenes for document storage (7.25 added a specific utilization of table partitioning for the event store). The goal here is that, in selected scenarios, this enables PostgreSQL to mostly work with far smaller tables than it would otherwise, and hence perform better in your system.
Think of these common usages of Marten:
You’re using soft deletes in Marten against a document type, and the vast majority of the time Marten is putting a default filter in for you to only query for “not deleted” documents
You are aggressively using the Marten feature to mark event streams as archived when whatever process they model is complete. In this case, Marten is usually querying against the event table using a value of is_archived = false
You’re using “conjoined” multi-tenancy within a single Marten database, and most of the time your system is naturally querying for data from only one tenant at a time
Maybe you have a table where you’re frequently querying against a certain date property or querying for documents by a range of expected values
In all of those cases, it would be more performant to opt into PostgreSQL table partitioning where PostgreSQL is separating the storage for a single, logical table into separate “partition” tables. Again, in all of those cases above we can enable PostgreSQL + Marten to largely be querying against a much smaller table partition than the entire table would be — and querying against smaller database tables can be hugely more performant than querying against bigger tables.
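As an aside, the stream archiving mentioned in the second bullet above is just a one liner against the session:

// Mark the stream as archived, then commit
session.Events.ArchiveStream(streamId);
await session.SaveChangesAsync();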
The Marten community has been kicking around the idea of utilizing table partitioning for years (since 2017 by my sleuthing last week through the backlog), but it always got kicked down the road because of the perceived challenges in supporting automatic database migrations for partitions the same way we do today in Marten for every other database schema object (and in Wolverine too for that matter).
Thanks to an engagement with a JasperFx customer who has some pretty extreme scalability needs, I was able to spend the time last week to break through the change management challenges with table partitioning, and finally add table partitioning support for Marten.
As for what’s possible, let’s say that you want to create table partitioning for a certain very large table in your system for a particular document type. Here’s the new option for 7.26:
var store = DocumentStore.For(opts =>
{
    opts.Connection("some connection string");

    // Set up table partitioning for the User document type
    opts.Schema.For<User>()
        .PartitionOn(x => x.Age, x =>
        {
            x.ByRange()
                .AddRange("young", 0, 20)
                .AddRange("twenties", 21, 29)
                .AddRange("thirties", 30, 39);
        });

    // Or use pg_partman to manage partitioning outside of Marten
    opts.Schema.For<User>()
        .PartitionOn(x => x.Age, x =>
        {
            x.ByExternallyManagedRangePartitions();

            // or instead with list
            x.ByExternallyManagedListPartitions();
        });

    // Or use PostgreSQL HASH partitioning and split the users over multiple tables
    opts.Schema.For<User>()
        .PartitionOn(x => x.UserName, x =>
        {
            x.ByHash("one", "two", "three");
        });

    opts.Schema.For<Issue>()
        .PartitionOn(x => x.Status, x =>
        {
            // There is a default partition for anything that doesn't fall into
            // these specific values
            x.ByList()
                .AddPartition("completed", "Completed")
                .AddPartition("new", "New");
        });
});
To use the “hot/cold” storage on soft-deleted documents, you have this new option:
var store = DocumentStore.For(opts =>
{
    opts.Connection("some connection string");

    // Opt into partitioning for one document type
    opts.Schema.For<User>().SoftDeletedWithPartitioning();

    // Opt into partitioning and an index for one document type
    opts.Schema.For<User>().SoftDeletedWithPartitioningAndIndex();

    // Opt into partitioning for all soft-deleted documents
    opts.Policies.AllDocumentsSoftDeletedWithPartitioning();
});
And similarly, here are the options for partitioning the “conjoined” multi-tenancy storage by tenant id (this snippet hangs off a StoreOptions instance called storeOptions):

storeOptions.Policies.AllDocumentsAreMultiTenantedWithPartitioning(x =>
{
    // Selectively by LIST partitioning
    x.ByList()
        // Adding explicit table partitions for specific tenant ids
        .AddPartition("t1", "T1")
        .AddPartition("t2", "T2");

    // OR use LIST partitioning, but allow the partition tables to be
    // controlled outside of Marten by something like pg_partman
    // https://github.com/pgpartman/pg_partman
    x.ByExternallyManagedListPartitions();

    // OR just spread out the tenant data by tenant id through
    // HASH partitioning
    // This is using three different partitions with the supplied
    // suffix names
    x.ByHash("one", "two", "three");

    // OR partition by tenant id based on ranges of tenant id values
    x.ByRange()
        .AddRange("north_america", "na", "nazzzzzzzzzz")
        .AddRange("asia", "a", "azzzzzzzz");

    // OR use RANGE partitioning with the actual partitions managed
    // externally
    x.ByExternallyManagedRangePartitions();
});
Summary
Your mileage will vary of course depending on how big your database is and how you really query the database, but at least in some common cases, the Marten community is pretty excited for the potential of table partitioning to improve Marten performance and scalability.
Just a reminder, JasperFx Software offers support contracts and consulting services to help you get the most out of the “Critter Stack” tools (Marten and Wolverine). If you’re building server side applications on .NET, the Critter Stack is the most feature rich tool set for Event Sourcing and Event Driven Architectures around.
The theme of the last couple months for the Marten community and I has been a lot of focus on improving Marten’s event sourcing feature set to be able to reliably handle very large data loads. With that being said, Marten 7.25 was released today with a huge amount of improvements around its performance, scalability, and reliability under very heavy loads (we’re talking about databases with hundreds of millions of events).
Before I get into the details, there’s a lot of thanks and credit to go around:
Core team member JT made several changes to reduce the amount of object allocations that Marten does at runtime in SQL generation — and basically every operation it does involves SQL generation
Ben Edwards contributed several ideas, important feedback, and some optimization pull requests toward this release
Babu made some improvements to our CI pipeline that made it a lot easier for me to troubleshoot the work I was doing
a-shtifanov-laya did some important load testing harness work that helped quite a bit to validate the work in this release
Urbancsik Gergely for doing a lot of performance and load testing with Marten that helped tremendously
And I’ll be giving some personal thanks to a couple JasperFx clients who enabled me to spend so much time on this effort
And now, the highlights for event store performance, scalability, and reliability improvements — most of which are “opt in” configuration items so as to not disturb existing users:
The new “Quick Append” option is completely usable and appears from testing to be about 2X as fast as the V4-V7 “Rich” appending process. More than that, opting into the quick append mechanism appears to eliminate the event “skipping” problem with asynchronous projections or event subscriptions that some people have experienced under very heavy loads. Lastly, I originally prioritized this work because I think it will alleviate issues that some people run into with concurrent operations trying to append events to the same event streams
Marten can create a Hot/Cold Storage mechanism around its event store by leveraging PostgreSQL native table partitioning. There’s some work on the user’s part to mark event streams as archived for this to matter, but this is potentially a huge win for Marten scalability. A later Marten release will add partitioning support to Marten document tables as well
There are several optimizations inside of even the classic, “rich” event appending that reduce the number of network round trips happening at runtime — and that’s a good thing, because network round trips are evil!
Outside of the event store improvements, Marten also got a new “Specification” alternative called “query plans” for reusable query logic for when Marten’s compiled query feature won’t work. The goal with this feature is to help a JasperFx client migrate off of Clean Architecture style repository wrapper abstractions in a way that doesn’t cause code duplication, while also setting them up to utilize Marten’s batch query feature for much more performant code.
Summary
I’m still digging out from a very good family vacation, but man, getting this stuff out feels really good. The Marten community is very vibrant right now, with a lot of community engagement that’s driving the tool’s capabilities into much more serious system territory. The “hot/cold storage” feature that just went in has been in the Marten backlog since 2017, and I’m thrilled to finally see that make it in.