Update on Jasper v2 with an actual alpha

First off, my super power (stop laughing at me!) is having a much longer attention span than the average software developer. In positive ways, this has enabled me to tackle very complex problems. In negative ways, I’ve probably wasted a tremendous amount of time in my career working on systems or projects long after they had already failed and I just wouldn’t admit it.

So late last year I started working on a reboot of Jasper, my attempt at creating a “next generation” messaging framework for .Net. The goal of Jasper has changed quite a bit since I started jotting down notes for it in 2014, but the current vision is to be a highly productive command execution engine and asynchronous messaging tool for .Net with less code ceremony than the currently popular tools in this space.

I kicked out a Jasper 2.0.0-alpha-1 release this week, just barely in time for my talk at That Conference yesterday (but didn’t end up showing it at all). Right now the intermediate goals to get to a fully rebooted Jasper 2.0 are to:

  • Finish the baked in Open Telemetry support. It’s there, but there are holes in what’s being captured
  • Get the interop with MassTransit via Rabbit MQ working for more scenarios. I’ve got a successful proof of concept of bi-directional interaction between Jasper and MassTransit services
  • Finish documentation for the new 2.0 version. I moved the docs to VitePress and started re-writing the docs from scratch, and that takes time

The first two bullet points are all about getting Jasper ready to be something I could dogfood at work.

While I absolutely intend both Jasper and Marten to be perfectly usable without the other, there’s also going to be some specific integration between Jasper and Marten to create a full blown, opinionated CQRS stack for .Net development (think Axon for .Net, but hopefully with much less code ceremony). For this combination, the Marten team is talking about adding messaging subscriptions for the Marten event store functionality, Jasper middleware to reduce repetitive CQRS handler code, and using the outbox functionality in Jasper to also integrate Marten with external messaging infrastructure.

I’ll kick out actual content about all this in the next couple weeks, but a couple folks have noticed the big uptick in Jasper work and asked what was going on, so here’s a little blog post on it:)

Resetting Marten Database State Between Tests

TL;DR: Marten has a new method in V5 called ResetAllData() that’s very handy for rolling back database state to a known point in automated tests.

I’m a big believer in utilizing intermediate level integration tests. By this I mean the middle layer of the typical automated testing pyramid where you’re most definitely testing through your application’s infrastructure, but not necessarily running the system end to end.

Now, any remotely successful test automation strategy means that you have to be able to exert some level of control over the state of the system leading into a test because all automated tests need the combination of known inputs and expected outcomes. To that end, Marten has built in support for completely rolling back the state of a Marten-ized database between tests that I’ll be demonstrating in this post.

When I’m working on a system that uses a relational database, I’m a fan of using Respawn from Jimmy Bogard that helps you rollback the state of a database to its beginning point as part of integration test setup. Likewise, Marten has the “clean” functionality for the same purpose:

public async Task clean_out_documents(IDocumentStore store)
{
    // Completely remove all the database schema objects related
    // to the User document type
    await store.Advanced.Clean.CompletelyRemoveAsync(typeof(User));

    // Tear down and remove all Marten related database schema objects
    await store.Advanced.Clean.CompletelyRemoveAllAsync();

    // Deletes all the documents stored in a Marten database
    await store.Advanced.Clean.DeleteAllDocumentsAsync();

    // Deletes all of the persisted User documents
    await store.Advanced.Clean.DeleteDocumentsByTypeAsync(typeof(User));

    // For cases where you may want to keep some document types,
    // but eliminate everything else. This is here specifically to support
    // automated testing scenarios where you have some static data that can
    // be safely reused across tests
    await store.Advanced.Clean.DeleteDocumentsExceptAsync(typeof(Company), typeof(User));

    // And get at event storage too!
    await store.Advanced.Clean.DeleteAllEventDataAsync();
}

So that’s tearing down data, but many if not most systems will need some baseline reference data to function. We’re still in business though, because Marten has long had a concept of initial data applied to a document store on its startup with the IInitialData interface. To illustrate that interface, here’s a small sample implementation:

    internal class BaselineUsers : IInitialData
    {
        public async Task Populate(IDocumentStore store, CancellationToken cancellation)
        {
            using var session = store.LightweightSession();
            session.Store(new User
            {
                UserName = "magic",
                FirstName = "Earvin",
                LastName = "Johnson"
            });

            session.Store(new User
            {
                UserName = "sircharles",
                FirstName = "Charles",
                LastName = "Barkley"
            });

            await session.SaveChangesAsync(cancellation);
        }
    }

And the BaselineUsers type could be applied like this during initial application configuration:

using var host = await Host.CreateDefaultBuilder()
    .ConfigureServices(services =>
    {
        services.AddMarten(opts =>
            {
                opts.Connection("some connection string");
            })
            // Apply the BaselineUsers data set on application startup
            .InitializeWith(new BaselineUsers());
    }).StartAsync();

Or, maybe a little more likely, if you have some reference data that’s only applicable for your automated testing, we can attach our BaselineUsers data set to Marten, but **only in our test harness** with usage like this:

// First, delegate to your system under test project's
// Program.CreateHostBuilder() method to get the normal system configuration
var host = await Program.CreateHostBuilder(Array.Empty<string>())

    // But next, apply initial data to Marten that we need just for testing
    .ConfigureServices(services =>
    {
        // This will add the initial data to the DocumentStore
        // on application startup
        services.InitializeMartenWith<BaselineUsers>();
    }).StartAsync();
For some background, as of V5 the mechanics for the initial data set feature moved to executing in an IHostedService, so there’s no more issue of asynchronous code being called from synchronous code with the dreaded “will it deadlock or am I feeling lucky?” GetAwaiter().GetResult() mechanics.
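To picture what that means, here’s a rough sketch (not Marten’s actual internals — the class name and registration are my own invention for illustration) of how an IInitialData set can be awaited from an IHostedService during host startup:

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Marten;
using Marten.Schema;
using Microsoft.Extensions.Hosting;

// Hypothetical hosted service: every registered IInitialData set is
// awaited on a genuinely asynchronous startup path, so there's no
// GetAwaiter().GetResult() anywhere
internal class InitialDataService : IHostedService
{
    private readonly IDocumentStore _store;
    private readonly IEnumerable<IInitialData> _dataSets;

    public InitialDataService(IDocumentStore store, IEnumerable<IInitialData> dataSets)
    {
        _store = store;
        _dataSets = dataSets;
    }

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        foreach (var data in _dataSets)
        {
            await data.Populate(_store, cancellationToken);
        }
    }

    public Task StopAsync(CancellationToken cancellationToken) => Task.CompletedTask;
}
```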

Putting it all together with xUnit

The way I like to do integration testing with xUnit (the NUnit mechanics would involve static members, but the same concepts of lifetime still apply) is to have a “fixture” class that will bootstrap and hold on to a shared IHost instance for the system under test between tests like this one:

    public class MyAppFixture : IAsyncLifetime
    {
        public IHost Host { get; private set; }

        public async Task InitializeAsync()
        {
            // First, delegate to your system under test project's
            // Program.CreateHostBuilder() method to get the normal system configuration
            Host = await Program.CreateHostBuilder(Array.Empty<string>())

                // But next, apply initial data to Marten that we need just for testing
                .ConfigureServices(services =>
                {
                    services.InitializeMartenWith<BaselineUsers>();
                }).StartAsync();
        }

        public async Task DisposeAsync()
        {
            await Host.StopAsync();
            Host.Dispose();
        }
    }

Next, I like to have a base class for integration tests that in this case will consume the MyAppFixture above, but also reset the Marten database between tests with the new V5 IDocumentStore.Advanced.ResetAllData() like this one:

    public abstract class IntegrationContext : IAsyncLifetime
    {
        protected IntegrationContext(MyAppFixture fixture)
        {
            Services = fixture.Host.Services;
        }

        public IServiceProvider Services { get; set; }

        public Task InitializeAsync()
        {
            var store = Services.GetRequiredService<IDocumentStore>();

            // This cleans out all existing data, and reapplies
            // the initial data set before each test
            return store.Advanced.ResetAllData();
        }

        public virtual Task DisposeAsync()
        {
            return Task.CompletedTask;
        }
    }

Do note that I left out some xUnit ICollectionFixture mechanics that you might need to make sure that MyAppFixture is really shared between test classes. See xUnit’s Shared Context documentation.
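For completeness, that collection fixture wiring looks something like this (a minimal sketch assuming the MyAppFixture and IntegrationContext classes above; the collection name is arbitrary):

```csharp
using Xunit;

// The collection definition is what tells xUnit to build one MyAppFixture
// and share it across every test class in the "integration" collection
[CollectionDefinition("integration")]
public class IntegrationCollection : ICollectionFixture<MyAppFixture>
{
}

// Any test class tagged with the collection name receives the shared fixture
[Collection("integration")]
public class UserPersistenceTests : IntegrationContext
{
    public UserPersistenceTests(MyAppFixture fixture) : base(fixture)
    {
    }
}
```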

Improving the Development and Production Time Experience with Marten V5

Marten V5 dropped last week, with significant new features for multi-tenancy scenarios and enabling users to use multiple Marten document stores in one .Net application. A big chunk of the V5 work was mostly behind the scenes trying to address user feedback from the much larger V4 release late last year. As always, the Marten documentation is here.

First, why didn’t you just…

I’d advise developers and architects to largely eliminate the word “just” and any other lullaby language from their vocabulary when talking about technical problems and solutions.

That being said:

  • Why didn’t you just use source generators instead? Most of this was done before source generators were released, and source generators are limited to information that’s available at compile time. The dynamic code generation in Marten is potentially using information that is only available at run time
  • Why didn’t you just use IL generation instead? Because I despise working directly with IL and I think that would have dramatically curtailed what was easily possible. It’s also possible that we end up having to go there eventually.

Setting the Stage

Consider this simplistic code to start a new Marten DocumentStore against a blank database and persist a single User document:

var store = DocumentStore.For("connection string");

await using var session = store.LightweightSession();
var user = new User
{
    UserName = "pmahomes",
    FirstName = "Patrick",
    LastName = "Mahomes"
};

session.Store(user);
await session.SaveChangesAsync();

Hopefully that code is simple enough for new users to follow and immediately start being productive with Marten. The major advantage of document databases over a more traditional RDBMS, with or without an ORM, is the ability to just get stuff done without spending a lot of time configuring databases or object-to-database mappings, or writing anywhere near as much underlying code just to read and write data. To that end, there’s a lot of stuff going on behind the scenes of that code up above.

First off, there’s some automatic database schema management. In the default configuration used up above, Marten is quietly checking the underlying database on the first usage of the User document type to see if the database matches Marten’s configuration for the User document, and applies database migrations at runtime to change the database as necessary.

Secondly, there’s some runtime code generation happening to “bake in” the internal handling of how User documents are read from and written to the database. It’s not apparent here, but there are a lot of knobs you can twist in Marten to change how a document type is stored and retrieved from the database (soft deletes, turning on more metadata tracking, turning off default metadata tracking to be leaner, etc.). That behavior even varies between the lightweight session I used up above and the behavior of IDocumentStore.OpenSession() that adds identity map behavior to the session. To be more efficient overall, Marten generates the tightest possible C# code to handle each document type, then in the default mode, actually compiles that code in memory with Roslyn and uses the dynamically built assembly.
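If you’ve never seen the underlying mechanism, here’s a minimal illustration of in-memory compilation with Roslyn and loading the resulting assembly. This is emphatically not Marten’s real codegen pipeline — just the bare technique, and it assumes the Microsoft.CodeAnalysis.CSharp package is referenced:

```csharp
using System;
using System.IO;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

var source = @"
public class Greeter
{
    public string Greet() => ""Hello from generated code"";
}";

var syntaxTree = CSharpSyntaxTree.ParseText(source);

// A real setup needs more metadata references; this is the minimal idea
var references = new[]
{
    MetadataReference.CreateFromFile(typeof(object).Assembly.Location)
};

var compilation = CSharpCompilation.Create(
    "GeneratedAssembly",
    new[] { syntaxTree },
    references,
    new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

// Emit the compiled IL to memory rather than to disk
using var stream = new MemoryStream();
var result = compilation.Emit(stream);
if (!result.Success) throw new InvalidOperationException("Compilation failed");

// Load the dynamically built assembly and use the new type
var assembly = Assembly.Load(stream.ToArray());
var greeter = assembly.CreateInstance("Greeter");
```

The “cold start” cost discussed below comes from exactly this kind of parse/compile/load cycle happening on first use.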

Cool, right? I’d argue that Marten can make teams be far more productive than they would be with the more typical EF Core or Dapper backed approach. Now let’s move on to the unfortunately very real downsides of Marten’s approach and what we’ve done to improve matters:

  • The dynamic Roslyn code generation can sometimes incur a major “cold start” issue on the very first usage. It’s definitely not consistent, as some people do not see any noticeable impact and other folks tell me they get a 9 second delay on the first usage. This cold start issue is especially problematic for folks using Marten in a Serverless architecture
  • The dynamically generated code can’t be used for any kind of potentially valuable AOT optimization
  • Roslyn usage sometimes causes a big ol’ memory leak no matter what we try. This isn’t consistent, so I don’t know why
  • The database change tracking does do some in memory locking, and that’s been prone to deadlock issues in some flavors of .Net (Blazor, WPF)
  • Some of you won’t want to give your application rights to modify a database at runtime
  • In Marten V4 there were a few too many places where Marten was executing the database change detection asynchronously, but from within synchronous calls using the dreaded .GetAwaiter().GetResult() approach. Occasional deadlock issues occurred, mostly in Marten usage within Blazor.

Database Migration Improvements

Alright, let’s tackle the database migration issues first. Marten has long had some command line support so that you could detect and apply any outstanding database changes from your application itself with this call:

dotnet run -- marten-apply

If you use the command line tooling for migrations, you can now optimize Marten to just turn off all runtime database migrations like so:

using var host = Host.CreateDefaultBuilder()
    .ConfigureServices(services =>
    {
        services.AddMarten(opts =>
        {
            opts.Connection("connection string");
            opts.AutoCreateSchemaObjects = AutoCreate.None;
        });
    }).Build();
Other folks won’t want to use the command line tooling, so there’s another option to do all database migrations once at application startup, but otherwise completely eliminate all other potential locking in Marten V5. This time I have to use the IHost integration:

using var host = Host.CreateDefaultBuilder()
    .ConfigureServices(services =>
    {
        services.AddMarten(opts =>
            {
                opts.Connection("connection string");

                // Mild compromise, now I've got to tell
                // Marten about the User document
                opts.RegisterDocumentType<User>();
            })

            // This tells the app to do all database migrations
            // at application startup time
            .ApplyAllDatabaseChangesOnStartup();
    }).Build();
In case you’re wondering, this option is safe to use even if you have multiple application nodes starting up simultaneously. The V5 version here relies on global locks in Postgresql itself to prevent simultaneous database changes that previously resulted in interestingly chaotic failure:(

Pre-building the Generated Types

Now, onto dealing with the dynamic codegen aspect of things. V4 introduced a “build types ahead of time” model where you can generate all the dynamic code with this command line call:

dotnet run -- codegen write

You can now completely dodge the runtime code generation issue by this sequence of events:

  1. In your deployment scripts, run dotnet run -- codegen write first
  2. Compile your application, which will embed the newly generated code right into your application’s entry assembly
  3. Use the below setting to completely disable all dynamic codegen:
using var host = Host.CreateDefaultBuilder()
    .ConfigureServices(services =>
    {
        services.AddMarten(opts =>
        {
            opts.Connection("connection string");

            // Turn off all dynamic code generation, but this
            // will blow up if the necessary type isn't compiled
            // into the application's entry assembly
            opts.GeneratedCodeMode = TypeLoadMode.Static;
        });
    }).Build();

Again though, this depends on you having all document types registered with Marten instead of depending on runtime discovery as we did in the very first sample in this post — and that’s a bit of friction. What we’ve found is that folks have found the original pre-built generation model to be clumsy, so we went back to the drawing board for Marten V5 and came up with the…

“Auto” Generated Code Mode

For V5, we have the option shown below:

using var host = Host.CreateDefaultBuilder()
    .ConfigureServices(services =>
    {
        services.AddMarten(opts =>
        {
            opts.Connection("connection string");

            // Use pre-built code if it exists, or
            // generate code if it doesn't and "just work"
            opts.GeneratedCodeMode = TypeLoadMode.Auto;
        });
    }).Build();

My thinking here is that you’d just keep this on all the time, and as long as you’re running the application locally or through your integration test suite (you have one of those, right?), you’d have the dynamic types written to your main project’s code automatically (in an /Internal/Generated folder). Unless you purposely add those to your source control’s ignore list, that code will also be checked in. Woohoo, right?

Now, finally let’s put this all together and bundle all of what I would recommend as Marten best practices into the new…

Optimized Artifact Workflow

New in Marten V5 is what I named the “optimized artifact workflow” (I say “I” because I don’t think other folks like the name:)) as shown below:

using var host = Host.CreateDefaultBuilder()
    .ConfigureServices(services =>
    {
        services.AddMarten(opts =>
            {
                opts.Connection("connection string");

                // In testing harnesses, or with AWS Lambda / Azure Functions,
                // you may have to help out .Net by explicitly setting
                // the main application assembly
            })

            // This is the call you want!
            .OptimizeArtifactWorkflow(TypeLoadMode.Static);
    }).Build();

With the OptimizeArtifactWorkflow(TypeLoadMode.Static) usage above, Marten runs with automatic database management and “Auto” code generation if the host’s environment name is “Development,” as it would typically be on a local developer box. In “Production” mode, Marten disables all automatic database management at runtime besides the initial database change application at startup, and also turns off all dynamic code generation with the assumption that all necessary types can be found in the entry assembly.

The goal here was to have a quick setting that optimized Marten usage in both development and production time without having to add in a bunch of nested conditional logic for IHostEnvironment.IsDevelopment() throughout the IHost configuration code.
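To make that concrete, here’s roughly the hand-written conditional setup the single call saves you from — a sketch using only the settings discussed in this post, not a verbatim reproduction of what the method does internally:

```csharp
using var host = Host.CreateDefaultBuilder()
    .ConfigureServices((context, services) =>
    {
        services.AddMarten(opts =>
        {
            opts.Connection("connection string");

            if (context.HostingEnvironment.IsDevelopment())
            {
                // Local development: migrate the database eagerly and
                // generate handler code on the fly
                opts.AutoCreateSchemaObjects = AutoCreate.CreateOrUpdate;
                opts.GeneratedCodeMode = TypeLoadMode.Auto;
            }
            else
            {
                // Production: no runtime migrations, pre-built types only
                opts.AutoCreateSchemaObjects = AutoCreate.None;
                opts.GeneratedCodeMode = TypeLoadMode.Static;
            }
        });
    }).Build();
```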

Exterminating Sync over Async Calls

Back to the very original sample code:

var store = DocumentStore.For("connection string");

await using var session = store.LightweightSession();
var user = new User
{
    UserName = "pmahomes",
    FirstName = "Patrick",
    LastName = "Mahomes"
};

session.Store(user);
await session.SaveChangesAsync();

In Marten V4, the first call to session.Store(user) would trigger the database schema detection, which behind the scenes would end up doing a .GetAwaiter().GetResult() trick to call asynchronous code within the synchronous Store() command (not gonna get into that here, but we eliminated all synchronous database schema detection functionality for unrelated reasons in V4).

In V5, we rewired a lot of the internal guts such that the database schema detection is happening instead in the call to IDocumentSession.SaveChangesAsync(), which is of course, asynchronous. That allowed us to eliminate usages of “sync over async” calls. Likewise, we made similar changes throughout other areas of Marten.
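Here’s the shape of that difference as a contrast sketch. EnsureStorageExistsAsync is a hypothetical stand-in method name, not Marten’s real internal API — the point is only where the asynchronous work gets awaited:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

internal class SessionSketch
{
    // V4-era shape: a synchronous Store() secretly waits on async work
    public void Store(object document)
    {
        // Sync over async: can deadlock depending on the SynchronizationContext
        EnsureStorageExistsAsync(document.GetType()).GetAwaiter().GetResult();
        // ... enqueue the document for persistence ...
    }

    // V5-era shape: the schema detection waits for a path that is already async
    public async Task SaveChangesAsync(CancellationToken token = default)
    {
        await EnsureStorageExistsAsync(typeof(object), token);
        // ... flush pending operations to the database ...
    }

    // Stand-in for the real database schema detection work
    private Task EnsureStorageExistsAsync(Type documentType, CancellationToken token = default)
        => Task.CompletedTask;
}
```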


The hope here is that we can make our users be more successful with Marten, and sidestep the problems our users have had specifically with using Marten with AWS Lambda, Azure Functions, Blazor, and inside of WPF applications. I’m also hoping that the OptimizeArtifactWorkflow() usage greatly simplifies the usage of Marten “best practices.”

Working with Multiple Marten Databases in One Application

Marten V5 dropped last week. I covered the new “database per tenant” strategy for multi-tenancy in my previous blog post. Closely related to that feature is the ability to register and work with multiple Marten databases from a single .Net system, and that’s what I want to talk about today.

Let’s say that for whatever reason (but you know there’s some legacy in there somehow), our application is mostly persisted in its own Marten database, but also needs to interact with a completely separate “Invoicing” database on a different database server and having a completely different configuration. With Marten V5 we can register an additional Marten database by first writing a marker interface for that other database:

    // These marker interfaces *must* be public
    public interface IInvoicingStore : IDocumentStore
    {
    }

And now we can register and configure a completely separate Marten database in our .Net system with the AddMartenStore<T>() usage shown below:

using var host = Host.CreateDefaultBuilder()
    .ConfigureServices(services =>
    {
        // You can still use AddMarten() for the main document store
        // of this application
        services.AddMarten("some connection string");

        services.AddMartenStore<IInvoicingStore>(opts =>
            {
                // All the normal options are available here
                opts.Connection("different connection string");

                // more configuration
            })

            // Optionally apply all database schema
            // changes on startup
            .ApplyAllDatabaseChangesOnStartup()

            // Run the async daemon for this database
            .AddAsyncDaemon(DaemonMode.HotCold)

            // Use IInitialData
            .InitializeWith(new DefaultDataSet())

            // Use the V5 optimized artifact workflow
            // with the separate store as well
            .OptimizeArtifactWorkflow(TypeLoadMode.Static);
    }).Build();
So here are a few things to talk about from that admittedly busy code sample above:

  1. The IInvoicingStore will be registered in your underlying IoC container with singleton scoping. Marten is quietly making a concrete implementation of your interface for you, similar to how Refit works if you’re familiar with that library.
  2. We don’t yet have a way to register a matching IDocumentSession or IQuerySession type to go with the separate document store. My thought on that is to wait until folks ask for it.
  3. The separate store could happily connect to the same database with a different database schema or connect to a completely different database server altogether
  4. You are able to separately apply all detected database changes on startup
  5. The async daemon can be enabled completely independently for the separate document store
  6. The IInitialData model can be used in isolation with the separate document store for baseline data
  7. The new V5 “optimized artifact workflow” model can be enabled explicitly on each separate document store. This will be the subject of my next Marten related blog post.
  8. It’s not shown up above, but if you really wanted to, you could make the separate document stores use a multi-tenancy strategy with multiple databases
  9. The Marten command line tooling is “multiple database aware,” meaning that it is able to apply changes or assert the configuration on all the known databases at one time or by selecting specific databases by name. This was the main reason the Marten core team did the separate document store story at the same time as the database per tenant strategy.
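For point 1 above, the concrete type Marten builds behind the scenes can be pictured roughly like this. This is an illustration only — the class name is invented here, and the real generated code differs:

```csharp
using Marten;

// Hypothetical picture of the generated store: a DocumentStore subclass
// that also implements your public marker interface, registered as a
// singleton in the IoC container
internal class InvoicingStoreImplementation : DocumentStore, IInvoicingStore
{
    public InvoicingStoreImplementation(StoreOptions options) : base(options)
    {
    }
}
```

Because the marker interface must be visible to that generated type, this is also why the interfaces have to be public.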

As I said earlier, we have a service registration for a fully functional DocumentStore implementing our IInvoicingStore that can be injected as a constructor dependency as shown in an internal service of our application:

public class InvoicingService
{
    private readonly IInvoicingStore _store;

    // IInvoicingStore can be injected like any other
    // service in your IoC container
    public InvoicingService(IInvoicingStore store)
    {
        _store = store;
    }

    public async Task DoSomethingWithInvoices()
    {
        // Important to dispose the session when you're done
        // with it
        await using var session = _store.LightweightSession();

        // do stuff with the session you just opened
    }
}

This feature and the multi-tenancy with a database per tenant have been frequent feature requests by Marten users, and it made a lot of sense to tackle them together in V5 because there was quite a bit of overlap in the database change management code to support both. I would very strongly state that a single database should be completely owned by one system, but I don’t know how I really feel about a single system working with multiple databases. Regardless, it comes up often enough that I’m glad we have something in Marten.

I worked with a client system some years back that was a big distributed monolith where the 7-8 separate Windows services all talked to the same 4-5 Marten databases, and we hacked together something similar to the new formal support in Marten V5 to accommodate that. I do not recommend getting yourself into that situation though:-)

Multi-Tenancy with Marten

“Multitenancy is a reference to the mode of operation of software where multiple independent instances of one or multiple applications operate in a shared environment. The instances (tenants) are logically isolated, but physically integrated.” — Gartner Glossary

In this case, I’m referring to “multi-tenancy” in regards to Marten‘s ability to deploy one logical system where the data for each client, organization, or “tenant” is segregated such that users are only ever reading or writing to their own tenant’s data — even if that data is all stored in the same database.

In my research and experience, I’ve really only seen three main ways that folks handle multi-tenancy at the database layer (and this is going to be admittedly RDBMS-centric here):

  1. Use some kind of “tenant id” column in every single database table, then do something behind the scenes in the application layer to always be filtering on that column based on the current user. Marten has supported what I named the “Conjoined” model since very early versions.
  2. Separate database schema per tenant within the same database. This model is very unlikely to ever be supported by Marten because Marten compiles database schema names into generated code in many, many places.
  3. Using a completely separate database per tenant with identical structures. This approach gives you the most complete separation of data between tenants, and could easily give your system much more scalability when the database is your throughput bottleneck. While you could — and many folks did — roll your own version of “tenant per database” with Marten, it wasn’t supported out of the box.
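For comparison, the existing “Conjoined” model from point 1 can be sketched like this (a minimal example assuming the User document type from earlier in this post):

```csharp
using Marten;

var store = DocumentStore.For(opts =>
{
    opts.Connection("connection string");

    // Opt the User document into the conjoined model: every User row
    // carries a tenant id column that Marten filters on automatically
    opts.Schema.For<User>().MultiTenanted();
});

// Sessions are scoped to one tenant id; all reads and writes through
// this session are filtered to "tenant1"
await using var session = store.LightweightSession("tenant1");
session.Store(new User { UserName = "magic" });
await session.SaveChangesAsync();
```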

But, drum roll please, Marten V5 that dropped just last week adds out of the box support for doing multi-tenancy with Marten using a separate database for each tenant. Let’s jump right into the simplest possible usage. Let’s say that we have a small system where all we want is for:

  1. Tenants “tenant1” and “tenant2” to be stored in a database named “database1”
  2. Tenant “tenant3” should be stored in a database named “tenant3”
  3. Tenant “tenant4” should be stored in a database named “tenant4”

And that’s that. Just three databases that are known at bootstrapping time. Jumping into the configuration code in a small .Net 6 web api project gives us this code:

var builder = WebApplication.CreateBuilder(args);

var db1ConnectionString = builder.Configuration
    .GetConnectionString("database1");

var tenant3ConnectionString = builder.Configuration
    .GetConnectionString("tenant3");

var tenant4ConnectionString = builder.Configuration
    .GetConnectionString("tenant4");

builder.Services.AddMarten(opts =>
{
    opts.MultiTenantedDatabases(x =>
    {
        // Map multiple tenant ids to a single named database
        x.AddMultipleTenantDatabase(db1ConnectionString, "database1")
            .ForTenants("tenant1", "tenant2");

        // Map a single tenant id to a database, which uses the tenant id as well for the database identifier
        x.AddSingleTenantDatabase(tenant3ConnectionString, "tenant3");
        x.AddSingleTenantDatabase(tenant4ConnectionString, "tenant4");
    });

    // Register all the known document types just
    // to enable database schema management
    opts.RegisterDocumentType<Target>();

    // This is *only* necessary if you want to put more
    // than one tenant in one database. Which we did.
    opts.Policies.AllDocumentsAreMultiTenanted();
});
Now let’s see this in usage a bit. Knowing that the variable theStore in the test below is the IDocumentStore registered in our system with the configuration code above, this test shows off a bit of the multi-tenancy usage:

public async Task can_use_bulk_inserts()
{
    var targets3 = Target.GenerateRandomData(100).ToArray();
    var targets4 = Target.GenerateRandomData(50).ToArray();

    await theStore.Advanced.Clean.DeleteAllDocumentsAsync();

    // This will load new Target documents into the "tenant3" database
    await theStore.BulkInsertDocumentsAsync("tenant3", targets3);

    // This will load new Target documents into the "tenant4" database
    await theStore.BulkInsertDocumentsAsync("tenant4", targets4);

    // Open a query session for "tenant3". This QuerySession will
    // be connected to the "tenant3" database
    using (var query3 = theStore.QuerySession("tenant3"))
    {
        var ids = await query3.Query<Target>().Select(x => x.Id).ToListAsync();

        ids.OrderBy(x => x).ShouldHaveTheSameElementsAs(targets3.OrderBy(x => x.Id).Select(x => x.Id).ToList());
    }

    using (var query4 = theStore.QuerySession("tenant4"))
    {
        var ids = await query4.Query<Target>().Select(x => x.Id).ToListAsync();

        ids.OrderBy(x => x).ShouldHaveTheSameElementsAs(targets4.OrderBy(x => x.Id).Select(x => x.Id).ToList());
    }
}

So far, so good. There’s a little extra configuration in this case to express the mapping of tenants to databases, but after that, the mechanics are identical to the previous “Conjoined” multi-tenancy model in Marten. However, as the next set of questions will show, there was a lot of thinking and new infrastructure code under the visible surface because Marten can no longer assume that there’s only one database in the system.


To dive a little deeper, I’m going to try to anticipate the questions a user might have about this new functionality:

Is there a DocumentStore per database, or just one?

DocumentStore is a very expensive object to create because of the dynamic code compilation that happens within it. Fortunately, with this new feature set, there is only one DocumentStore. That single DocumentStore does track database schema difference detection per database, though.

How much can I customize the database configuration?

The out of the box options for “database per tenant” configuration are pretty limited, and we know that they won’t cover every possible need of our users. No worries though, because this is pluggable by writing your own implementation of our ITenancy interface, then setting that on StoreOptions.Tenancy as part of your Marten bootstrapping.

For more examples, here’s the StaticMultiTenancy model that underpins the example usage up above. There’s also the SingleServerMultiTenancy model that will dynamically create a named database on the same database server for each tenant id.

To apply your custom ITenancy model, set that on StoreOptions like so:

var store = DocumentStore.For(opts =>
{
    // Tenancy option below replaces the usual Connection() call
    //opts.Connection("connection string");

    // Apply custom tenancy model
    opts.Tenancy = new MySpecialTenancy();
});

Is it possible to mix “Conjoined” multi-tenancy with multiple databases?

Yes, it is, and the example code above tried to show that. You’ll still have to mark document types as MultiTenanted() to opt into the conjoined multi-tenancy in that case. We supported that model thinking that this would be helpful for cases where the logical tenant of an application may have suborganizations. Whether or not this ends up being useful is yet to be proven.

What about the “Clean” functionality?

Marten has some built in functionality to reset or teardown database state on demand that is frequently used for test automation (think Respawn, but built into Marten itself). With the introduction of database per tenant multi-tenancy, the old IDocumentStore.Advanced.Clean functionality had to become multi-database aware. So when you run this code:

    // theStore is an IDocumentStore
    await theStore.Advanced.Clean.DeleteAllDocumentsAsync();

Marten is deleting all the document data in every known tenant database. To be more targeted, we can also “clean” a single database like so:

            // Find a specific database
            var database = await store.Storage.FindOrCreateDatabase("tenant1");

            // Delete all document data in just this database
            await database.DeleteAllDocumentsAsync();

What about database management?

Marten tries really hard to manage database schema changes for you behind the scenes so that your persistence code “just works.” Arguably the biggest task for per database multi-tenancy was enhancing the database migration code to support multiple databases.

If you’re using the Marten command line support for the system above, this will apply any outstanding database changes to each and every known tenant database:

dotnet run -- marten-apply

But to be more fine-grained, we can choose to apply changes to only the tenant database named “database1” like so:

dotnet run -- marten-apply --database database1

And lastly, you can interactively choose which databases to migrate like so:

dotnet run -- marten-apply -i

In code, you can direct Marten to detect and apply any outstanding database migrations (between how Marten is configured in code and what actually exists in the underlying database) across all tenant databases upon application startup like so:

services.AddMarten(opts =>
{
    // Marten configuration...
})
// Apply outstanding migrations to every known
// tenant database on application startup
.ApplyAllDatabaseChangesOnStartup();

The migration code above runs in an IHostedService upon application startup. To avoid collisions between multiple nodes in your application starting up at the same time, Marten uses a Postgresql advisory lock so that only one node at a time can be trying to apply database migrations. Lesson learned:)

Or in your own code, assuming that you have a reference to an IDocumentStore object named theStore, you can use this syntax:

// Apply changes to all tenant databases
await theStore.Storage.ApplyAllConfiguredChangesToDatabaseAsync();

// Apply changes to only one database
var database = await theStore.Storage.FindOrCreateDatabase("database1");
await database.ApplyAllConfiguredChangesToDatabaseAsync();

Can I execute a transaction across databases in Marten?

Not natively with Marten, but I think you could pull that off with a TransactionScope wrapping a separate Marten IDocumentSession for each database.
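
Here's a hedged sketch of that approach, not an officially supported pattern. Whether both connections can actually enlist in one ambient transaction depends on Npgsql's distributed transaction support on your platform, and the connection strings would need to allow enlistment, so treat this as a starting point:

```csharp
using System.Transactions;

// TransactionScopeAsyncFlowOption.Enabled is required when
// awaiting inside the scope
using var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled);

// One session per tenant database
await using var session1 = store.LightweightSession("tenant1");
await using var session2 = store.LightweightSession("tenant2");

session1.Store(new User { UserName = "first" });
session2.Store(new User { UserName = "second" });

await session1.SaveChangesAsync();
await session2.SaveChangesAsync();

// The intent: both databases commit together, or neither does
scope.Complete();
```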

Does the async daemon work across the databases?

Yes! Using the IHost integration to set up the async daemon like so:

services.AddMarten(opts =>
{
    // Marten configuration...
})
// Starts up the async daemon across all known
// databases on one single node
.AddAsyncDaemon(DaemonMode.Solo);

Behind the scenes, Marten is just iterating over all the known tenant databases and actually starting up a separate object instance of the async daemon for each database.

We don’t yet have any way of distributing projection work across application nodes, but that is absolutely planned.

Can I rebuild a projection by database? Or by all databases at one time?

Oopsie. In the course of writing this blog post I realized that we don’t yet support “database per tenant” with the command line projections option. You can create a daemon instance programmatically for a single database like so:

// Rebuild the TripProjection on just the database named "database1"
using var daemon = await theStore.BuildProjectionDaemonAsync("database1");
await daemon.RebuildProjection<TripProjection>(CancellationToken.None);

*I chose the name “Conjoined,” but don’t exactly remember why. I’m going to claim that it was taken from the “Conjoiner” sect in Alastair Reynolds’ “Revelation Space” series.

Marten V5 is out!

The Marten team published Marten V5.0 today! It’s not as massive a leap as the Marten V4 release late last year was (and a much, much easier transition from 4 to 5 than 3 to 4 was:)), but I think this release addresses a lot of the issues folks have had with V4 and makes Marten a much better tool in production and in development.

Some highlights:

  • The release notes and the 5.0 GitHub milestone issues.
  • The closed GitHub milestone just to prove we were busy
  • Fully supports .Net 6 and the latest version of Npgsql for folks who use Marten in combination with Dapper or *gasp* EF Core
  • Marten finally supports doing multi-tenancy through a “database per tenant” strategy with Marten fully able to handle schema migrations across all the known databases!
  • There were a lot of improvements to the database change management, and the “pre-built code generation” model has a much easier to use alternative now. See the Development versus Production Usage documentation, as well as the new AddMarten().ApplyAllDatabaseChangesOnStartup() option.
  • I went through the Marten internals with a fine-toothed comb to eliminate sync-over-async calls (.GetAwaiter().GetResult()) in order to prevent the deadlock issues that some users had reported with, shall we say, “alternative” Marten usages.
  • You can now add and resolve additional document stores in one .Net application.
  • There’s a new option for “custom aggregations” in the event sourcing support for advanced aggregations that fall outside of what was previously possible. This still allows for the performance optimizations we did for Marten V4 aggregates without having to roll your own infrastructure.

As always, thank you to Oskar Dudycz and Babu Annamalai for all their contributions as Marten is fortunately a team effort.

I’ll blog more later this week and next on the big additions. 5.0.1 will inevitably follow soon with who knows what bug fixes. And after that, I’m taking a break on Marten development for a bit:)

Batch Querying with Marten

Before I talk about the batch querying feature set in Marten, let’s take a little detour through a common approach to persistence in .Net architectures that commonly causes the exact problem that Marten’s batch querying seeks to solve.

I’ve been in several online debates lately about the wisdom or applicability of granular repository abstractions over underlying persistence infrastructure like EF Core or Marten, like this sample below:

    public interface IRepository<T>
    {
        Task<T> Load(Guid id, CancellationToken token = default);
        Task Insert(T entity, CancellationToken token = default);
        Task Update(T entity, CancellationToken token = default);
        Task Delete(T entity, CancellationToken token = default);

        IQueryable<T> Query();
    }

That’s a pretty common approach, and I’m sure it’s working out for some people, at least in simpler CRUD-centric applications. Unfortunately though, that reliance on fine-grained repositories breaks down badly in more complicated systems where a single logical operation may need to span multiple entity types. Not coincidentally, I have frequently seen this kind of fine grained abstraction directly lead to performance problems in the systems I’ve helped with after their original construction over the past 6-8 years.

For an example, let’s say that we have a message handler that will need to access and modify data from three different entity types in one logical transaction. Using the fine grained repository strategy, we’d have something like this:

    public class SomeMessage
    {
        public Guid UserId { get; set; }
        public Guid OrderId { get; set; }
        public Guid AccountId { get; set; }
    }

    public class Handler
    {
        private readonly IUnitOfWork _unitOfWork;
        private readonly IRepository<Account> _accounts;
        private readonly IRepository<User> _users;
        private readonly IRepository<Order> _orders;

        public Handler(
            IUnitOfWork unitOfWork,
            IRepository<Account> accounts,
            IRepository<User> users,
            IRepository<Order> orders)
        {
            _unitOfWork = unitOfWork;
            _accounts = accounts;
            _users = users;
            _orders = orders;
        }

        public async Task Handle(SomeMessage message)
        {
            // The potential performance problem is right here.
            // Multiple round trips to the database
            var user = await _users.Load(message.UserId);
            var account = await _accounts.Load(message.AccountId);
            var order = await _orders.Load(message.OrderId);

            var otherOrders = _orders.Query()
                .Where(x => x.Amount > 100)
                .ToList();

            // Carry out rules and whatnot

            await _unitOfWork.Commit();
        }
    }

So here’s the problem with the code up above as I see it:

  1. You’re having to inject separate dependencies for the matching repository type for each entity type, and that adds code ceremony and noise code.
  2. The code is making repeated round trips to the database server every time it needs more data. This is a contrived example, and it’s only 4 trips, but in real systems this could easily be many more. To make this perfectly clear, one of the very most pernicious sources of slow code is chattiness (frequent network round trips) between the application layer and backing database.

Fortunately, Marten has a facility called batch querying that we can use to fetch multiple data queries at one time, and even start processing against the earlier results while the later results are still being read. To use that, we’ve got to ditch the “one size fits all, least common denominator” repository abstraction and use the raw Marten IDocumentSession service as shown in this version below:

    public class MartenHandler
    {
        private readonly IDocumentSession _session;

        public MartenHandler(IDocumentSession session)
        {
            _session = session;
        }

        public async Task Handle(SomeMessage message)
        {
            // Not gonna lie, this is more code than the first alternative
            var batch = _session.CreateBatchQuery();

            var userLookup = batch.Load<User>(message.UserId);
            var accountLookup = batch.Load<Account>(message.AccountId);
            var orderLookup = batch.Load<Order>(message.OrderId);
            var otherOrdersLookup = batch.Query<Order>().Where(x => x.Amount > 100).ToList();

            await batch.Execute();

            // We can immediately start using the data from earlier
            // queries in memory while the later queries are still processing
            // in the background for a little bit of parallelization
            var user = await userLookup;
            var account = await accountLookup;
            var order = await orderLookup;

            var otherOrders = await otherOrdersLookup;

            // Carry out rules and whatnot

            // Commit any outstanding changes with Marten
            await _session.SaveChangesAsync();
        }
    }

The code above creates a single, batched query for the four queries this handler needs, meaning that Marten makes a single round trip to the database for the four SELECT statements. As an improvement in the Marten V4 release, the results coming back from Postgresql are processed in a background Task, meaning that in the code above we can start working with the initial Account, User, and Order data while Marten is still building out the last Order results (remember that Marten has to deserialize JSON data to build out your documents, and that can be non-trivial for large documents).

I think these are the takeaways for the before and after code here:

  1. Network round trips are expensive and chattiness can be a performance bottleneck, but batch querying approaches like Marten’s can help a great deal.
  2. Putting your persistence tooling behind least common denominator abstractions like the IRepository<T> approach shown above eliminates the ability to use advanced features of your actual persistence tooling. That’s a serious drawback, as it disallows the usage of the exact features that allow you to create high performance solutions, and this isn’t specific to using Marten as your backing persistence tooling.
  3. Writing highly performant code can easily mean writing more code as you saw above with the batch querying. The point there being to not automatically opt for the most highly performant approach if it’s unnecessary and more complex than a slower, but simpler approach. Premature optimization and all that.

I’m only showing a small fraction of what batch querying supports, so certainly check out the documentation for more examples.

My professional and OSS aspirations for 2022

I trot out one of these posts at the beginning of each year, but this time around it’s “aspirations” instead of “plans” because a whole lot of this is going to be a repeat from 2020 and 2021. I’m not going to lose any sleep over what doesn’t get done in the New Year, and I want to stay open to brand new opportunities.

In 2022 I just want the chance to interact with other developers. I’ll be at That Conference in Round Rock, TX in January, speaking about Event Sourcing with Marten (my first in person conference since late 2019). Other than that, my only goal for the year (Covid-willing) is to maybe speak at a couple more in person conferences just to be able to interact with other developers in real space again.

My peak as a technical blogger was the late aughts, and I think I’m mostly good with not sweating any kind of attempt to regain that level of readership. I do plan to write material that I think would be useful for my shop, or just about what I’m doing in the OSS space when I feel like it.

Which brings me to the main part of this post: my involvement with the JasperFx (Marten, Lamar, etc.) family of OSS projects (plus Storyteller), which takes up most of my extracurricular software related time. Just for an idea of the interdependencies, here are the highlights of the JasperFx world:

.NET Transactional Document DB and Event Store on PostgreSQL

Marten took a big leap forward late in 2021 with the long running V4.0 release. I think that release might have been the single biggest, most complicated OSS release that I’ve ever been a part of — FubuMVC 1.0 notwithstanding. There’s also a 5.0-alpha release out that addresses .Net 6 support and the latest version of Npgsql.

Right now Marten is a victim of its own success, and our chat room is almost constantly hair on fire with activity, which directly led to some planned improvements for V5 (hopefully by the end of January?) in this discussion thread:

  • Multi-tenancy through a separate database per tenant (long planned, long delayed, finally happening now)
  • Some kind of ability to register and resolve services for more than one Marten database in a single application
  • And related to the previous two bullet points, improved database versioning and schema migrations that could accommodate there being more than one database within a single .Net codebase
  • Improve the “generate ahead” model to make it easier to adopt. Think faster cold start times for systems that use Marten

Beyond that, some of the things I’d like to maybe do with Marten this year are:

  • Investigate the usage of Postgresql table partitioning and database sharding as a way to increase scalability — especially with the event sourcing support
  • Projection snapshotting
  • In conjunction with Jasper, expand Marten’s asynchronous projection support to shard projection work across multiple running nodes, introduce some sort of optimized, no downtime projection rebuilds, and add some options for event streaming with Marten and Kafka or Pulsar
  • Try to build an efficient GraphQL adapter for Marten. And by efficient, I mean that you wouldn’t have to bounce through a Linq translation first and hopefully could opt into Marten’s JSON streaming wherever possible. This isn’t likely, but sounds kind of interesting to play with.

In a perfect, magic, unicorns and rainbows world, I’d love to see the Marten backlog in GitHub get under 50 items and stay there permanently. Commence laughing at me on that one:(

Jasper is a toolkit for common messaging scenarios between .Net applications with a robust in process command runner that can be used either with or without the messaging.

I started working on rebooting Jasper with a forthcoming V2 version late last year, and made quite a bit of progress before Marten got busy and .Net 6 being released necessitated other work. There’s a non-zero chance I will be using Jasper at work, which makes that a much more viable project. I’m currently in flight with:

  • Building Open Telemetry tracing directly into Jasper
  • Bi-directional compatibility with MassTransit applications (absolutely necessary to adopt this in my own shop).
  • Performance optimizations
  • .Net 6 support
  • Documentation overhaul
  • Kafka as a message transport option (Pulsar was surprisingly easy to add, and I’m hopeful that Kafka is similar)

And maybe, just maybe, I might extend Jasper’s somewhat unique middleware approach to web services utilizing the new ASP.Net Core Minimal API support. The idea there is to more or less create an improved version of the old FubuMVC idiom for building web services.

Lamar is a modern IoC container and the successor to StructureMap

I don’t have any real plans for Lamar in the new year, but there are some holes in the documentation, and a couple advanced features could sure use some additional examples. 2021 ended up being busy for Lamar though with:

  1. Lamar v6 added interception (finally), a new documentation website, and a facility for overriding services at test time
  2. Lamar v7 added support for IAsyncEnumerable (also finally), a small enhancement for the Minimal API feature in ASP.Net Core, and .Net 6 support

Add Robust Command Line Options to .Net Applications

Oakton did have a major v4/4.1 release to accommodate .Net 6 and ASP.Net Core Minimal API usage late in 2021, but I have yet to update the documentation. I would like to shift Oakton’s documentation website to VitePress first. The only plan I have for Oakton this year is to maybe see if there’d be a good way for Oakton to enable “buddy” command line tools for your application, like the dotnet ef tool, using the HostFactoryResolver class.

The bustling metropolis of Alba, MO

Alba is a wrapper around the ASP.Net Core TestServer for declarative, in process testing of ASP.Net Core web services. I don’t have any plans for Alba in the new year other than to respond to any issues, or to any opportunities to smooth things out that come from my shop’s usage of Alba.

Alba did get a couple major releases in 2021 though:

  1. Alba 5.0 streamlined the entry API to mimic IHost, converted the documentation website to VitePress, and introduced new facilities for dealing with security in testing.
  2. Alba 6.0 added support for WebApplicationFactory and ASP.Net Core 6

Solutions for creating robust, human readable acceptance tests for your .Net or CoreCLR system and a means to create “living” technical documentation.

Storyteller has been mothballed for years, and I was ready to abandon it last year, but…

We still use Storyteller for some big, long running integration style tests in both Marten and Jasper where I don’t think xUnit/NUnit is a good fit, and I think maybe I’d like to reboot Storyteller later this year. The “new” Storyteller (I’m playing with the idea of calling it “Bobcat” as it might be a different tool) would be quite a bit smaller and much more focused on enabling integration testing rather than trying to be a BDD tool.

Not sure what the approach might be, it could be:

  • “Just” write some extension helpers to xUnit or NUnit for more data intensive tests
  • “Just” write some extension helpers to SpecFlow
  • Rebuild the current Storyteller concept, but also support a Gherkin model
  • Something else altogether?

My goal, if this happens, is to have a tool for automated testing that:

  • Supports much more data intensive tests
  • Better handles integration tests
  • Strong support for test parallelization and even test run sharding in CI
  • Could help write characterization tests with a record/replay kind of model against existing systems (I’d *love* to have this at work)
  • Has some kind of model that is easy to use within an IDE like Rider or VS, even if there is a separate UI like Storyteller does today

And I’d still like to rewrite a subset of the existing Storyteller UI as an excuse to refresh my front end technology skillset.

To be honest, I don’t feel like Storyteller has ever been much of a success, but it’s the OSS project of mine that I’ve most enjoyed working on and most frequently used myself.


Weasel is a set of libraries for database schema migrations and ADO.Net helpers that we spun out of Marten during its V4 release. I’m not super excited about doing this, but Weasel is getting some sort of database migration support very soon. Weasel itself isn’t documented yet, so that’s the only major plan other than supporting whatever Marten and/or Jasper needs this year.


Baseline is a grab bag of helpers and extension methods that dates back to the early FubuMVC project. I haven’t done much with Baseline in years, and it might be time to prune it a little bit as some of what Baseline does is now supported in the .Net framework itself. The file system helpers especially could be pruned down, but then also get asynchronous versions of what’s left.


I don’t think that I got a single StructureMap question last year and stopped following its Gitter room. There are still plenty of systems using StructureMap out there, but I think the mass migration to either Lamar or another DI container is well underway.

Marten’s Compiled Query Feature

TL;DR: Marten’s compiled query feature makes using Linq queries significantly more efficient at runtime if you need to wring out just a little more performance in your Marten-backed application.

I was involved in a Twitter conversation today that touched on the old Specification pattern of describing a reusable database query with an object (watch it, that word is overloaded in the software development world and even refers to separate design patterns). I mentioned that Marten actually has an implementation of this pattern that we call Compiled Queries.

Jumping right into a concrete example, let’s say that we’re building an issue tracking system because we hate Jira so much that we’d rather build one completely from scratch. At some point you’re going to want to query for all open issues currently assigned to a user. Assuming our new Marten-backed issue tracker has a document type called Issue, a compiled query class for that would look like this:

    // ICompiledListQuery<T> is from Marten
    public class OpenIssuesAssignedToUser: ICompiledListQuery<Issue>
    {
        public Expression<Func<IMartenQueryable<Issue>, IEnumerable<Issue>>> QueryIs()
        {
            return q => q
                .Where(x => x.AssigneeId == UserId)
                .Where(x => x.Status == "Open");
        }

        // This is an input parameter to the query
        public Guid UserId { get; set; }
    }

And now in usage, we’ll just spin up a new instance of the OpenIssuesAssignedToUser to query for the open issues for a given user id like this:

    var store = DocumentStore.For(opts =>
    {
        opts.Connection("some connection string");
    });

    await using var session = store.QuerySession();

    var issues = await session.QueryAsync(new OpenIssuesAssignedToUser
    {
        UserId = userId // passing in the query parameter to a known user id
    });

    // do whatever with the issues

Other than the weird method signature of the QueryIs() method, that class is pretty simple if you’re comfortable with Marten’s superset of Linq. Compiled queries can be valuable anywhere where the old Specification (query objects) pattern is useful, but here’s the cool part…

Compiled Queries are Faster

Linq has been an awesome addition to the .Net ecosystem, and it’s usually the very first thing I mention when someone asks me why they should consider .Net over Java or any other programming ecosystem. On the down side though, it’s complicated as hell, there’s some runtime overhead to generating and parsing Linq queries at runtime, and most .Net developers don’t actually understand how it works internally under the covers.

The best part of the compiled query feature in Marten is that on the first usage of a compiled query type, Marten memoizes its “query plan” for the represented Linq query so there’s significantly less overhead for subsequent usages of the same compiled query type within the same application instance.

To illustrate what’s happening when you issue a Linq query, consider the same logical query as above, but this time in inline Linq:

    var issues = await session.Query<Issue>()
        .Where(x => x.AssigneeId == userId)
        .Where(x => x.Status == "Open")
        .ToListAsync();

    // do whatever with the issues

When the Query() code above is executed, Marten is:

  1. Building an entire object model in memory using the .Net Expression model.
  2. Linq itself never executes any of the code within Where() or Select() clauses; instead, it parses and interprets that Expression object model with a series of internal Visitor types.
  3. The result of visiting the Expression model is a corresponding, internal IQueryHandler object that “knows” how to build up the SQL for the query, how to process the resulting rows returned by the database, and how to coerce the raw data into the desired results (JSON deserialization, stashing things in identity maps or dirty checking records, etc.).
  4. Executing the IQueryHandler, which in turn writes out the desired SQL query to the outgoing database command.
  5. Making the actual call to the underlying Postgresql database to return a data reader.
  6. Interpreting the data reader and coercing the raw records into the desired results for the Linq query.

Sounds kind of heavyweight when you list it all out. When we move the same query to a compiled query, we only have to incur the cost of parsing the Linq query Expression model once, and Marten “remembers” the exact SQL statement, how to map query inputs like OpenIssuesAssignedToUser.UserId to the right database command parameter, and even how to process the raw database results. Behind the scenes, Marten is generating and compiling a new class at runtime to execute the OpenIssuesAssignedToUser query like this (I reformatted the generated source code just a little bit here):

using System.Collections.Generic;
using Marten.Internal;
using Marten.Internal.CompiledQueries;
using Marten.Linq;
using Marten.Linq.QueryHandlers;
using Marten.Testing.Documents;
using NpgsqlTypes;
using Weasel.Postgresql;

namespace Marten.Testing.Internals.Compiled
{
    public class OpenIssuesAssignedToUserCompiledQuery: ClonedCompiledQuery<IEnumerable<Issue>, OpenIssuesAssignedToUser>
    {
        private readonly HardCodedParameters _hardcoded;
        private readonly IMaybeStatefulHandler _inner;
        private readonly OpenIssuesAssignedToUser _query;
        private readonly QueryStatistics _statistics;

        public OpenIssuesAssignedToUserCompiledQuery(IMaybeStatefulHandler inner, OpenIssuesAssignedToUser query,
            QueryStatistics statistics, HardCodedParameters hardcoded): base(inner, query, statistics, hardcoded)
        {
            _inner = inner;
            _query = query;
            _statistics = statistics;
            _hardcoded = hardcoded;
        }

        public override void ConfigureCommand(CommandBuilder builder, IMartenSession session)
        {
            var parameters = builder.AppendWithParameters(
                @"select d.id, d.data from public.mt_doc_issue as d where (CAST(d.data ->> 'AssigneeId' as uuid) = ? and  d.data ->> 'Status' = ?)");

            parameters[0].NpgsqlDbType = NpgsqlDbType.Uuid;
            parameters[0].Value = _query.UserId;
        }
    }

    public class OpenIssuesAssignedToUserCompiledQuerySource: CompiledQuerySource<IEnumerable<Issue>, OpenIssuesAssignedToUser>
    {
        private readonly HardCodedParameters _hardcoded;
        private readonly IMaybeStatefulHandler _maybeStatefulHandler;

        public OpenIssuesAssignedToUserCompiledQuerySource(HardCodedParameters hardcoded,
            IMaybeStatefulHandler maybeStatefulHandler)
        {
            _hardcoded = hardcoded;
            _maybeStatefulHandler = maybeStatefulHandler;
        }

        public override IQueryHandler<IEnumerable<Issue>> BuildHandler(OpenIssuesAssignedToUser query,
            IMartenSession session)
        {
            return new OpenIssuesAssignedToUserCompiledQuery(_maybeStatefulHandler, query, null, _hardcoded);
        }
    }
}

What else can compiled queries do?

Besides being faster than raw Linq and being useful as the old reliable Specification pattern, compiled queries can be very valuable if you absolutely insist on mocking or stubbing the Marten IQuerySession/IDocumentSession. You should never, ever try to mock or stub the IQueryable interface with a dynamic mock library like NSubstitute or Moq, but mocking the IQuerySession.Query<T>(T query) method is pretty straight forward.
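
As a sketch of that idea with NSubstitute (the exact QueryAsync overload being stubbed here is an assumption on my part, so check it against your Marten version), stubbing the compiled query from the earlier example might look like:

```csharp
// Hypothetical test setup; OpenIssuesAssignedToUser is the
// compiled query class from the example above
var session = Substitute.For<IQuerySession>();

var cannedResults = new List<Issue>
{
    new Issue { Status = "Open" }
};

// The stub keys off the compiled query *type*, not a Linq expression,
// which is what makes this tractable compared to mocking IQueryable
session.QueryAsync(Arg.Any<OpenIssuesAssignedToUser>(), Arg.Any<CancellationToken>())
    .Returns(cannedResults);
```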

Most of the Linq support in Marten is usable within compiled queries — even the Include() feature for querying related document types in one round trip. There’s even an ability to “stream” the raw JSON byte array data from compiled query results directly to the HTTP response body in ASP.Net Core for Marten’s “ludicrous speed” mode.

Multi-Tenancy with Marten

We’ve got an upcoming Marten 5.0 release ostensibly to support breaking changes related to .Net 6, but that also gives us an opportunity to consider work that would result in breaking API changes. A strong candidate for V5 right now is finally adding long delayed first class support for multi-tenancy through separate databases.

Let’s say that you’re building an online database-backed, web application of some sort that will be servicing multiple clients. At a minimum, you need to isolate data access so that client users can only interact with the data for the correct client or clients. Ideally, you’d like to get away with only having one deployed instance of your application that services the users of all the clients. In other words, you want to support “multi-tenancy” in your architecture.

Software multitenancy is a software architecture in which a single instance of software runs on a server and serves multiple tenants.

Multi-tenancy on Wikipedia

For the rest of this post, I’m going to use the term “tenant” to refer to whatever the organizational entity is that owns separate database data. Depending on your business domain, that could be a client, a sub-organization, a geographic area, or some other organizational concept.

Fortunately, if you use Marten as your backing database store, Marten has strong support for multi-tenancy with new improvements in the recent V4 release and more potential improvements tentatively planned for V5.

There are three basic approaches to segregating tenant data in a database:

  1. Single database, single schema, but use a field or property in each table to denote the tenant. This is Marten’s approach today with what we call the “Conjoined” model. The challenge here is that all queries and writes to the database need to take into account the currently used tenant — and that’s where Marten’s multi-tenancy support helps a great deal. Database schema management is easier with this approach because there’s only one set of database objects to worry about. More on this later.
  2. Separate schema per tenant in a single database. Marten does not support this model, and it doesn’t play well with Marten’s current internal design. I seriously doubt that Marten will ever support this.
  3. Separate database per tenant. This has been in Marten’s backlog forever, and maybe now is the time this finally gets done (plenty of folks have used Marten this way already with custom infrastructure on top of Marten, but there’s some significant overhead). I’ll speak to this much more in the last section of this post.

Basic Multi-Tenancy Support in Marten

To set up multi-tenancy in your document storage with Marten, we can set up a document store with these options:

    var store = DocumentStore.For(opts =>
    {
        opts.Connection("some connection string");

        // Let's just say that each and every document
        // type is going to be multi-tenanted
        opts.Policies.AllDocumentsAreMultiTenanted();

        // Or you can do this document type by document type
        // if some document types are not related to a tenant
        // opts.Schema.For<User>().MultiTenanted();
    });
There are a couple of other ways to opt document types into multi-tenancy, but you get the point. With just this configuration, we can start a new Marten session for a particular tenant and carry out basic operations isolated to a single tenant like so:

    // Open a session specifically for the tenant "tenant1"
    await using var session = store.LightweightSession("tenant1");

    // This would return *only* the admin users from "tenant1"
    var users = await session.Query<User>()
        .Where(x => x.Roles.Contains("admin"))
        .ToListAsync();

    // This user would automatically be tagged as belonging to "tenant1"
    var user = new User {UserName = "important_guy", Roles = new string[] {"admin"}};
    session.Store(user);

    await session.SaveChangesAsync();

The key thing to note here is that other than telling Marten which tenant you want to work with when you open a new session, you don’t have to do anything else to keep the tenant data segregated. Marten handles those mechanics behind the scenes on all queries, inserts, updates, and deletions from that session.
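To see that isolation in action, here’s a sketch building on the example above (the session variable names and document contents are mine). A session opened for one tenant simply never sees another tenant’s documents:

```csharp
// Store a user under "tenant1"
await using (var session1 = store.LightweightSession("tenant1"))
{
    session1.Store(new User { UserName = "alice", Roles = new[] { "admin" } });
    await session1.SaveChangesAsync();
}

// A session opened for "tenant2" will not see tenant1's data at all;
// Marten quietly appends the tenant filter to the query
await using (var session2 = store.LightweightSession("tenant2"))
{
    // Contains only tenant2's documents, so "alice" is not returned
    var users = await session2.Query<User>().ToListAsync();
}
```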

Awesome, except that some folks needed to occasionally do operations against multiple tenants at one time…

Tenant Spanning Operations in Marten V4

The big improvement in Marten V4 for multi-tenancy was making it much easier to work with data from multiple tenants in one document session. Marten has long had the ability to query data across tenants with the AnyTenant() or TenantIsOneOf() Linq extensions like so:

    var allAdmins = await session.Query<User>()
        .Where(x => x.Roles.Contains("admin"))
        // This is a Marten specific extension to Linq
        // querying
        .Where(x => x.AnyTenant())
        .ToListAsync();

That’s great as far as it goes, but there wasn’t any way to know which tenant each returned document belonged to. We made a huge effort in V4 to expand Marten’s document metadata capabilities, and part of that is the ability to write the tenant id onto a document as it is fetched from the database by Marten. The easiest way to opt into that is to have your document type implement the new ITenanted interface like so:

    public class MyTenantedDoc: ITenanted
    {
        public Guid Id { get; set; }

        // This property will be set by Marten itself
        // when the document is persisted or loaded
        // from the database
        public string TenantId { get; set; }
    }

So now, when querying across tenants, we at least have the ability to know which tenant each document belongs to.
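Putting the two features together, a cross-tenant query can now be bucketed by tenant after the fact. This sketch assumes a tenanted document type like the MyTenantedDoc shown above:

```csharp
// Query across all tenants, then group the results in memory
// by the TenantId that Marten wrote onto each document
var docs = await session.Query<MyTenantedDoc>()
    .Where(x => x.AnyTenant())
    .ToListAsync();

var docsByTenant = docs.GroupBy(x => x.TenantId);
```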

The next thing folks wanted from V4 was the ability to make writes against multiple tenants with one single document session in a single unit of work. To that end, Marten V4 introduced the ITenantOperations concept to log operations against a specific tenant other than the one the current session was opened for, with all of those operations committed to the underlying Postgresql database as a single transaction.

To make that concrete, here’s some sample code, this time adding two new User documents with the same user name to two different tenants by tenant id:

    // Same user name, but in different tenants
    var user1 = new User {UserName = "bob"};
    var user2 = new User {UserName = "bob"};

    // This exposes operations against only tenant1
    session.ForTenant("tenant1").Store(user1);

    // This exposes operations that would apply to
    // only tenant2
    session.ForTenant("tenant2").Store(user2);

    // And both operations get persisted in one transaction
    await session.SaveChangesAsync();

So that’s the gist of the V4 multi-tenancy improvements. We also finally support multi-tenancy within the asynchronous projection support, but I’ll blog about that some other time.

Now though, it’s time to consider…

Database per Tenant

To be clear, I’m looking for any possible feedback about the requirements for this feature in Marten. Blast away here in comments, or here’s a link to the GitHub issue, or go to Gitter.

You can achieve multi-tenancy with a database per tenant just by keeping an otherwise identically configured DocumentStore per named tenant in memory, with the only difference being the connection string, and many folks have successfully done exactly that. It certainly can work, especially with a low number of tenants. There are a few problems with that approach though:

  • You’re on your own to configure that in the DI container within your application
  • DocumentStore is a relatively expensive object to create, and it potentially generates a lot of runtime objects that get held in memory. You don’t really want a bunch of those hanging around
  • Going around AddMarten() negates the Marten CLI support, which is the easiest possible way to manage Marten database schema migrations. Now you’re completely on your own about how to do database migrations without using pure runtime database patching — which we do not recommend in production.
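For reference, the do-it-yourself approach described above usually looks something like this sketch. The registry class and the tenant-to-connection-string lookup are hypothetical, not part of Marten itself:

```csharp
using System;
using System.Collections.Concurrent;
using Marten;

// Hypothetical registry: one DocumentStore per tenant, differing only
// by connection string. It works, but every store duplicates Marten's
// expensive runtime machinery, and going around AddMarten() loses the
// CLI-based schema migration support
public class TenantStoreRegistry
{
    private readonly ConcurrentDictionary<string, DocumentStore> _stores = new();
    private readonly Func<string, string> _connectionStringFor;

    public TenantStoreRegistry(Func<string, string> connectionStringFor)
        => _connectionStringFor = connectionStringFor;

    public IDocumentSession OpenSession(string tenantId)
    {
        // Lazily build and cache a store per tenant
        var store = _stores.GetOrAdd(tenantId, id =>
            DocumentStore.For(opts =>
            {
                opts.Connection(_connectionStringFor(id));
                // ...identical document configuration for every tenant...
            }));

        return store.LightweightSession();
    }
}
```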

So let’s just call it a given that we do want to add some formal support for multi-tenancy through a separate database per tenant to Marten. Moreover, database per tenant has been in our backlog forever, but it’s been pushed off every time as we’ve struggled to finish big Marten releases.

I think there’s some potential for this story to cause breaking API changes (I don’t have anything specific in mind, it’s just likely in my opinion), so that makes that story a very good candidate to get in place for Marten V5. From the backlog issue writeup I made back in 2017:

  • Have all tenants tracked in memory, such that a single DocumentStore can share all the expensive runtime built internal objects across tenants
  • A tenanting strategy that can lookup the database connection string per tenant, and create sessions per separate tenants. There’s actually an interface hook in Marten all ready to go that may serve out of the box when we do this (I meant to do this work years ago, but it just didn’t happen).
  • At development time (AutoCreate != AutoCreate.None), be able to spin up a new database on the fly for a tenant if it doesn’t already exist
  • “Know” what all the existing tenants are so that we could apply database migrations from the CLI or through the DocumentStore schema migration APIs
  • Extend the CLI support to support multiple tenant databases
  • Make the database registry mechanism at least a little bit pluggable. Some folks will have only a few tenants and would be fine just writing everything into a static configuration file. Other folks may have a *lot* of tenants (I’ve personally worked on a system that had >100 separate tenant databases in one deployed application), so they may want to look tenants up from a “master” database instead
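As a thought experiment for that last bullet, the pluggable registry might be as simple as an interface along these lines. This is entirely hypothetical on my part, not a committed Marten API:

```csharp
using System.Collections.Generic;

// Hypothetical shape of a pluggable tenant database registry.
// A static-file implementation could serve a handful of tenants,
// while a "master database" implementation could serve hundreds
public interface ITenantDatabaseRegistry
{
    // Resolve the connection string for a single tenant
    string ConnectionStringFor(string tenantId);

    // Enumerate all known tenants so that CLI commands and the
    // schema migration APIs can touch every tenant database
    IReadOnlyList<string> AllTenantIds();
}
```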