Retiring Lamar and the Ghost of IoC Containers Past

EDIT: October 30th, 2024: Never mind, ServiceProvider is awful, and Lamar is going to continue with a version 14 soon-ish.

If you’re not familiar with it, Lamar is an IoC/DI container for .NET. It was originally built to be a faster, modernized, ASP.Net Core-compliant replacement for the much older StructureMap IoC library and also as a necessary subsystem of what is now Wolverine.

First off, if you have a vested investment in continuing to use Lamar in your development environment, it will continue to be supported for the time being with any necessary bug fixes, performance fixes, or the inevitable changes when Microsoft drops a new .NET release that breaks Lamar somehow. I don’t expect there to be any new feature development with Lamar beyond that though.

I do want to start deprecating Lamar throughout the rest of the JasperFx / “Critter Stack” ecosystem though. First off, I think it’s increasingly untenable in the long run to maintain any custom IoC container tool in the .NET ecosystem as Microsoft continues to bake in new assumptions about capabilities and behavior directly into the .NET ServiceProvider (the new “keyed services” feature was done in a really weird way and I see that as a harbinger of yet more pain coming from our MS overlords). Second, we’ve had a few complaints about how Wolverine requires Lamar as its IoC container and silently replaces the built in container in your system. Not that many people really care, but the ones who do have been kind of nasty about it, so it’s just time for the Wolverine coupling to Lamar to go. Third — and I think this is going to be a very good thing in the long run for all of us — the community as a whole seems to be settling on mostly using the small subset of core IoC container behavior that’s exposed by the abstracted IServiceProvider interface and that’s largely making IoC tools a commodity.
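
As a quick refresher, here’s roughly what the .NET 8 keyed services usage looks like against the built in container (the IWidget types here are just the sample types used later in this post):

var services = new ServiceCollection();
services.AddKeyedSingleton<IWidget, AWidget>("A");
services.AddKeyedSingleton<IWidget, BWidget>("B");

var provider = services.BuildServiceProvider();

// Keyed resolution goes through the new IKeyedServiceProvider
// interface that every conforming container now has to implement
var widget = provider.GetRequiredKeyedService<IWidget>("A");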

At this point, I’ve ported Lamar’s type scanning and conventional service discovery to target the basic ServiceDescriptor model in the JasperFx.Core library that’s underneath both Marten and Wolverine. I’d also like to see the command line diagnostics from Lamar ported to the built in DI container as part of Oakton, but I’m not sure what the priority of that will be. Happy to have a volunteer for that one!

You might say that the type scanning thing sounds like Scrutor, but I’d say that it’s the other way around as the StructureMap type scanning predates Scrutor by many years.
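
To make that concrete, here’s a stripped down sketch of what default convention scanning looks like when you target the basic ServiceDescriptor / IServiceCollection model directly. The real implementation handles open generics, filtering, and multiple conventions, so treat this as illustrative only:

public static class ConventionScanner
{
    // Registers Foo as the implementation of IFoo for every
    // concrete class in the assembly that follows the "default
    // convention" of implementing an interface of the same name
    public static void ScanWithDefaultConventions(
        this IServiceCollection services, Assembly assembly)
    {
        var concreteTypes = assembly.GetExportedTypes()
            .Where(x => x.IsClass && !x.IsAbstract && !x.IsGenericTypeDefinition);

        foreach (var concreteType in concreteTypes)
        {
            var pluginType = concreteType.GetInterfaces()
                .FirstOrDefault(x => x.Name == "I" + concreteType.Name);

            if (pluginType != null)
            {
                services.Add(ServiceDescriptor.Transient(pluginType, concreteType));
            }
        }
    }
}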

On the way out, I’d like to run through some of the “special” features of both StructureMap and Lamar as a kind of wake for two projects I spent way too much time on over the years.

Passing Arguments at Runtime with StructureMap

For the most part, when you use an IoC container today you’re just letting it do what we used to call “auto-wiring,” using its configuration to figure out exactly what the full build plan is for a service and all of its dependencies. But StructureMap also let you go loosey-goosey and pass in dependencies at runtime:

var widget = new BWidget();
var service = new BService();

var guyWithWidgetAndService = container
    .With<IWidget>(widget)
    .With<IService>(service)
    .GetInstance<GuyWithWidgetAndService>();

In this case, StructureMap would build GuyWithWidgetAndService with its normal build plan, but substitute in the IWidget and IService values passed into the fluent interface above.

This feature was very flexible, endlessly vulnerable to permutations, and caused StructureMap to be less performant because of all of its runtime logic. I purposely ditched this with Lamar so that Lamar could “bake in” its build plans and pre-compile functions to build objects at runtime (I think basically every mainstream IoC tool you’d likely use in .NET does it this way now).
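
For what it’s worth, the built in world does have a rough, per-call equivalent in ActivatorUtilities, which matches explicitly supplied arguments to constructor parameters by type and resolves everything else from the container (provider here is assumed to be your built IServiceProvider, with the types from the sample above):

var widget = new BWidget();
var service = new BService();

// widget & service are matched to constructor parameters by
// type; any remaining dependencies are resolved from provider
var guyWithWidgetAndService = ActivatorUtilities
    .CreateInstance<GuyWithWidgetAndService>(provider, widget, service);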

Inline Dependencies

Both Lamar and StructureMap allowed you to create service registrations that specified “inline” dependencies for only that service registration:

// ServiceRegistry is Lamar's analogue to IServiceCollection
// The For<T>().Use<TConcrete>() syntax is Lamar's older version
// of the AddSingleton<T>() extension methods used today
// with IServiceCollection
public class InlineCtorArgs : ServiceRegistry
{
    public InlineCtorArgs()
    {
        // Defining args by type
        For<IEventRule>().Use<SimpleRule>()
            .Ctor<ICondition>().Is<Condition1>()
            .Ctor<IAction>().Is<Action1>()
            .Named("One");

        // Pass the explicit values for dependencies
        For<IEventRule>().Use<SimpleRule>()
            .Ctor<ICondition>().Is(new Condition2())
            .Ctor<IAction>().Is(new Action2())
            .Named("Two");
    }
}

Not something I’ve used myself in the recent past, but at one point this was relatively common. I used this to build rules engines with the “Event Condition Action” pattern.

Built In Environment Checks

Several containers now have diagnostics that allow you to verify the configuration by basically asking “do I know enough to build everything that is registered, and are any implied dependencies missing?” Lamar & StructureMap both did that with their diagnostics (which are more robust than the built in DI container’s, thank you), but a special little hook they had is built in environment checks, as shown below:

    public class DatabaseUsingService
    {
        private readonly DatabaseSettings _settings;

        public DatabaseUsingService(DatabaseSettings settings)
        {
            _settings = settings;
        }

        [ValidationMethod]
        public void Validate()
        {
            // For *now*, Lamar requires validate methods be synchronous
            using (var conn = new SqlConnection(_settings.ConnectionString))
            {
                // If this blows up, the environment check fails:)
                conn.Open();
            }
        }
    }

That allowed you to do some additional runtime checking to verify a system was ready to work when container.AssertConfigurationIsValid() is called. Just to show my age, this feature was originally added to probe whether or not required COM components were registered locally during deployment so that a deployment could “fail fast” and be quickly reverted.
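
There’s no direct equivalent with the built in container, but you can approximate the fail fast behavior today with a hosted service that runs your checks during application startup. A minimal sketch, reusing the DatabaseSettings type from above and assuming the usual Microsoft.Extensions.Hosting and Microsoft.Data.SqlClient namespaces:

public class DatabaseEnvironmentCheck : IHostedService
{
    private readonly DatabaseSettings _settings;

    public DatabaseEnvironmentCheck(DatabaseSettings settings)
    {
        _settings = settings;
    }

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        // If this blows up, the host fails to start, so a bad
        // deployment still "fails fast"
        await using var conn = new SqlConnection(_settings.ConnectionString);
        await conn.OpenAsync(cancellationToken);
    }

    public Task StopAsync(CancellationToken cancellationToken)
        => Task.CompletedTask;
}

Register that with services.AddHostedService<DatabaseEnvironmentCheck>() and the application won’t come up if the database is unreachable.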

DisposalLock because devs just can’t be trusted!

I bumped into this one just today. Let’s say that you have some naughty code in your system that is unintentionally trying to dispose the root container for your application (don’t laugh, it happened enough to spawn this feature). StructureMap/Lamar have a feature specifically for this problem that allows you to “lock” the disposal of the container to either find or stop the problem:

var container = Container.Empty();

// Ignore any calls to Container.Dispose()
container.DisposalLock = DisposalLock.Ignore;

// Throw an exception right here and now so we 
// can find out who is erroneously disposing the container!
container.DisposalLock = DisposalLock.ThrowOnDispose;

// Normal mode
container.DisposalLock = DisposalLock.Unlocked;

container.Dispose();
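
The built in container has nothing like this, so if you truly needed the guard rail you’d have to roll your own. A purely illustrative sketch of the ThrowOnDispose behavior as a wrapper:

public class LockedRootProvider : IServiceProvider, IDisposable
{
    private readonly ServiceProvider _inner;

    public LockedRootProvider(ServiceProvider inner)
    {
        _inner = inner;
    }

    public object? GetService(Type serviceType)
        => _inner.GetService(serviceType);

    public void Dispose()
    {
        // Blow up loudly so we can find out who is erroneously
        // disposing the root container
        throw new InvalidOperationException(
            "Something is trying to dispose the root container!");
    }
}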

Decorators and Interceptors

Lamar has a strong model for registering decorators or interceptors on service registrations, either to carry out some kind of action on a just built object or to wrap the inner behavior with a decorator:

public class WidgetDecorator : IWidget
{
    public WidgetDecorator(IThing thing, IWidget inner)
    {
        Inner = inner;
    }

    public IWidget Inner { get; }

    public void DoSomething()
    {
        // do something before 
        Inner.DoSomething();
        // do something after
    }
}

var container = new Container(_ =>
{
    // This usage adds WidgetDecorator as a decorator
    // on all IWidget registrations
    _.For<IWidget>().DecorateAllWith<WidgetDecorator>();

    // The AWidget type will be decorated w/ 
    // WidgetDecorator when you resolve it from the container
    _.For<IWidget>().Use<AWidget>();

    _.For<IThing>().Use<Thing>();
});

// Snippet of code from a unit test in the Lamar
// codebase
container.GetInstance<IWidget>()
    .ShouldBeOfType<WidgetDecorator>()
    .Inner.ShouldBeOfType<AWidget>();

This isn’t a feature I’ve personally used in years, but it was very common at one point. It’s also an awesome way for a team to bake in some magic behavior off to the side where folks won’t be able to find it easily later when the code misbehaves. At one point, I was absolutely ready to throttle very early MediatR users who abused the hell out of Lamar decorators as a de facto middleware strategy and constantly asked me for help before MediatR got a usable middleware story of its own.
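
For comparison, the built in container has no first class decorator support, so the usual workaround there is a manual factory registration along these lines (reusing the sample types from above):

var services = new ServiceCollection();
services.AddSingleton<IThing, Thing>();

// Register the concrete "inner" type directly...
services.AddSingleton<AWidget>();

// ...then register the service type as a factory that wraps
// the inner widget with the decorator
services.AddSingleton<IWidget>(s => new WidgetDecorator(
    s.GetRequiredService<IThing>(),
    s.GetRequiredService<AWidget>()));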

Child Containers in StructureMap

Child Containers in StructureMap (or Profiles) were partially isolated containers that inherited service registrations (and singletons) from a root container while allowing you to make selected overrides of the parent container’s registrations.

Let’s just start by looking at this sample code:

[Fact]
public void show_a_child_container_in_action()
{
    var parent = new Container(_ =>
    {
        _.For<IWidget>().Use<AWidget>();
        _.For<IService>().Use<AService>();
    });

    // Create a child container and override the
    // IService registration
    var child = parent.CreateChildContainer();
    child.Configure(_ =>
    {
        _.For<IService>().Use<ChildSpecialService>();
    });

    // The child container has a specific registration
    // for IService, so use that one
    child.GetInstance<IService>()
        .ShouldBeOfType<ChildSpecialService>();

    // The child container does not have any
    // override of IWidget, so it uses its parent's
    // configuration to resolve IWidget
    child.GetInstance<IWidget>()
        .ShouldBeOfType<AWidget>();
}

Sigh, folks loved this feature way back in the day. Especially for bigger, heavy client applications and sometimes for multi-tenancy. For me as a maintainer, it was a never ending nightmare to support because of all the possible permutations and how users would interpret the “proper” behavior differently. It also made StructureMap slow and more complex. I jettisoned this with Lamar to both optimize performance and to simplify my life.
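
If you need a poor man’s version of this against the built in container, about the best you can do is copy the parent’s ServiceDescriptor registrations into a new collection and override from there. A sketch, with the big caveat that the “child” will not share the parent’s singleton instances (exactly the kind of semantic disagreement that made the real feature such a headache):

var parentServices = new ServiceCollection();
parentServices.AddSingleton<IWidget, AWidget>();
parentServices.AddSingleton<IService, AService>();

// Copy every registration from the parent...
var childServices = new ServiceCollection();
foreach (var descriptor in parentServices)
{
    childServices.Add(descriptor);
}

// ...then apply the child specific override
childServices.AddSingleton<IService, ChildSpecialService>();

// Caveat: singleton services are rebuilt here, *not* shared
// with the parent container
var child = childServices.BuildServiceProvider();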

There’s a very important lesson in this feature and others on this page: the flexibility of a software tool and the performance characteristics of that tool are often in conflict, and there’s a tradeoff to be made.

Setter Injection

StructureMap and Lamar both supported setter injection, where instead of only supplying dependencies through constructor functions, the container can also set dependencies on properties. There were various ways to do that, but the simplest (and ugliest) was through attributes:

public class Repository
{
    // Adding SetterProperty to a property directs
    // Lamar to set this property when
    // constructing a Repository instance
    [SetterProperty] public IDataProvider Provider { get; set; }

    [SetterProperty] public bool ShouldCache { get; set; }
}

But you could also do it through policies:

public class ClassWithNamedProperties
{
    public int Age { get; set; }
    public string LastName { get; set; }
    public string FirstName { get; set; }
    public IGateway Gateway { get; set; }
    public IService Service { get; set; }
}

[Fact]
public void specify_setter_policy_and_construct_an_object()
{
    var theService = new ColorService("red");

    var container = new Container(x =>
    {
        x.For<IService>().Use(theService);
        x.For<IGateway>().Use<DefaultGateway>();

        x.ForConcreteType<ClassWithNamedProperties>().Configure.Setter<int>().Is(5);

        x.Policies.SetAllProperties(
            policy => policy.WithAnyTypeFromNamespace("StructureMap.Testing.Widget3"));
    });

    var description = container.Model.For<ClassWithNamedProperties>().Default.DescribeBuildPlan();
    Debug.WriteLine(description);

    var target = container.GetInstance<ClassWithNamedProperties>();
    target.Service.ShouldBeSameAs(theService);
    target.Gateway.ShouldBeOfType<DefaultGateway>();
}

Now, this obviously sets up users for potential confusion about which properties are or are not being fulfilled by Lamar. I’m much more tolerant of a little “magic” in my tools and put a higher priority on “cleaner” looking code than I do on insisting that all code be explicit (and too ugly to actually read), but those “magic” tools absolutely have to be backed up with some kind of diagnostic that can unravel the “magic” behavior. Lamar actually has this now with its ability to preview the “build plan” for exactly how it’s going to resolve and build a service registration:

var container = new Container(x =>
{
    x.For<IEngine>().Use<Hemi>().Named("The Hemi");

    x.For<IEngine>().Add<VEight>().Singleton().Named("V8");
    x.For<IEngine>().Add<FourFiftyFour>();
    x.For<IEngine>().Add<StraightSix>().Scoped();

    x.For<IEngine>().Add(c => new Rotary()).Named("Rotary");
    x.For<IEngine>().Add(c => c.GetService<PluginElectric>());

    x.For<IEngine>().Add(new InlineFour());

    x.For<IEngine>().UseIfNone<VTwelve>();
});

// A little heavyweight, but this will show the equivalent C#
// code that demonstrates what Lamar is doing to build
// or resolve every single service registration it has
Console.WriteLine(container.HowDoIBuild());

I’ve always been in the camp that says that setter injection is not ideal and that constructor injection (or method injection in newer frameworks like Wolverine or Minimal API) is preferable, but there was from time to time a valid reason to use setter injection. Usually that was when some kind of inheritance was involved — but that’s its own set of problems too.

Lamar’s diagnostics are head and shoulders better than the built in DI container from Microsoft and I think this will ultimately be the thing I miss most when Lamar is truly retired.

Overriding Services at Runtime

Lamar has some rump ability to do this, but StructureMap gave us the ultimate in runtime flexibility by letting you override service registrations at will at runtime:

[Fact]
public void change_default_in_an_existing_container()
{
    var container = new Container(x => { x.For<IFoo>().Use<AFoo>(); });

    container.GetInstance<IFoo>().ShouldBeOfType<AFoo>();

    // Now, change the container configuration
    container.Configure(x => x.For<IFoo>().Use<BFoo>());

    // The default of IFoo is now different
    container.GetInstance<IFoo>().ShouldBeOfType<BFoo>();

    // or use the Inject method that's just syntactical
    // sugar for replacing the default of one type at a time

    container.Inject<IFoo>(new CFoo());

    container.GetInstance<IFoo>().ShouldBeOfType<CFoo>();
}

This was awesome for integration testing scenarios that depended on the IoC container in tests, but it was an unholy nightmare for me to maintain because it’s a permutation hell kind of trap. Yet again, Lamar chose performance and simplicity over flexibility and dropped this feature.

This is probably the one thing that people complain about the most when switching from StructureMap to Lamar. I sympathize, but stand by my decisions there.
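
For what it’s worth, the modern day equivalent for integration testing is to override registrations while the host is being bootstrapped, before the built in container gets baked (a sketch reusing the IFoo types from the test above):

using var host = await Host.CreateDefaultBuilder()
    .ConfigureServices(services =>
    {
        // The last registration for IFoo wins for single service
        // resolution, so this effectively replaces the
        // application's default IFoo
        services.AddSingleton<IFoo>(new CFoo());
    })
    .StartAsync();

host.Services.GetRequiredService<IFoo>().ShouldBeOfType<CFoo>();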

Missing Registration Policies

A lot of StructureMap was built in the days when Ruby on Rails seemed poised to dominate the development landscape, and I thought that Ruby’s method_missing functionality looked cool as hell. StructureMap & Lamar both have a capability to “discover” or determine missing registrations at runtime. While this was extensible, and we did use it on projects here and there, it was mostly used internally by Lamar to “auto close” open generic types, to find registrations for IEnumerable<T> by looking for T registrations, and for a few other cases.

The one Lamar/StructureMap feature (besides the diagnostics) I miss when using ServiceProvider is that Lamar can auto-resolve a concrete type upon request as long as Lamar can find all the necessary dependencies.
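
The nearest built in equivalent is opt-in and per-call rather than automatic. ActivatorUtilities can fall back to constructing an unregistered concrete type, where ConcreteThing below is a hypothetical class whose constructor dependencies are all registered:

// Resolves ConcreteThing from the container if it happens to be
// registered, otherwise builds it on the fly by pulling its
// constructor dependencies from the provider -- for this call only
var thing = ActivatorUtilities
    .GetServiceOrCreateInstance<ConcreteThing>(provider);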

Lamar also has several policy types that would enable you to alter how registrations are actually built based on user defined policies like:

Use the “invoice” database connection string anytime the concrete type is in the namespace “OutApplication.Invoices”

Again, lots of opportunity for confusing the hell out of folks, and you’d absolutely want the magic unraveling diagnostics Lamar has in order to use something like that.

Summary

I’ve given up on several long running OSS projects over the past 10-12 years with varying emotions of anger, disappointment, relief, or even a feeling of accomplishment. With Lamar, I feel like it served its purpose in its time, and anyway:

“‘The world has moved on,’ we say… we’ve always said. But it’s moving on faster now. Something has happened to time.”

Roland Deschain

The “Critter Stack” Just Leveled Up on Modular Monolith Support

The goal for the “Critter Stack” tools is to be the absolute best set of tools for building server side .NET applications, and especially for any usage of Event Driven Architecture approaches. To go even farther, I would like there to be a day where organizations purposely choose the .NET ecosystem just because of the benefits that the “Critter Stack” provides over other options. But for now, that’s the journey we’re on. This post demonstrates an important new feature that I think fills in a huge capability gap that has long bothered me.

And as always, JasperFx Software is happy to work with any “Critter Stack” users through either support contracts or consulting engagements to help you wring the most value out of our tools and help you succeed with what you’re building.

I recently wrote some posts about the whole “Modular Monolith” architecture approach:

  1. Thoughts on “Modular Monoliths”
  2. Actually Talking about Modular Monoliths
  3. Modular Monoliths and the “Critter Stack”

Marten already has strong support for modular monoliths through its “separate store” functionality. In the last post though, I lamented that all the whizz bang integration between Wolverine and Marten (the aggregate handler workflow, Wolverine’s transactional outbox, Marten side effects, and the new event subscription model) that makes the full “Critter Stack” such a productive toolset for Event Sourcing was, alas, not available in conjunction with Marten’s separate store model.

This week I’m helping a JasperFx client who has some complicated multi-tenancy requirements. In one of their services they have some types of event streams that need to use “conjoined multi-tenancy”, but at least one type of event stream (and related aggregate) that is global across all tenants. Marten event stores are either multi-tenanted or they’re not, with no mixing and matching. It occurred to me that we could solve this issue by putting the one type of global event streams in a separate Marten store. Even though the 2nd Marten store will still target the exact same PostgreSQL database (but in a different schema), we can give this second schema a different configuration to accommodate the different tenancy rules. Moreover, this would even be a good way to improve performance and scalability of their service by effectively sharding the events and streams tables (smaller tables generally mean better performance).

At the same time, I’m also helping them introduce Wolverine message handlers, and I really wanted to be able to use the aggregate handler workflow for commands that spawn new Marten events (effectively the Critter Stack version of the “Decider” pattern, but with lower ceremony). I took some time — and stumbled onto a workable approach — that finally adds far better support for modular monolith architectures with the Wolverine 2.13.0 release that hit today.

Specifically, Wolverine finally got some support for full integration with ancillary document and event stores from Marten in the same application.

To see a sneak peek, let’s say that you have two additional Marten stores for your application like these two:

public interface IPlayerStore : IDocumentStore;
public interface IThingStore : IDocumentStore;

You can now bootstrap a Marten + Wolverine application (using the WolverineFx.Marten NuGet dependency) like so:

theHost = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Services.AddMarten(Servers.PostgresConnectionString).IntegrateWithWolverine();

        opts.Policies.AutoApplyTransactions();
        opts.Durability.Mode = DurabilityMode.Solo;

        opts.Services.AddMartenStore<IPlayerStore>(m =>
        {
            m.Connection(Servers.PostgresConnectionString);
            m.DatabaseSchemaName = "players";
        })
            // THIS AND BELOW IS WHAT IS NEW FOR WOLVERINE 2.13
            .IntegrateWithWolverine()
            
            // Add a subscription
            .SubscribeToEvents(new ColorsSubscription())
            
            // Forward events to wolverine handlers
            .PublishEventsToWolverine("PlayerEvents", x =>
            {
                x.PublishEvent<ColorsUpdated>();
            });
        
        // Look at that, it even works with Marten multi-tenancy through separate databases!
        opts.Services.AddMartenStore<IThingStore>(m =>
        {
            m.MultiTenantedDatabases(tenancy =>
            {
                tenancy.AddSingleTenantDatabase(tenant1ConnectionString, "tenant1");
                tenancy.AddSingleTenantDatabase(tenant2ConnectionString, "tenant2");
                tenancy.AddSingleTenantDatabase(tenant3ConnectionString, "tenant3");
            });
            m.DatabaseSchemaName = "things";
        }).IntegrateWithWolverine(masterDatabaseConnectionString:Servers.PostgresConnectionString);

        opts.Services.AddResourceSetupOnStartup();
    }).StartAsync();

Now, moving to message handlers or HTTP endpoints, you will have to explicitly tag either the containing handler class or the individual message types with the [MartenStore(store type)] attribute, as in this simple example below:

// This will use a Marten session from the
// IPlayerStore rather than the main IDocumentStore
[MartenStore(typeof(IPlayerStore))]
public static class PlayerMessageHandler
{
    // Using a Marten side effect just like normal
    public static IMartenOp Handle(PlayerMessage message)
    {
        return MartenOps.Store(new Player{Id = message.Id});
    }
}

Boom! Even that minor sample is using transactional middleware targeting Marten and able to work with the separate IPlayerStore. This new integration includes:

  • Transactional outbox support for all configured Marten stores
  • Transactional middleware
  • The “aggregate handler workflow”
  • Marten side effects
  • Subscriptions to Marten events
  • Multi-tenancy, both “conjoined” Marten multi-tenancy and multi-tenancy through separate databases

For more information, see the documentation on this new feature.

Summary

I’m maybe a little too excited for a feature that most users will never touch, but for those who do need it, the “Critter Stack” now has first class modular monolith support across a wide range of the features that make the “Critter Stack” a desirable platform in the first place.

Sneak Peek of Strong Typed Identifiers in Marten

If you really need to have strong typed identifier support in Marten right now, here’s the long standing workaround.

Some kind of support for “strong typed identifiers” has long been a feature request for Marten from our community. I’ve even been told by a few folks that they wouldn’t consider using Marten until it did have this support. I’ve admittedly been resistant to adding this feature strictly out of (a very well founded) fear that tackling that would be a massive time sink that didn’t really improve the tool in any great way (I’m hoping to be wrong about that).

My reticence about this aside, it came up a couple times in the past week from JasperFx Software customers, and that magically ratchets up the priority quite a bit. That all being said, here’s a little preview of some ongoing work for the next Marten feature release.

Let’s say that you’re using the Vogen library for value types and want to use this custom type for the identity of an Invoice document in Marten:

[ValueObject<Guid>]
public partial struct InvoiceId;

public class Invoice
{
    // Marten will use this for the identifier
    // of the Invoice document
    public InvoiceId? Id { get; set; }
    public string Name { get; set; }
}

Jumping to some already passing tests, Marten can assign an identity to a new document if one is missing, just like it would today for Guid identities:


    [Fact]
    public void store_document_will_assign_the_identity()
    {
        var invoice = new Invoice();
        theSession.Store(invoice);

        // Marten sees that there is no existing identity,
        // so it assigns a new identity 
        invoice.Id.ShouldNotBeNull();
        invoice.Id.Value.Value.ShouldNotBe(Guid.Empty);
    }

Because this actually does matter for database performance, Marten is using a sequential Guid inside of the custom InvoiceId type. Following Marten’s desire for an “it just works” development experience, Marten is able to “know” how to work with the InvoiceId type generated by Vogen without requiring any kind of explicit mapping or mandatory interfaces on the identity type — which I thought was pretty important to keep your domain code from being coupled to Marten.

Moving to basic use cases, here’s a passing test for storing and loading a new document from the database:

    [Fact]
    public async Task load_document()
    {
        var invoice = new Invoice{Name = Guid.NewGuid().ToString()};
        theSession.Store(invoice);

        await theSession.SaveChangesAsync();

        (await theSession.LoadAsync<Invoice>(invoice.Id))
            .Name.ShouldBe(invoice.Name);
    }

and a look at how the strong typed identifiers can play in LINQ expressions so far:

    [Fact]
    public async Task use_in_LINQ_where_clause()
    {
        var invoice = new Invoice{Name = Guid.NewGuid().ToString()};
        theSession.Store(invoice);

        await theSession.SaveChangesAsync();

        var loaded = await theSession.Query<Invoice>().FirstOrDefaultAsync(x => x.Id == invoice.Id);

        loaded
            .Name.ShouldBe(invoice.Name);
    }

    [Fact]
    public async Task load_many()
    {
        var invoice1 = new Invoice{Name = Guid.NewGuid().ToString()};
        var invoice2 = new Invoice{Name = Guid.NewGuid().ToString()};
        var invoice3 = new Invoice{Name = Guid.NewGuid().ToString()};
        theSession.Store(invoice1, invoice2, invoice3);

        await theSession.SaveChangesAsync();

        var results = await theSession
            .Query<Invoice>()
            .Where(x => x.Id.IsOneOf(invoice1.Id, invoice2.Id, invoice3.Id))
            .ToListAsync();
        
        results.Count.ShouldBe(3);
    }

    [Fact]
    public async Task use_in_LINQ_order_clause()
    {
        var invoice = new Invoice{Name = Guid.NewGuid().ToString()};
        theSession.Store(invoice);

        await theSession.SaveChangesAsync();

        var loaded = await theSession.Query<Invoice>().OrderBy(x => x.Id).Take(3).ToListAsync();
    }

There’s a world of use case permutations yet to go (bulk writing, numeric identities with HiLo generation, Include() queries, more LINQ scenarios, magically adding JSON serialization converters, using StrongTypedId as well), but I think we’ve got a solid start on a long asked for feature that I’ve previously been leery of building out.

Multi-Tenancy: Database per Tenant with Marten

This is continuing a series about multi-tenancy with Marten, Wolverine, and ASP.Net Core:

  1. What is it and why do you care?
  2. Marten’s “Conjoined” Model
  3. Database per Tenant with Marten (this post)

In the previous post we learned how to keep all the document or event data for each tenant in the same database, but using Marten’s “conjoined multi-tenancy” model to keep the data separated. This time out, let’s go for a much higher degree of separation by using a completely different database for each tenant with Marten.

Marten has a couple different recipes for “database per tenant multi-tenancy”, but let’s start with the simplest possible model where we’ll explicitly tell Marten about every single tenant by its id (the tenant_id values) and a connection string to that tenant’s specific database:

var builder = Host.CreateApplicationBuilder();
var configuration = builder.Configuration;
builder.Services.AddMarten(opts =>
{
    // Setting up Marten to "know" about five different tenants
    // and the database connection string for each
    opts.MultiTenantedDatabases(tenancy =>
    {
        tenancy.AddSingleTenantDatabase(configuration.GetConnectionString("tenant1"), "tenant1");
        tenancy.AddSingleTenantDatabase(configuration.GetConnectionString("tenant2"), "tenant2");
        tenancy.AddSingleTenantDatabase(configuration.GetConnectionString("tenant3"), "tenant3");
        tenancy.AddSingleTenantDatabase(configuration.GetConnectionString("tenant4"), "tenant4");
        tenancy.AddSingleTenantDatabase(configuration.GetConnectionString("tenant5"), "tenant5");
    });
});

using var host = builder.Build();
await host.StartAsync();

Just like in the post on conjoined tenancy, you can open a Marten document session (Marten’s unit of work abstraction for most typical operations) by supplying the tenant id like so:

// This was a recent convenience method added to 
// Marten to fetch the IDocumentStore singleton
var store = host.DocumentStore();

// Open up a Marten session to the database for "tenant1"
await using var session = store.LightweightSession("tenant1");

With that session object above, you can query all the data in that one specific tenant, or write Marten documents or events to that tenant database — and only that tenant database.

Now, to answer some questions you might have:

  • Marten’s DocumentStore is still a singleton registered service in your application’s IoC container that “knows” about multiple databases that are assumed to be identical. DocumentStore is an expensive object to create, and an important part of Marten’s multi-tenancy strategy was to ensure that you only ever needed one — even with multiple tenant databases
  • Marten is able to track the schema object creation completely separately for each tenant database, so the “it just works” default mode where Marten is completely able to do database migrations for you on the fly also “just works” with the multi-tenancy by separate database approach
  • Marten’s (really Weasel‘s) command line tooling is absolutely able to handle multiple tenant databases. You can either migrate or patch all databases, or one database at a time through the command line tools
  • Marten’s Async Daemon background processing of event projections is perfectly capable of managing the execution against multiple databases as well
  • We’ll get into this in a later post, but it’s also possible to do two layers of multi-tenancy by combining both separate databases and conjoined multi-tenancy

Moving to a bit more complex case, let’s use Marten’s relatively recent “master table tenancy” model, which looks up the connection string for each tenant id from a table in a “master” database:

var builder = Host.CreateApplicationBuilder();
var configuration = builder.Configuration;
builder.Services.AddMarten(opts =>
{
    var tenantDatabaseConnectionString = configuration.GetConnectionString("tenants");
    opts.MultiTenantedDatabasesWithMasterDatabaseTable(tenantDatabaseConnectionString);
});

using var host = builder.Build();
await host.StartAsync();

The usage at runtime is identical to any other kind of multi-tenancy in Marten, but this model gives you the ability to add new tenants and tenant databases at runtime without any down time. Marten will still be able to recognize a new tenant id and apply any necessary database changes at runtime.

Summary and What’s Next

Using separate databases for each tenant is a great way to create an even more rigid separation of data. You might opt for this model as a way to:

  • Scale your system better by effectively sharding your customer databases into smaller databases
  • Potentially reduce hosting costs by placing high volume tenants on different hardware than lower volume tenants
  • Meet more rigid security requirements for less risk of tenant data being exposed incorrectly

To the last point, I’ve heard of several cases where regulatory concerns have trumped technical concerns and led teams to choose the tenant per database approach.

Of course, the obvious potential downsides are more complex deployments, more things to go wrong, and maybe higher hosting costs if you’re not careful. Yeah, I know I just said that’s a potential cost savings; that sword can cut both ways, so just be aware of potential hosting cost changes.

As for what’s next, actually quite a bit! In subsequent posts we’ll dig into Wolverine’s multi-tenancy support, detecting the tenant id from HTTP requests, two level tenancy in Marten because that’s possible, and even Wolverine’s ability to spawn virtual actors by tenant id.

For my fellow Gen X’ers out there who keep hearing the words “keep the data separated” and naturally have this song stuck in your head.

One Year of JasperFx Software

After about 25 years or so in the software industry, I finally founded my own business named JasperFx Software last year at about this time with the general strategy of building a services and product company around the “Critter Stack” tools (Marten, Wolverine, Weasel, and soon to be some others) as well as any or all consulting opportunities around server side .NET development or automated testing. After a year, it’s time for a little retrospective.

How’s it going?

Everything, whether technical or company milestones, has taken longer than I’d hoped. To quote the old adage, “it’s darkest right before the dawn.” I was so discouraged about the company right before Christmas that I was strongly considering giving up. Then JasperFx signed a couple big deals that gave me enough space and revenue to keep going, and even allowed JasperFx to bring on Babu Annamalai as a consultant and collaborator.

I’ve long been telling folks who ask me how it’s going that the technology side looks good, but the business is going just well enough to be encouraging but not well enough to feel confident yet. Right now I think I can finally say the business is strong enough to be thinking much more about how to grow and become sustainable than working on exit strategies. To put it more succinctly, I think that potential clients can trust that JasperFx Software is going to be around as a technical partner.

The big goals for this next year are to grow enough to be able to bring on full time colleagues for round the clock support and to continue to grow the “Critter Stack” platform. A big part of that is the planned “Critter Stack Pro” and “CritterWatch” commercial add on products I’m hoping will be demonstrable by the end of the 3rd quarter this year.

Client Work Highlights

JasperFx Software doesn’t exist without its clients — and we could always use more! Here’s some of the highlights of our client work in the past year:

  • By my count just now, we have worked with clients headquartered in six different countries so far, but I’ve already lost track of which countries our client contacts are located in
  • Event sourcing is relatively new, so there’s been a lot of engagement with various clients about the best ways to model their domains with events or how to best use projections for read or write models
  • I’ve gotten to work very closely with a client building an IoT system using event sourcing with Marten, messaging and HTTP services with Wolverine, and Rabbit MQ. That work has led to several improvements especially with Wolverine that have helped reduce repetitive code
  • JasperFx has helped several clients set up automated testing strategies and mentored teams on unit testing practices. Honestly, I think this might be the one way in which we’ve delivered the most “bang for the buck” for our clients. Automated testing strategies and designing for testability are primary areas of expertise within JasperFx Software — and that might be a little lacking in the larger software community these days
  • Multiple clients have complicated requirements for multi-tenancy in their systems, and that’s mightily pushed forward capabilities for both Marten and Wolverine in reaction. The specific highlights have been dynamic multi-tenancy in both tools, multi-level multi-tenancy, HTTP integration, and an effective virtual actor capability per tenant for Wolverine
  • Helped several clients with support contracts with JasperFx Software to use the tools more effectively. This has typically involved issues around transactional integrity, using a transactional outbox, error handling, and generally how to make systems be more robust or more performant
  • Concurrency issues are quite common, and JasperFx has worked with several of our customers to either make their systems more resilient to concurrency or to reduce potential issues around concurrency through design changes. This has been a common enough issue that I’m going to build out a new conference talk gathering the issues and possible solutions called “Surviving Concurrency in Event Driven Architectures”
  • We greatly improved Marten’s integration behind GraphQL endpoints using Hot Chocolate for a client, and we’re continuing on with strategies to use Wolverine for GraphQL mutations with Hot Chocolate
  • JasperFx delivered an early release of the forthcoming “Critter Stack Pro” tools for a client to greatly improve their ability to scale up to very large numbers of expected event data by better distributing load throughout an application cluster
  • Wolverine got an MQTT integration at the request of a JasperFx client
  • I helped a client kick off a big new initiative (that will ultimately be built with Ruby, so that’s been interesting) by facilitating multi-day Event Storming workshops
  • Helping multiple clients evaluate their current application code looking for areas of concern and potential remediation or modernization efforts

If you have a problem (and trust me, you can easily find us), maybe you can engage with JasperFx Software for your own development efforts with an email to sales@jasperfx.net. Or by contacting me directly through Discord, what’s left of Twitter, or wherever.

The State of the Critter Stack

Most of the time I’m focused on what is not already good: soft spots, flaws, outright bugs, and holes in the capabilities of Marten or Wolverine. Occasionally folks pop into our Discord channels just to say positive things, but most of the interactions there or on GitHub are with folks who are having trouble with something at that very moment. Last week I spent some serious time comparing the Critter Stack’s capabilities to some other toolsets for Event Driven Architecture in the .NET ecosystem and a bigger offering in the JVM ecosystem. And you know what?

  • Marten is already the most robust and feature rich toolset for Event Sourcing in the .NET ecosystem. Sure, you can roll the barebones basics for Event Sourcing yourself, but there are a lot of significantly challenging technical issues around projected state data, subscriptions to event data, concurrency protections, and instrumentation that Marten already supports today in a robust way that you would otherwise have to build out yourself. Likewise, you could cobble a lot of what Marten already does today with various libraries (none of which are documented as well or as widely used as Marten) or other event stores, but you’d still end up writing a lot more non-trivial code to glue it all together and fill in holes than you would by just using Marten.
  • Wolverine is a unique server side application framework that can be used to dramatically reduce the amount of boilerplate code that is so common in server side .NET development. Moreover, it’s the perfect toolset for really using the “vertical slice architecture” approach in a simpler way. And if used idiomatically, Wolverine can help you structure your code for easy unit testing by separating your application logic from infrastructure concerns without ridiculously high ceremony Hexagonal/Clean/Onion Architectures.
  • Used together, the Critter Stack is arguably the most feature complete solution for Event Driven Architecture in .NET today — but I’ll weasel this a bit by saying that Actor-based platforms are so conceptually different in approach that I wouldn’t do a straight up comparison
  • The Critter Stack tools can lead to much better testability, both for unit testing and for integration testing with infrastructural elements, that help our users deliver software faster and with more confidence. I believe that the Critter Stack’s approach and support for automated testing has no peers in the .NET space.

The Critter Stack suite is going to grow this year with commercial add on libraries for increased scalability and with a custom monitoring and management tool codenamed “CritterWatch.” If all goes according to plan (and it won’t), the Critter Stack is going to also grow this year with event sourcing alternatives based on at least Sql Server as the underlying persistence with CosmosDb and/or DynamoDb following afterward.

How’d I get here?

Hey, this section is going to get into some pretty serious TMI, so feel free to skip any of this.

For various reasons, maybe because I’m terrible at organizational politics, disinterest in what I was working on, bad luck, or all of the above, I never managed to find a role or a company where I felt like it was my shop and I was happy right where I was at. A noticeable trend that always frustrated me was that I was frequently much better known and more professionally respected in the outside world than where I happened to be employed at the moment.

I’ve known for a long time that I enjoyed my OSS work building shared libraries and frameworks for other developers much more than what I happened to be doing for my real job. What I’ve long wished I could do was get to build development tools, preferably where I was one of the primary drivers of the company and had a major hand in driving the vision. I thought I’d absolutely blown my chance when a very ambitious, prior project failed miserably years ago, and I had been admittedly pretty adrift in my career in the following years.

I had a couple possible opportunities to join Microsoft in their Dev Div division when I was younger, but there was always some personal reason not to take those positions. I’ve occasionally wished that would have worked out, but that was ages ago.

A couple years ago I told a therapist that I thought I was going through a mid-life crisis, and he laughingly told me that I just had some as yet unfulfilled ambitions. About the same time I got a message from another developer then working at a software tools company asking why we weren’t already monetizing Marten because he thought it was already better than certain commercial offerings. Those two incidents, other friends founding their own company at the same time, plus a lot of encouragement from my wife, and a few other breaks led me to finally go off and take a chance on myself and start JasperFx Software. I won’t lie and say it’s been all rainbows and unicorns and that I haven’t struggled mightily with stress this past year, but it’s trending upward right now and I love being able to roll out of bed knowing that I’m working on my technical vision (for at least some of the day!) or working to help clients who respect my input and contributions to their work.

Multi-Tenancy: Marten’s “Conjoined” Model

This is continuing a series about multi-tenancy with Marten, Wolverine, and ASP.Net Core:

  1. What is it and why do you care?
  2. Marten’s “Conjoined” Model (this post)

Let’s say that you definitely have the need for multi-tenanted storage in your system, but don’t expect enough data to justify splitting the tenant data over multiple databases, or maybe you just really don’t want to mess with all the extra overhead of multiple databases.

“Conjoined” is a term I personally coined for Marten years ago and isn’t anything that’s an “official” term in the industry. I’m not aware of any widely used pattern name for this strategy, but there surely is somewhere since this is so common.

This is where Marten’s “Conjoined” multi-tenancy model comes into play. Let’s say that we have a little document in our system named User just to store information about our users:

public class User
{
    public User()
    {
        Id = Guid.NewGuid();
    }

    public List<Friend> Friends { get; set; }

    public string[] Roles { get; set; }
    public Guid Id { get; set; }
    public string UserName { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public string? Nickname { get; set; }
    public bool Internal { get; set; }
    public string Department { get; set; } = "";
    public string FullName => $"{FirstName} {LastName}";
    public int Age { get; set; }

    public DateTimeOffset ModifiedAt { get; set; }

    public void From(User user)
    {
        Id = user.Id;
    }

    public override string ToString()
    {
        return $"{nameof(FirstName)}: {FirstName}, {nameof(LastName)}: {LastName}";
    }
}

Now, the User document certainly needs to be tracked within a single logical tenant, so I’m going to tell Marten to do exactly that:

        // This is the same syntax as configuring Marten
        // through IServiceCollection.AddMarten()
        using var store = DocumentStore.For(opts =>
        {
            // other configuration

            // Make *only* the User document be stored by tenant
            opts.Schema.For<User>().MultiTenanted();
        });

In the case above, I am only telling Marten to make the User document be multi-tenanted as it’s frequently valuable — and certainly possible — for some reference documents to be common for all tenants. If instead we just wanted to say “all documents and the event store should be multi-tenanted,” we can do this:

        using var store = DocumentStore.For(opts =>
        {
            // other configuration

            opts.Policies.AllDocumentsAreMultiTenanted();
            opts.Events.TenancyStyle = TenancyStyle.Conjoined;
        });

Either way, if we’ve established that User should be multi-tenanted, Marten will add a tenant_id column to the storage table for the User document like this:

DROP TABLE IF EXISTS public.mt_doc_user CASCADE;
CREATE TABLE public.mt_doc_user (
    tenant_id           varchar                     NOT NULL DEFAULT '*DEFAULT*',
    id                  uuid                        NOT NULL,
    data                jsonb                       NOT NULL,
    mt_last_modified    timestamp with time zone    NULL DEFAULT (transaction_timestamp()),
    mt_version          uuid                        NOT NULL DEFAULT (md5(random()::text || clock_timestamp()::text)::uuid),
    mt_dotnet_type      varchar                     NULL,
CONSTRAINT pkey_mt_doc_user_tenant_id_id PRIMARY KEY (tenant_id, id)
);

As of Marten 7, Marten also places the tenant_id first in the primary key for more efficient index usage when querying large data tables.

Because the tenant_id is part of the primary key, Marten will happily allow you to use the same identity for documents in different tenants. And even though that’s unlikely with a Guid as the identity, it’s certainly possible with other identity strategies, and early Marten users hit that occasionally.

Let’s see the conjoined tenancy in action:

        // I'm creating a session specifically for a tenant id of
        // "tenant1"
        using var session1 = store.LightweightSession("tenant1");

        // My youngest & I just saw the Phantom Menace in the theater
        var user = new User { FirstName = "Padme", LastName = "Amidala" };

        // Marten itself assigns the identity at this point
        // if the document doesn't already have one
        session1.Store(user);
        await session1.SaveChangesAsync();

        // Let's open a session to a completely different tenant
        using var session2 = store.LightweightSession("tenant2");

        // Try to find the same user we just persisted in the other tenant...
        var user2 = await session2.LoadAsync<User>(user.Id);

        // And it shouldn't exist!
        user2.ShouldBeNull();

That very last call to Marten, trying to load the same User but from the “tenant2” tenant, used this SQL:

select d.id, d.data from public.mt_doc_user as d where id = $1 and d.tenant_id = $2
  : f746f237-ed4f-4aaa-b805-ad05f7ae2cd3
  : tenant2

If you squint really hard, you can see that Marten automatically stuck in a second WHERE filter for the current tenant id. Moreover, if we switch to LINQ and try to query that way like so:

        var user3 = await session2.Query<User>().SingleOrDefaultAsync(x => x.Id == user.Id);
        user3.ShouldBeNull();

Marten is still quietly sticking in that tenant_id == [tenant id] filter for us with this SQL:

select d.id, d.data from public.mt_doc_user as d where (d.tenant_id = $1 and d.id = $2) LIMIT $3;
  $1: tenant2
  $2: bfc53828-d56b-4fea-8d93-e8a22fe2db40
  $3: 2

If you really, really need to do this, you can query across tenants with some special Marten LINQ helpers:

        var all = await session2
            .Query<User>()
            
            // Notice AnyTenant()
            .Where(x => x.AnyTenant())
            .ToListAsync();
        
        all.ShouldContain(x => x.Id == user.Id);

Or for specific tenants:

        var all = await session2
            .Query<User>()

            // Notice the Where()
            .Where(x => x.TenantIsOneOf("tenant1", "tenant2", "tenant3"))
            .ToListAsync();

        all.ShouldContain(x => x.Id == user.Id);

Summary

While I don’t think folks should willy nilly build out the “Conjoined” model from scratch without some caution, Marten’s model is pretty robust after 8-9 years of constant use by a large and, unfortunately for me as the maintainer, creative user base.

I didn’t discuss the Event Sourcing functionality in this post, but do note that Marten’s conjoined tenancy model also applies to Marten’s event store and to the projected documents built by Marten.

In the next post, we’ll branch out to using different databases for different tenants.

Multi-Tenancy: What is it and why do you care?

I’m always on the lookout for ideas about how to endlessly promote both Marten & Wolverine. Since I’ve been fielding a lot of questions, issues, and client requests around multi-tenancy support in both tools the past couple weeks, now seems to be a good time for a new series exploring the existing foundation in both critter stack tools for handling quite a few basic to advanced multi-tenancy scenarios. But first, let’s start by just talking about what the phrase “multi-tenancy” even means for architecting software systems.

In the course of building systems, you’re frequently going to have a single system that needs to serve different sets of users or clients. Some examples I’ve run across have been systems that need to segregate data for different partner companies, different regions within the same company, or just flat out different users like online email services do today.

I don’t know the origin of the terminology, but we refer to those logical separations within the system data as “tenants.”

(My youngest is very quickly outgrowing Dr. Seuss books, but we still read “Because a Bug went Kachoo!”)

It’s certainly going to be important many times to keep the data accessed through the system segregated so that nobody is able to access data that they should not. For example, I probably shouldn’t be able to read your email inbox when I log into my gmail account. For another example from my early career, I worked with an early web application system that was used to gather pricing quotes from my very large manufacturing company’s suppliers for a range of parts. Due to a series of unfortunate design decisions (because a bug went kachoo!), that application did a very poor job of segregating data, and I figured out that some of our suppliers were able to see the quoted prices from their competitors and get unfair advantages.

So we can all agree that mixing up the data between users who shouldn’t see each other’s data is a bad thing, so what can we do about that? The most extreme solution is to just flat out deploy a completely different set of servers for each segregated group of users.

While there are some valid reasons once in a while to do the completely separate deployments, that’s potentially a lot of overhead and extra hosting costs. At best, this is probably only viable for a finite number of deployments (Gmail is certainly not building out a separate web server for every one of us with a Gmail account, for example).

When a single deployed system is able to serve different tenants, we call that “multi-tenancy.”

According to Wikipedia:

Software multitenancy is a software architecture in which a single instance of software runs on a server and serves multiple tenants.

With multi-tenancy, we’re ensuring that one single deployment of the logical service can handle requests for multiple tenants without allowing users from one tenant to inappropriately see data from other tenants.

Roughly speaking, I’m familiar with three different ways to achieve multi-tenancy.

The first approach is to use one database for all tenant data, but to use some sort of tenant id field that just denotes which tenant the data belongs to. This is what I termed “Conjoined Tenancy” in Marten. This approach is simpler in terms of the database deployment and database change management because after all, there’s only one of them! It is potentially more complex within your codebase because your persistence layer will always need to apply filters on the data being modified and accessed by the user and whichever tenant they are part of.

There’s some inherent risk with this approach as developers aren’t perfectly omniscient, and there’s always a chance that we miss some scenarios and let data leak out inappropriately to the wrong users (there’s a sketch below, after the third approach, of what hand-rolling those filters looks like). I think this approach is much more viable when using persistence tooling that has strong support (like Marten!) for this type of “conjoined multi-tenancy” and mostly takes care of the tenancy filters for you.

The second approach is to use a separate schema for each tenant within the same database. I’ve never used this approach myself, and I’m not aware of any tooling in my own .NET ecosystem that supports this approach out of the box. I think this would be a good approach if you were building something on top of a relational database from scratch with a custom data layer — but I think it would be a lot of extra overhead managing the database schema migrations.

The third way to do multi-tenancy is to use a separate database for each tenant, but the single deployed system is smart enough to connect to the correct database throughout its persistence layer based on the current user (or through metadata on messages, as we’ll see in a later entry on Wolverine multi-tenancy).

There are of course some challenges to this approach as well. First off, there are more databases to worry about, and subsequently more overhead for database migrations and management. This approach does give you rock solid data segregation between tenants, and I’ve heard of strong business or regulatory requirements to take this approach even when the data volume wouldn’t require it. As my last statement hints at, we all know that the system database is very commonly the bottleneck for performance and scalability, so segregating different tenant data into separate databases may be a good way to improve the scalability of your system.

It’s obviously going to be more difficult to do any kind of cross-tenant data rollup or summary with the separate database approach, but some cloud providers have specialized infrastructure for database per tenant multi-tenancy.
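
Circling back to the first approach, here’s a sketch of what hand-rolled “conjoined” filtering tends to look like in a custom data layer. Order, OrdersDbContext, and ITenantContext are all hypothetical stand-ins, and the sketch assumes EF Core:

public interface ITenantContext
{
    string TenantId { get; }
}

public class OrderRepository
{
    private readonly OrdersDbContext _db;
    private readonly ITenantContext _tenant;

    public OrderRepository(OrdersDbContext db, ITenantContext tenant)
    {
        _db = db;
        _tenant = tenant;
    }

    public Task<List<Order>> FindOpenOrders()
    {
        return _db.Orders
            // Forget this one filter in any single query and
            // you've just leaked another tenant's data
            .Where(x => x.TenantId == _tenant.TenantId && x.IsOpen)
            .ToListAsync();
    }
}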

A Note about Scalability

I was taught very early on that an effective way to scale systems was to design for any given server to be able to handle all the possible types of operations, then add more servers to the horizontal cluster. I think at the time this was in reaction to several systems we had where teams had tried to scale bigger systems by segregating all operations for one region to one set of servers, and a different set of servers for other regions. The end result was an explosion of deployed servers, and frequently having servers absolutely pegged on CPU or memory when North American factories were in full swing while the servers tasked with handling factories on the Pacific Rim sat completely dormant because their factories were closed. An architecture that can spread all the work across the cluster of running nodes might often be a much cheaper solution in the end than standing up many more nodes that can only service a subset of tenants.

    Then again, you might also want to prioritize some tenants over others, so take everything I just said with a grain of “it depends” salt.

    Thar be Dragons!

    In the next set of posts, I’ll get into first Marten’s, then Wolverine’s capabilities for multi-tenancy, but just know first that there’s a significant set of challenges ahead:

    • Managing multiple database schemas if using separate databases per tenant
    • Needing to use per-tenant filters if using the conjoined storage model for query segregation — and trust me as the author of a persistence tool, there’s plenty of edge case dragons here
    • Detecting the current tenant based on HTTP requests or messaging metadata
    • Communicating the tenant information when using asynchronous messaging
    • Querying across tenants
    • Dynamically spinning up new tenant databases at runtime — or tearing them down! — or even moving them at runtime?!?
    • Isolated data processing by tenant database
    • Multi-level tenancy!?! JasperFx helped a customer build this out with Marten
    • Transactional outbox support in a multi-tenanted world — which Wolverine can do today!

    The two “Critter Stack” tools help with most of these challenges today, and I’ll get around to some discussion about future work to help fill in the more advanced usages that some real users are busy running into right now.

    Wolverine’s Test Support Diagnostics

    I’m working today on fixing a reported bug in Wolverine’s event forwarding from Marten feature. I can’t say that I yet know why this should-be-very-straightforward-and-looks-exactly-like-the-currently-passing-tests bug is happening, but it’s a good time to demonstrate Wolverine’s automated testing support and even how it can help you understand test failures.

    First off, and I’ll admit that there’s some missing context here, I’m setting up a system such that when this message handler is executed:

    public record CreateShoppingList();
    
    public static class CreateShoppingListHandler
    {
        public static string Handle(CreateShoppingList _, IDocumentSession session)
        {
            var shoppingListId = CombGuidIdGeneration.NewGuid().ToString();
            session.Events.StartStream<ShoppingList>(shoppingListId, new ShoppingListCreated(shoppingListId));
            return shoppingListId;
        }
    }
    

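    For a little of that missing context, the event and aggregate types would be roughly this (my own reconstruction from the handler above, not the original test code):

    public record ShoppingListCreated(string Id);

    public class ShoppingList
    {
        public string Id { get; set; }
    }
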
    The configured Wolverine + Marten integration should kick in and publish the event appended in the handler above to the completely different handler shown below, wrapped in Marten’s IEvent so that you can use Marten event store metadata within the secondary, cascaded message:

    public static class IntegrationHandler
    {
        public static void Handle(IEvent<ShoppingListCreated> _)
        {
            // Don't need a body here, and I'll show why not
            // next
        }
    }
    

    Knowing those two things, here’s the test I wrote to reproduce the problem:

        [Fact]
        public async Task publish_ievent_of_t()
        {
            // The "Arrange"
            using var host = await Host.CreateDefaultBuilder()
                .UseWolverine(opts =>
                {
                    opts.Policies.AutoApplyTransactions();
    
                    opts.Services.AddMarten(m =>
                    {
                        m.Connection(Servers.PostgresConnectionString);
                        m.DatabaseSchemaName = "forwarding";
    
                        m.Events.StreamIdentity = StreamIdentity.AsString;
                        m.Projections.LiveStreamAggregation<ShoppingList>();
                    }).UseLightweightSessions()
                    .IntegrateWithWolverine()
                    .EventForwardingToWolverine();
                }).StartAsync();
            
            // The "Act". This method is an extension method in Wolverine
            // specifically for facilitating integration testing that should
            // invoke the given message with Wolverine, then wait until all
            // additional "work" is complete
            var session = await host.InvokeMessageAndWaitAsync(new CreateShoppingList());
    
            // And finally, just assert that a single message of
            // type IEvent<ShoppingListCreated> was executed
            session.Executed.SingleMessage<IEvent<ShoppingListCreated>>()
                .ShouldNotBeNull();
        }
    

    And now, when I run the test — which “helpfully” reproduces the reported bug from earlier today — I get this output:

    System.Exception: No messages of type Marten.Events.IEvent<MartenTests.Bugs.ShoppingListCreated> were received
    
    System.Exception
    No messages of type Marten.Events.IEvent<MartenTests.Bugs.ShoppingListCreated> were received
    Activity detected:
    
    ----------------------------------------------------------------------------------------------------------------------
    | Message Id                             | Message Type                          | Time (ms)   | Event               |
    ----------------------------------------------------------------------------------------------------------------------
    | 018f82a9-166d-4c71-919e-3bcb04a93067   | MartenTests.Bugs.CreateShoppingList   |          873| ExecutionStarted    |
    | 018f82a9-1726-47a6-b657-2a59d0a097cc   | System.String                         |         1057| NoRoutes            |
    | 018f82a9-17b1-4078-9997-f6117fd25e5c   | EventShoppingListCreated              |         1242| Sent                |
    | 018f82a9-166d-4c71-919e-3bcb04a93067   | MartenTests.Bugs.CreateShoppingList   |         1243| ExecutionFinished   |
    | 018f82a9-17b1-4078-9997-f6117fd25e5c   | EventShoppingListCreated              |         1243| Received            |
    | 018f82a9-17b1-4078-9997-f6117fd25e5c   | EventShoppingListCreated              |         1244| NoHandlers          |
    ----------------------------------------------------------------------------------------------------------------------
    

    EDIT: If I’d read this more closely at the time, I would have noticed that the problem was somewhere other than the message routing I first suspected from a too casual read.

    The textual table above is Wolverine telling me what it did do during the failed test. In this case, the output tips me off that there’s some kind of issue with Wolverine’s internal message routing, which should be applying some special rules for IEvent<T> wrappers but was not in this case. While my work on fixing the real bug continues, what I hope you get out of this is how Wolverine tries to help you diagnose test failures by providing diagnostic information about what was actually happening internally during all the asynchronous processing. As a long veteran of test automation efforts, I will vociferously say that it’s important for test automation harnesses to be able to adequately explain the inevitable test failures. Like Wolverine helpfully does.

    Now, back to work trying to actually fix the problem…

    Scheduled Message Delivery with Wolverine

    Wolverine has the ability to schedule the delivery of messages for a later time. While Wolverine certainly isn’t trying to be Hangfire or Quartz.Net, the message scheduling in Wolverine today is valuable for “timeout” messages in sagas, or “retry this evening” type scenarios, or reminders of all sorts.

    If using the Azure Service Bus transport, scheduled messages sent to Azure Service Bus queues or topics will use native Azure Service Bus scheduled delivery. For everything else today, Wolverine is doing the scheduled delivery for you. To make those scheduled messages durable (i.e. not completely lost when the application is shut down), you’re going to want to add message persistence to your Wolverine application, as shown in the sample below using SQL Server:

    // This is good enough for what we're trying to do
    // at the moment
    builder.Host.UseWolverine(opts =>
    {
        // Just normal .NET stuff to get the connection string to our Sql Server database
        // for this service
        var connectionString = builder.Configuration.GetConnectionString("SqlServer");
        
        // Telling Wolverine to build out message storage with Sql Server at 
        // this database and using the "wolverine" schema to somewhat segregate the 
        // wolverine tables away from the rest of the real application
        opts.PersistMessagesWithSqlServer(connectionString, "wolverine");
        
        // In one fell swoop, let's tell Wolverine to make *all* local
        // queues be durable and backed up by Sql Server 
        opts.Policies.UseDurableLocalQueues();
    });
    

    Finally, with all that said, here’s one of the ways to schedule message deliveries:

        public static async Task use_message_bus(IMessageBus bus)
        {
            // Send a message to be sent or executed at a specific time
            await bus.ScheduleAsync(new DebitAccount(1111, 100), DateTimeOffset.UtcNow.AddDays(1));
    
            // Or do the same, but this time express the time as a delay
            await bus.ScheduleAsync(new DebitAccount(1111, 225), 1.Days());
            
            // ScheduleAsync is really just syntactic sugar for this:
            await bus.PublishAsync(new DebitAccount(1111, 225), new DeliveryOptions { ScheduleDelay = 1.Days() });
        }
    

    Or, if you want to utilize Wolverine’s cascading message functionality to keep most if not all of your handler method signatures “pure”, you can use this syntax within message handlers or HTTP endpoints:

        public static IEnumerable<object> Consume(Incoming incoming)
        {
            // Delay the message delivery by 10 minutes
            yield return new Message1().DelayedFor(10.Minutes());
    
            // Schedule the message delivery for a certain time
            yield return new Message2().ScheduledAt(new DateTimeOffset(DateTime.Today.AddDays(2)));
        }
    

    Finally, there’s one last alternative that was primarily meant for saga usage: subclassing TimeoutMessage like so:

    public record EnforceAccountOverdrawnDeadline(Guid AccountId) : TimeoutMessage(10.Days()), IAccountCommand;
    

    By subclassing TimeoutMessage, the message type above is “scheduled” for a later time when it’s returned as a cascading message.
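
    To make the saga usage a little more concrete, here’s a sketch of how a saga might kick off that timeout message and handle it later. The AccountSaga and AccountOverdrawn types here are made up purely for illustration:

    public record AccountOverdrawn(Guid AccountId);

    public class AccountSaga : Saga
    {
        public Guid Id { get; set; }

        // Starts a new saga. The returned TimeoutMessage subclass is
        // cascaded as a scheduled message, 10 days out in this case
        public static (AccountSaga, EnforceAccountOverdrawnDeadline) Start(AccountOverdrawn overdrawn)
        {
            var saga = new AccountSaga { Id = overdrawn.AccountId };
            return (saga, new EnforceAccountOverdrawnDeadline(overdrawn.AccountId));
        }

        public void Handle(EnforceAccountOverdrawnDeadline deadline)
        {
            // Escalate however the business rules dictate, then
            // shut the saga down
            MarkCompleted();
        }
    }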

    Wolverine’s HTTP Model Does More For You

    One of the things I’m wrestling with right now is frankly how to sell Wolverine as a server side toolset. Yes, it’s technically a messaging library like MassTransit or NServiceBus. It can also be used as “just” a mediator tool like MediatR. With Wolverine.HTTP, it’s even an HTTP endpoint framework that’s technically an alternative to FastEndpoints, MVC Core, or Minimal API. You’ve got to categorize Wolverine somehow, and we humans naturally understand something new by comparing it to some older thing we’re already familiar with. In the case of Wolverine though, it’s drastically selling the toolset short to compare it to any of the older application frameworks I rattled off above, because Wolverine fundamentally does much more to remove code ceremony, improve testability throughout your codebase, and generally just let you focus more on core application functionality than those older frameworks do.

    This post was triggered by a conversation I had with a friend last week who told me he was happy with his current toolset for HTTP API creation and couldn’t imagine how Wolverine’s HTTP endpoint model could possibly reduce his efforts. Challenge accepted!

    For just this moment, consider a simplistic HTTP service that works on this little entity:

    public record Counter(Guid Id, int Count);
    

    Now, let’s build an HTTP endpoint that will:

    1. Receive route arguments for the Counter.Id and the current tenant id, because of course we’re using multi-tenancy with a separate database per tenant
    2. Try to look up the existing Counter entity by its id from the right tenant database
    3. If the entity doesn’t exist, return a status code 404 and get out of there
    4. If the entity does exist, increment the Count property and save the entity to the database and return a status code 204 for a successful request with an empty body

    Just to make it easier on me because I already had this example code, we’re going to use Marten for persistence, which happens to have much stronger multi-tenancy built in than EF Core. Knowing all that, here’s a sample MVC Core controller implementing the functionality I described above:

    public class CounterController : ControllerBase
    {
        [HttpPost("/api/tenants/{tenant}/counters/{id}")]
        [ProducesResponseType(204)] // empty response
        [ProducesResponseType(404)]
        public async Task<IResult> Increment(
            Guid id, 
            string tenant, 
            [FromServices] IDocumentStore store)
        {
            // Open a Marten session for the right tenant database
            await using var session = store.LightweightSession(tenant);
            var counter = await session.LoadAsync<Counter>(id, HttpContext.RequestAborted);
            if (counter == null)
            {
                return Results.NotFound();
            }
        else
        {
            counter = counter with { Count = counter.Count + 1 };

            // Marten's lightweight sessions don't do dirty checking, so
            // the updated document has to be explicitly stored
            session.Store(counter);
            await session.SaveChangesAsync(HttpContext.RequestAborted);
            return Results.Empty;
        }
        }
    }
    

    I’m completely open to recreating the multi-tenancy support from the Marten + Wolverine combo for EF Core and SQL Server through Wolverine, but I’m shamelessly waiting until another company is willing to engage with JasperFx Software to deliver that.

    Alright, now let’s switch over to using Wolverine.HTTP with its WolverineFx.Http.Marten add-on NuGet. Let’s drink some Wolverine koolaid and write a functionally identical endpoint the Wolverine way:

    You need Wolverine 2.7.0 for this by the way!

        [WolverinePost("/api/tenants/{tenant}/counters/{id}")]
        public static IMartenOp Increment([Document(Required = true)] Counter counter)
        {
            counter = counter with { Count = counter.Count + 1 };
            return MartenOps.Store(counter);
        }
    

    Seriously, this is the same functionality and even the same generated OpenAPI documentation. Some things to note:

    • Wolverine is able to derive much more of the OpenAPI documentation from the type signatures and from policies applied to the endpoint method, like…
    • The usage of the Document(Required = true) tells Wolverine that it will be trying to load a document of type Counter from Marten, and by default it’s going to do that through a route argument named “id”. The Required property tells Wolverine to return a 404 NotFound status code automatically if the Counter document doesn’t exist. This attribute usage also applies some OpenAPI smarts to tag the route as potentially returning a 404
    • The return value of the method is an IMartenOp “side effect” just saying “go save this document”, which Wolverine will do as part of this endpoint execution. Using the side effect makes this method a nice, simple pure function that’s completely synchronous. No wrestling with async Task, await, or schlepping around CancellationToken every which way
    • Because Wolverine can see there will not be any kind of response body, it’s going to use a 204 status code to denote the empty body and tag the OpenAPI with that as well.
    • There is absolutely zero Reflection happening at runtime because Wolverine is generating and compiling code at runtime (or ahead of time for faster cold starts) that “bakes” in all of this knowledge for fast execution
    • Wolverine + Marten has far more robust support for multi-tenancy all the way through the technology stack than any other application framework I know of in .NET (web frameworks, mediators, or messaging libraries). You can see that in the code above, where Marten & Wolverine already know how to detect the tenant from an HTTP request and do all the wiring for you through the entire stack so you can focus on just writing business functionality.

    To make this all more concrete, here’s the generated code:

    // <auto-generated/>
    #pragma warning disable
    using Microsoft.AspNetCore.Routing;
    using System;
    using System.Linq;
    using Wolverine.Http;
    using Wolverine.Marten.Publishing;
    using Wolverine.Runtime;
    
    namespace Internal.Generated.WolverineHandlers
    {
        // START: POST_api_tenants_tenant_counters_id_inc2
        public class POST_api_tenants_tenant_counters_id_inc2 : Wolverine.Http.HttpHandler
        {
            private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
            private readonly Wolverine.Runtime.IWolverineRuntime _wolverineRuntime;
            private readonly Wolverine.Marten.Publishing.OutboxedSessionFactory _outboxedSessionFactory;
    
            public POST_api_tenants_tenant_counters_id_inc2(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Runtime.IWolverineRuntime wolverineRuntime, Wolverine.Marten.Publishing.OutboxedSessionFactory outboxedSessionFactory) : base(wolverineHttpOptions)
            {
                _wolverineHttpOptions = wolverineHttpOptions;
                _wolverineRuntime = wolverineRuntime;
                _outboxedSessionFactory = outboxedSessionFactory;
            }
    
    
    
            public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
            {
                var messageContext = new Wolverine.Runtime.MessageContext(_wolverineRuntime);
                // Building the Marten session
                await using var documentSession = _outboxedSessionFactory.OpenSession(messageContext);
                if (!System.Guid.TryParse((string)httpContext.GetRouteValue("id"), out var id))
                {
                    httpContext.Response.StatusCode = 404;
                    return;
                }
    
    
                var counter = await documentSession.LoadAsync<Wolverine.Http.Tests.Bugs.Counter>(id, httpContext.RequestAborted).ConfigureAwait(false);
                // 404 if this required object is null
                if (counter == null)
                {
                    httpContext.Response.StatusCode = 404;
                    return;
                }
    
                
                // The actual HTTP request handler execution
                var martenOp = Wolverine.Http.Tests.Bugs.CounterEndpoint.Increment(counter);
    
                if (martenOp != null)
                {
                    
                    // Placed by Wolverine's ISideEffect policy
                    martenOp.Execute(documentSession);
    
                }
    
                
                // Commit any outstanding Marten changes
                await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);
    
                
                // Have to flush outgoing messages just in case Marten did nothing because of https://github.com/JasperFx/wolverine/issues/536
                await messageContext.FlushOutgoingMessagesAsync().ConfigureAwait(false);
    
                // Wolverine automatically sets the status code to 204 for empty responses
                if (!httpContext.Response.HasStarted) httpContext.Response.StatusCode = 204;
            }
    
        }
    
        // END: POST_api_tenants_tenant_counters_id_inc2
        
        
    }
    
    

    Summary

    Wolverine isn’t “just another messaging library / mediator / HTTP endpoint alternative.” Rather, Wolverine is a completely different animal that, while fulfilling those application framework roles for server side .NET, potentially does a helluva lot more than older frameworks to help you write systems that are maintainable, testable, and resilient. And it does all of that with a lot less of the typical “Clean/Onion/Hexagonal Architecture” cruft that shines in software conference talks and YouTube videos but helps lead teams into a morass of unmaintainable code in larger, real-world systems.

    But yes, the Wolverine community needs to find a better way to communicate how Wolverine adds value above and beyond the more traditional server side application frameworks in .NET. I’m completely open to suggestions — and fully aware that some folks won’t like the “magic” in the “drank all the Wolverine Koolaid” approach I used.

    You can of course use Wolverine with 100% explicit code and none of the magic.
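
    As a quick for instance, here’s a rough sketch of that same counter endpoint written in the explicit style, injecting the Marten session directly and skipping the side effect magic (and assuming the Wolverine + Marten integration is handing you a session already pointed at the right tenant database):

    [WolverinePost("/api/tenants/{tenant}/counters/{id}")]
    public static async Task<IResult> Increment(
        Guid id,
        IDocumentSession session,
        CancellationToken token)
    {
        var counter = await session.LoadAsync<Counter>(id, token);
        if (counter == null) return Results.NotFound();

        counter = counter with { Count = counter.Count + 1 };
        session.Store(counter);
        await session.SaveChangesAsync(token);

        return Results.Empty;
    }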