Command Line Support for Marten Projections

Marten 5.7 was published earlier this week with mostly bug fixes. The one big new piece of functionality was an improved version of the command line support for event store projections. Marten added support for multi-tenancy through multiple databases, and for separate document stores in one application, as part of our V5 release earlier this year, but the projections command never caught up to those features. Now it can, with Marten v5.7.0.

From a sample project in Marten we use to test this functionality, here’s part of the Marten setup that has a mix of asynchronous and inline projections, as well as uses the database per tenant strategy:

services.AddMarten(opts =>
{
    opts.AutoCreateSchemaObjects = AutoCreate.All;
    opts.DatabaseSchemaName = "cli";

    // Note this app uses multiple databases for multi-tenancy
    // (the connection string variable below is assumed)
    opts.MultiTenantedWithSingleServer(connectionString)
        .WithTenants("tenant1", "tenant2", "tenant3");

    // Register all event store projections ahead of time
    opts.Projections.Add(new TripAggregationWithCustomName(), ProjectionLifecycle.Async);
    opts.Projections.Add(new DayProjection(), ProjectionLifecycle.Async);
    opts.Projections.Add(new DistanceProjection(), ProjectionLifecycle.Async);

    opts.Projections.Add(new SimpleAggregate(), ProjectionLifecycle.Inline);

    // This is actually important to register "live" aggregations too for the code generation
    // (the aggregate type here is a stand-in; the original sample elided this line)
    opts.Projections.SelfAggregate<Trip>(ProjectionLifecycle.Live);
});

At this point, let’s introduce the Marten.CommandLine Nuget dependency to the system just to add Marten related command line options directly to our application for typical database management utilities. Marten.CommandLine brings with it a dependency on Oakton that we’ll actually use as the command line parser for our built in tooling. Using the now “old-fashioned” pre-.NET 6 manner of running a console application, I add Oakton to the system like this:

public static Task<int> Main(string[] args)
{
    // Use Oakton for running the command line
    return CreateHostBuilder(args).RunOaktonCommands(args);
}

When you use the dotnet command line tooling, just keep in mind that the "--" separator you see me using here divides options meant for the dotnet executable itself on the left from arguments passed into the application itself on the right.

Now, turning to the command line at the root of our project, I’m going to type out this command to see the Oakton options for our application:

dotnet run -- help

Which gives us this output:

If you’re wondering, the commands db-apply and marten-apply are synonyms; the older marten-prefixed names were kept so as not to break existing users when we introduced the newer, more generic "db" commands.

And next I’m going to see the usage for the projections command with dotnet run -- help projections, which gives me this output:

For the simplest usage, I’m just going to list off the known projections for the entire system with dotnet run -- projections --list:

Which will show us the four registered projections in the main IDocumentStore, and tells us that there are no registered projections in the separate IOtherStore.
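As a quick refresher on the "separate document stores" feature, additional stores are registered through Marten's AddMartenStore<T>() mechanism. Here's a minimal sketch of how an IOtherStore registration might look; the interface body and connection string variable below are placeholders of mine rather than code from the actual sample project:

// The marker interface just needs to inherit from IDocumentStore
public interface IOtherStore : IDocumentStore
{
}

// Inside of the service registrations
services.AddMartenStore<IOtherStore>(opts =>
{
    // A hypothetical, separate database and schema
    opts.Connection(otherConnectionString);
    opts.DatabaseSchemaName = "other";
});

The projections command sees any store registered this way in addition to the main IDocumentStore.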

Now, I’m just going to continuously run the asynchronous projections for the entire application (while another process constantly pumps random events into the system so there’s always new work to do) with dotnet run -- projections, which will spit out this continuously updating table (with an assist from Spectre.Console):

What I hope you can tell here is that every asynchronous projection is actively running for each separate tenant database. The blue "High Water Mark" figure shows how far the event store for each database has advanced.

And finally, the main reason why I tackled the projections command line overhaul last week: folks needed a way to rebuild projections for every database when using a database per tenant strategy.

While the new projections command will happily let you rebuild any combination of database, store, and projection name by flags or even an interactive mode, we can quickly trigger a full rebuild of all the asynchronous projections with dotnet run -- projections --rebuild, which is going to loop through every store and database like so:

For the moment, the rebuild works on all the projections for a single database at a time. I’m sure we’ll attempt some optimizations of the rebuilding process and look at how much more of it can really be parallelized, but for right now, our users have an out of the box way to rebuild projections across separate databases or separate stores.

This *might* be a YouTube video soon just to kick off my new channel for Marten/Jasper/Oakton/Alba/Lamar content.

A Vision for Stateful Resources at Development or Deployment Time

As is not atypical, I found a couple little issues with both Oakton and Jasper in the course of writing this post. To that end, if you want to use the functionality shown here yourself, just make sure you’re on at least Oakton 4.6.1 and Jasper 2.0-alpha-3.

I’ve spit out quite a bit of blogging content over the past several weeks on both Marten and Jasper.

I’ve been showing some new integration between Jasper, Marten, and Rabbit MQ. This time out, I want to show the new "stateful resource" model in a third tool named Oakton that removes development and deployment time friction when using these tools on a software project. Oakton itself is a command line processing tool that, more importantly, can be used to quickly add command line utilities directly to your .Net executable.

Drawing from a sample project in the Jasper codebase, here’s the configuration for an issue tracking application that uses Jasper, Marten, and RabbitMQ with Jasper’s inbox/outbox integration using Postgresql:

using IntegrationTests;
using Jasper;
using Jasper.Persistence.Marten;
using Jasper.RabbitMQ;
using Marten;
using MartenAndRabbitIssueService;
using MartenAndRabbitMessages;
using Oakton;
using Oakton.Resources;

var builder = WebApplication.CreateBuilder(args);


builder.Host.UseJasper(opts =>
{
    // I'm setting this up to publish to the same process
    // just to see things work
    // (the fluent receiver below was elided in the original text)
    opts.PublishAllMessages()
        .ToRabbitExchange("issue_events", exchange => exchange.BindQueue("issue_events"));

    opts.UseRabbitMq(factory =>
    {
        // Just connecting with defaults, but showing
        // how you *could* customize the connection to Rabbit MQ
        factory.HostName = "localhost";
        factory.Port = 5672;
    });
});

// This is actually important, this directs
// the app to build out all declared Postgresql and
// Rabbit MQ objects on start up if they do not already
// exist
builder.Services.AddResourceSetupOnStartup();

// Just pumping out a bunch of messages so we can see
// statistics (the Worker hosted service name is assumed)
builder.Services.AddHostedService<Worker>();

builder.Services.AddMarten(opts =>
{
    // I think you would most likely pull the connection string from
    // configuration like this:
    // var martenConnectionString = builder.Configuration.GetConnectionString("marten");
    // opts.Connection(martenConnectionString);

    // Servers comes from the IntegrationTests helper referenced above
    opts.Connection(Servers.PostgresConnectionString);
    opts.DatabaseSchemaName = "issues";

    // Just letting Marten know there's a document type
    // so we can see the tables and functions created on startup
    opts.RegisterDocumentType<Issue>();
})
    // I'm putting the inbox/outbox tables into a separate "issue_service" schema
    // (using the UseWithJasper() extension discussed later in this series;
    // the schema argument is assumed)
    .UseWithJasper("issue_service");

var app = builder.Build();

app.MapGet("/", () => "Hello World!");

// Actually important to return the exit code here!
return await app.RunOaktonCommands(args);

Just to describe what’s going on, the .NET code above is going to depend on:

  1. A Postgresql database with the necessary tables and functions that Marten needs to be able to persist issue data
  2. Additional tables in the Postgresql database for persisting the outgoing and incoming messages in the inbox/outbox usage
  3. A Rabbit MQ broker with the necessary exchanges, queues, and bindings for the issue application as it’s configured

In a perfect world, scratch that, in an acceptable world, a developer should be able to start from a fresh clone of this issue tracking codebase and be able to run the system and/or any integration tests locally almost immediately with very minimal friction along the way.

At this point, I’m a big fan of trying to run development infrastructure in Docker where it’s easy to spin things up on demand, and just as quickly shut it all down when you no longer need it. To that end, let’s just say we’ve got a docker-compose.yml file for both Postgresql and Rabbit MQ. Having that, I’ll type docker compose up -d from the command line to spin up both infrastructure elements.
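For reference, a minimal docker-compose.yml for this pair of services might look something like the sketch below. The image tags and port mappings are just common defaults rather than anything required by Jasper or Marten:

version: '3'
services:
  postgresql:
    image: postgres:14
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=postgres

  rabbitmq:
    # the "management" tag also gives you Rabbit MQ's admin web UI
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
      - "15672:15672"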

Cool, but now I need to have the database schemas built out with Marten tables and the Jasper inbox/outbox tables plus the Rabbit MQ queues for the application. This is where Oakton and its new “stateful resource” model comes into play. Jasper’s Rabbit MQ plugin and the inbox/outbox storage both expose Oakton’s IStatefulResource interface for easy setup. Likewise, Marten has support for this model as well (in this case it’s just a very slim wrapper around Marten’s longstanding database schema management functionality).

If you’re not familiar with this, the double dash "--" argument in dotnet run tells .NET which arguments apply to the dotnet executable itself (like "run"), while everything to the right of the "--" is passed into the application itself.

Opening up the command line terminal of your preference to the root of the project, I type dotnet run -- help to see what options are available in our Jasper application through the usage of Oakton:

There’s a couple commands up there that will help us out with the database management, but I want to focus on the resources command. To that end, I’m going to type dotnet run -- resources list just to see what resources our issue tracker application has:

Just through the configuration up above, the various Jasper elements have registered "stateful resource" adapters with Oakton for the underlying Marten database, the inbox/outbox data ("Envelope Storage" above), and Rabbit MQ.

In the next case, I’m going to use dotnet run -- resources check to see if all our infrastructure is configured the way our application needs — and I’m going to do this without first starting the database or the message broker, so this should fail spectacularly!

Here’s the summary output:

If you were to scroll up a bit, you’d see a lot of exceptions thrown describing what’s wrong (helpfully color coded by Spectre.Console) including this one explaining that an expected Rabbit MQ queue is missing:

So that’s not good. No worries though, I’ll start up the docker containers, then go back to the command line and type:

dotnet run -- resources setup

And here’s some of the output:

Forget the command line…

If you’ll notice the single line of code `builder.Services.AddResourceSetupOnStartup();` in the bootstrapping code above, that’s adding a hosted service to our application from Oakton that will verify and apply all configured setup for the known Marten database, the inbox/outbox storage, and the required Rabbit MQ objects. No command line chicanery necessary. I’m hopeful that this will help developers be more productive by dealing with this kind of environmental setup directly inside the application itself rather than recreating the definition of what’s necessary in external scripts.

This was a fair amount of work, so I’d very much welcome any kind of feedback here.

A Vision for Low Ceremony CQRS with Event Sourcing

Let me just put this stake into the ground. The combination of Marten and Jasper will quickly become the most productive and lowest ceremony tooling for event sourcing and CQRS architectures on the .NET stack.

And for any George Strait fans out there, this song may be relevant to the previous paragraph:)

CQRS and Event Sourcing

Just to define some terminology, the Command Query Responsibility Segregation (CQRS) pattern was first described by Greg Young as an approach to software architecture where you very consciously separate "writes" that change the state of the system from "reads" against the system state. The key to CQRS is to use separate models of the system state for writing versus querying. That leads to something I think of as the "scary view of CQRS":

The “scary” view of CQRS

The “query database” in my diagram is meant to be an optimized storage for the queries needed by the system’s clients. The “domain model storage” is some kind of persisted storage that’s optimized for writes — both capturing new data and presenting exactly the current system state that the command handlers would need in order to process or validate incoming commands.

Many folks have a visceral, negative reaction to this kind of CQRS diagram, as it does appear to be more complexity than the more traditional approach where you have a single database model (almost inevitably in a relational database of some kind) and both reads and writes work off the same set of database tables. Hell, I walked out of an early presentation by Greg Young at QCon 2008, about what later came to be known as CQRS, shaking my head and thinking this was all nuts.

Here’s the deal though, when we’re writing systems against the one, true database model, we’re potentially spending a lot of time mapping incoming messages to the stored model, and probably even more time transforming the raw database model into a shape that’s appropriate for our system’s clients in a way that also creates some decoupling between clients and the ugly, raw details of our underlying database. The “scary” CQRS architecture arguably just brings that sometimes hidden work and complexity into the light of day.

Now let’s move on to Event Sourcing. Event sourcing is a style of system persistence where each state change is captured and stored as an explicit event object. There’s all sorts of value from keeping the raw change events around for later, but you of course do need to compile the events into derived, or “projected” models that represent the current system state. When combining event sourcing into a CQRS architecture, some projected models from the raw events can serve as both the “write model” inside of incoming commands, while others can be built as the “query model” that clients will use to query our system.
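If the mechanics of "projecting" are fuzzy, here's a tiny, framework-free sketch of what compiling events into a current state model means. The event and aggregate types here are made up purely for illustration:

public record AccountOpened(Guid AccountId, decimal OpeningBalance);
public record FundsDeposited(Guid AccountId, decimal Amount);
public record FundsWithdrawn(Guid AccountId, decimal Amount);

public class AccountBalance
{
    public Guid Id { get; set; }
    public decimal Balance { get; set; }

    // A "projection" is nothing more than folding the ordered events
    // into the derived state, one event at a time
    public static AccountBalance Project(IEnumerable<object> events)
    {
        var account = new AccountBalance();
        foreach (var e in events)
        {
            switch (e)
            {
                case AccountOpened opened:
                    account.Id = opened.AccountId;
                    account.Balance = opened.OpeningBalance;
                    break;
                case FundsDeposited deposited:
                    account.Balance += deposited.Amount;
                    break;
                case FundsWithdrawn withdrawn:
                    account.Balance -= withdrawn.Amount;
                    break;
            }
        }
        return account;
    }
}

Tooling like Marten does exactly this kind of folding for you, either inline with event capture, asynchronously in a background process, or "live" on demand.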

At the end of this section, I want to be clear that event sourcing and CQRS can be used independently of each other, but to paraphrase Forrest Gump, event sourcing and CQRS go together like peas and carrots.

Now, the event sourcing side of this especially might sound a little scary if you’re not familiar with some of the existing tooling, so let’s move on first to Marten…

Marten has a mature feature set for event sourcing already, and we’ve grown quite a bit in capability with our support for projections (read-only views of the raw events). If you’ll peek back at the "scary" view of CQRS above, Marten’s projection support solves the problem of keeping the raw events synchronized to both any kind of "write model" for incoming commands and the richer "query model" views for system clients, all within a single database. In my opinion, Marten takes the "scary" out of event sourcing, and we’re on our way to being the best possible "event sourcing in a box" solution for .NET.

As that support has gotten more capable and robust though, our users have frequently been asking about how to subscribe to events being captured in Marten and how to relay those events reliably to some other kind of infrastructure — which could be a completely separate database, outgoing message queues, or some other kind of event streaming scenario.

Oskar Dudycz and I have been discussing how to solve this in Marten, and we’ve come up with these three main mechanisms for supporting subscriptions to event data:

  1. Some or all of the events being captured by Marten should be forwarded automatically at the time of event capture (IDocumentSession.SaveChangesAsync())
  2. When strict ordering is required, we’ll need some kind of integration into Marten’s async daemon to relay events to external infrastructure
  3. For massive amounts of data with ambitious performance targets, use something like Debezium to directly stream Marten events from Postgresql to Kafka/Pulsar/etc.

Especially for the first item in that list above, we need some kind of outbox integration with Marten sessions to reliably relay events from Marten to outgoing transports while keeping the system in a consistent state (i.e., don’t publish the events if the transaction fails).

Fortunately, there’s a functional outbox implementation for Marten in…

To be clear, Jasper as a project was pretty well mothballed by the combination of COVID and the massive slog toward Marten 4.0 (and then a smaller slog to 5.0). However, I’ve been able to get back to Jasper, and yesterday kicked out a new Jasper 2.0.0-alpha-2 release along with the post (Re) Introducing Jasper as a Command Bus.

For this post though, I want to explore the potential of the Jasper + Marten combination for CQRS architectures. To that end, a couple weeks ago I published Marten just got better for CQRS architectures, which showed some new APIs in Marten that simplify repetitive code around using Marten event sourcing within CQRS architectures. Part of the sample code in that post was this MVC Core controller that used some newer Marten functionality to handle an incoming command:

public async Task CompleteCharting(
    [FromBody] CompleteCharting charting, 
    [FromServices] IDocumentSession session)
{
    // FetchForExclusiveWriting waits for an exclusive lock on this
    // particular ProviderShift stream (the command's id property name is assumed)
    var stream = await session
        .Events
        .FetchForExclusiveWriting<ProviderShift>(charting.ShiftId);

    // Validation on the ProviderShift aggregate
    if (stream.Aggregate.Status != ProviderStatus.Charting)
        throw new Exception("The shift is not currently charting");

    // We "decided" to emit one new event
    stream.AppendOne(new ChartingFinished(stream.Aggregate.AppointmentId.Value, stream.Aggregate.BoardId));

    await session.SaveChangesAsync();
}

To review, that controller method:

  1. Takes in a command message of type CompleteCharting
  2. Loads the current state of the aggregate ProviderShift model referred to by the incoming command, and does so in a way that takes care of concurrency for us by waiting to get an exclusive lock on the particular ProviderShift
  3. Assuming that the validation against the ProviderShift succeeds, emits a new ChartingFinished event
  4. Saves the pending work with a database transaction

In that post, I pointed out that there were some potential flaws or missing functionality with this approach:

  1. We probably want some error handling to retry the operation if we hit concurrency exceptions or time out trying to get the exclusive lock. In other words, we have to plan for concurrency exceptions
  2. It’d be good to be able to automatically publish the new ChartingFinished event to a queue to take further action within our system (or an external service if we were using messaging here)
  3. Lastly, I’d argue there’s some repetitive code up there that could be simplified

To address these points, I’m going to introduce Jasper and its integration with Marten (Jasper.Persistence.Marten) to the telehealth portal sample from my previous blog post.

I’m going to move the actual handling of the CompleteCharting to a Jasper handler shown below that is functionally equivalent to the controller method shown earlier (except I switched the concurrency protection to being optimistic):

// This is auto-discovered by Jasper
public class CompleteChartingHandler
{
    [MartenCommandWorkflow] // this opts into some Jasper middleware
    public ChartingFinished Handle(CompleteCharting charting, ProviderShift shift)
    {
        if (shift.Status != ProviderStatus.Charting)
            throw new Exception("The shift is not currently charting");

        return new ChartingFinished(charting.AppointmentId, shift.BoardId);
    }
}

And the controller method gets simplified down to just relaying the command to Jasper:

    public Task CompleteCharting(
        [FromBody] CompleteCharting charting, 
        [FromServices] ICommandBus bus)
    {
        // Just delegating to Jasper here
        return bus.InvokeAsync(charting, HttpContext.RequestAborted);
    }

There’s some opportunity here to make the code above a little less repetitive and a little more efficient, maybe by riding on Minimal APIs. That’s for a later date though:)

By using the new [MartenCommandWorkflow] attribute, we’re directing Jasper to surround the command handler with middleware that handles much of the Marten mechanics by:

  1. Loading the aggregate ProviderShift for the incoming CompleteCharting command (I’m omitting some details here for brevity, but there’s a naming convention that can be explicitly overridden to pluck the aggregate identity off the incoming command)
  2. Passing that ProviderShift aggregate into the Handle() method above
  3. Applying the returned event to the event stream for the ProviderShift
  4. Committing the outstanding changes in the active Marten session

The Handle() code above becomes an example of a Decider function. Better yet, it’s completely decoupled from any kind of infrastructure and fully synchronous. I’m going to argue that this approach makes command handlers much easier to unit test, definitely easier to write, and easier to read later because you’re only focused on the business logic.
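To make that testability claim concrete, here's what a unit test against that handler could look like with xUnit. No Marten, no Jasper, no mocks; I'm assuming ProviderShift and CompleteCharting have roughly the members implied by the earlier snippets:

public class CompleteChartingHandlerTests
{
    [Fact]
    public void emits_charting_finished_when_the_shift_is_charting()
    {
        var shift = new ProviderShift
        {
            Status = ProviderStatus.Charting,
            AppointmentId = Guid.NewGuid(),
            BoardId = Guid.NewGuid()
        };

        var command = new CompleteCharting(shift.AppointmentId.Value);

        // The "decision" is nothing but the returned event
        var @event = new CompleteChartingHandler().Handle(command, shift);

        Assert.Equal(shift.BoardId, @event.BoardId);
    }
}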

So that covers the repetitive code problem, but let’s move on to automatically publishing the ChartingCompleted event and some error handling. I’m going to add Jasper through the application’s bootstrapping code as shown below:

builder.Host.UseJasper(opts =>
{
    // I'm choosing to process any ChartingFinished event messages
    // in a separate, local queue with persistent messages for the inbox/outbox
    opts.PublishMessage<ChartingFinished>()
        .ToLocalQueue("charting").DurablyPersistedLocally();

    // If we encounter a concurrency exception, just try it immediately 
    // up to 3 times total (the fluent receivers here are reconstructed)
    opts.Handlers.OnException<ConcurrencyException>().RetryTimes(3);

    // It's an imperfect world, and sometimes transient connectivity errors
    // to the database happen
    opts.Handlers.OnException<NpgsqlException>()
        .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds());
});

Jasper comes with some pretty strong exception handling policy capabilities that you’ll need for grown-up development. In this case, I’m just setting up some global policies in the application to retry message failures on either Marten concurrency exceptions or the inevitable, transient Postgresql connectivity hiccups. In the case of the concurrency exception, you may just need to start the work over to ensure you’re starting from the most recent aggregate changes. I used globally applied policies here, but Jasper will also allow you to override the rules on a message type by message type basis.

Lastly, let’s add the Jasper outbox integration for Marten and opt into automatic event publishing with this bit of configuration chained to the standard AddMarten() usage:

builder.Services.AddMarten(opts =>
{
    // Marten configuration...
})
    // I added this to enroll Marten in the Jasper outbox
    .UseWithJasper()
    // I also added this to opt into events being forwarded to
    // the Jasper outbox during SaveChangesAsync()
    // (this extension name is from the Jasper 2.0 alphas)
    .EventForwardingToJasper();

And that’s actually that. The configuration above will add the Jasper outbox tables to the Marten database for us, and let Marten’s database schema management manage those extra database objects.

Back to the command handler (mildly elided):

public class CompleteChartingHandler
{
    [MartenCommandWorkflow]
    public ChartingFinished Handle(CompleteCharting charting, ProviderShift shift)
    {
        // validation code goes here!

        return new ChartingFinished(charting.AppointmentId, shift.BoardId);
    }
}

By opting into the outbox integration and event forwarding to Jasper from Marten, when this command handler is executed, the ChartingFinished events will be published — in this case just to an in-memory queue, but it could also be to an external transport — with Jasper’s outbox implementation that guarantees that the message will be delivered at least once as long as the database transaction to save the new event succeeds.

Conclusion and What’s Next?

There’s a tremendous amount of work in the rearview mirror to get to the functionality that I demonstrated here, and a substantial amount of ambition to drive this forward in the future. I would love any possible feedback, both positive and negative. Marten is a team effort, but Jasper’s been mostly my baby for the past 3-4 years, and I’d be happy for anybody who would want to get involved with that. I’m way behind on documentation for Jasper and somewhat for Marten, but that’s in flight.

My next couple posts to follow up on this are to:

  • Do a deeper dive into Jasper’s outbox and explain why it’s different and arguably more useful than the outbox implementations in other leading .NET tools
  • Introduce the usage of Rabbit MQ with Jasper for external messaging
  • Take a detour into the development and deployment time command line utilities built into Jasper & Marten through Oakton

(Re) Introducing Jasper as a Command Bus

EDIT 6/15/2022: The correct Nuget is “Jasper.Persistence.Marten”

I just released a second alpha of Jasper 2.0 to Nuget. You can find the project goals for Jasper 2.0 here, and an update from a couple weeks ago here. Be aware that the published documentation for Jasper is very, very far behind. I’m working on it:)

Jasper is a long running open source project with the goal of creating a low ceremony, highly productive framework for building systems in .Net that would benefit from either an in memory command bus or asynchronous messaging. The big driver for Jasper right now is using it in combination with the event sourcing capabilities of Marten as a full stack CQRS architectural framework. Later this week I’ll preview the ongoing Marten + Jasper integration, but for today I’d like to just introduce Jasper itself a little bit.

For a simplistic sample application, let’s say that we’re building an online system for potential customers to make reservations at any number of participating restaurants. I’m going to start by laying down a brand new .Net 6 Web API project. I’m obviously going to choose Marten as my persistence tooling, so the next step is to add a reference to the Jasper.Persistence.Marten Nuget, which brings in transitive dependencies for both Jasper and Marten.
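In terms of actual commands, that setup is just this pair (the --prerelease flag matters because Jasper 2.0 is only published as alpha Nugets right now):

dotnet new webapi
dotnet add package Jasper.Persistence.Marten --prerelease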

Jasper also has some working integration with EF Core using either Sql Server or Postgresql as the backing store so far.

Let’s build an online reservation system for your favorite restaurants!

Let’s say that as a result of an event storming requirements session, we’ve determined that we want both a command message to confirm a reservation, and a corresponding event message out to the internal systems of the various restaurants. I’m going to eschew event sourcing to keep this simpler and just opt for a persistent Reservation document in Marten. All that being said, here’s our code to model everything I just described:

public record ConfirmReservation(Guid ReservationId);
public record ReservationConfirmed(Guid ReservationId);

public class Reservation
{
    public Guid Id { get; set; }
    public DateTimeOffset Time { get; set; }
    public string RestaurantName { get; set; }
    public bool IsConfirmed { get; set; }
}

Building Our First Message Handlers

In this contrived example, the ReservationConfirmed event message will be published separately because it spawns a call to an external system where I’d strongly prefer to have a separate “retry loop” around just that call. That being said, this is the first cut for a command handler for the ConfirmReservation message:

public class ConfirmReservationHandler
{
    public async Task Handle(ConfirmReservation command, IDocumentSession session, IExecutionContext publisher)
    {
        var reservation = await session.LoadAsync<Reservation>(command.ReservationId);

        reservation!.IsConfirmed = true;

        // Watch out, this could be a race condition!!!!
        await publisher.PublishAsync(new ReservationConfirmed(reservation.Id));

        // We're coming back to this in a bit......
        await session.SaveChangesAsync();
    }
}

To be technical, Jasper uses an in memory outbox for all message processing even when there’s no durable message storage, which at least guarantees that outgoing messages are only published when the original message is successfully handled. I just wanted to show the potential danger here.

So a couple things to note that are different from existing tools like NServiceBus or MassTransit:

  • Jasper locates message handlers strictly through naming conventions. Public methods named either Handle() or Consume() on public types that are suffixed by Handler or Consumer. There are no mandatory attributes or interfaces. Hell, there’s not even a mandatory method signature except that the first argument is always assumed to be the message type.
  • Jasper Handle() methods can happily support method injection, meaning that the IDocumentSession parameter above is pushed into the method from Jasper itself. In my experience, using method injection frequently simplifies the message handler code as opposed to the idiomatic C# approach of using constructor injection and relaying things through private fields.
  • Message types in Jasper are just concrete types, and there’s no necessary Event or Message base classes of any type — but that may be introduced later strictly for optional diagnostics.

Lastly, notice my comment about the race condition between publishing the outgoing ReservationConfirmed event message and committing the database work through IDocumentSession.SaveChangesAsync(). That’s obviously a problem waiting to bite us, so we’ll come back to that.

Next, let’s move on to the separate handler for ReservationConfirmed:

[RetryNow(typeof(HttpRequestException), 50, 100, 250)]
[LocalQueue("Notifications")] // discussed a little further down
public class ReservationConfirmedHandler
{
    public async Task Handle(ReservationConfirmed confirmed, IQuerySession session, IRestaurantProxy restaurant)
    {
        var reservation = await session.LoadAsync<Reservation>(confirmed.ReservationId);

        // Make a call to an external web service through a proxy
        await restaurant.NotifyRestaurant(reservation);
    }
}

All this handler does is look up the current state of the reservation and post that to an external system through a proxy interface (IRestaurantProxy).

As I said before, I strongly prefer that calls out to external systems be isolated in their own retry loops. In this case, the [RetryNow] attribute is setting up Jasper to retry a command through this handler on transient HttpRequestException errors with a 50, 100, and then 250 millisecond cooldown period between attempts. Jasper’s error handling policies go much deeper than this, but hopefully you can already see what’s possible.

The usage of the [LocalQueue("Notifications")] attribute is directing Jasper to execute the ReservationConfirmed messages in a separate, local queue named “Notifications”. In effect, we’ve got a producer/consumer solution between the incoming ConfirmReservation command and ReservationConfirmed event messages. The local queueing is done with the TPL Dataflow library. Maybe Jasper will eventually move to using System.Threading.Channels, but for right now there’s just bigger issues to worry about.
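If TPL Dataflow is new to you, the conceptual model of one of these local queues is close to a plain ActionBlock that executes posted messages on background threads. This sketch is only meant to illustrate the producer/consumer mechanics, not Jasper's actual internals; HandleAsync() below is a stand-in for Jasper finding and invoking the matching handler:

using System.Threading.Tasks.Dataflow;

// A local "queue" that processes messages on background threads,
// decoupled from whoever posted them
var notifications = new ActionBlock<ReservationConfirmed>(
    message => HandleAsync(message),
    new ExecutionDataflowBlockOptions
    {
        MaxDegreeOfParallelism = 5
    });

// Producers just post a message and move on without waiting
notifications.Post(new ReservationConfirmed(Guid.NewGuid()));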

Don’t fret if you don’t care for sprinkling attributes all over your code; all of the configuration I’ve done above with attributes can also be done with a fluent interface at bootstrapping time, or even within the message handler classes themselves.

Bootstrapping Jasper

Stepping into the Program.cs file for our new system, I’m going to add bootstrapping for both Marten and Jasper in the simplest possible way like so:

using CommandBusSamples;
using Jasper;
using Marten;
using Oakton;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
{
    // The Marten connection configuration was elided here, e.g.
    // opts.Connection(builder.Configuration.GetConnectionString("marten"));
});

// Adding Jasper as a straight up Command Bus with all
// its default configuration
builder.Host.UseJasper();

var app = builder.Build();

// This isn't *quite* the most efficient way to do this,
// but it's simple to understand, so please just let it go...
// I'm just delegating the HTTP request body as a command 
// directly to Jasper's in memory ICommandBus
app.MapPost("/reservations/confirm", (ConfirmReservation command, ICommandBus bus) => bus.InvokeAsync(command));

// This opts into using Oakton for extended command line options for this app
// Oakton is also a transitive dependency of Jasper itself
return await app.RunOaktonCommands(args);

Okay, so let’s talk about some of the things in that code up above:

  • Jasper tries to embrace the generic host building and core abstractions that came with .Net Core (IHostBuilder, ILogger, IHostedService, etc.) wherever possible, hence the integration happens through the UseJasper() call seen above.
  • The call to UseJasper() also quietly sets up Lamar as the underlying IoC container for your application. I won’t get into that much here, but there are optimizations in Jasper’s runtime model that require Lamar.
  • I used Oakton as the command line parser. That’s not 100% necessary, but there are a lot of development time utilities with Oakton for Jasper development. I’ll show some of that in later posts building on this one.

The one single HTTP route calls directly into the Jasper ICommandBus.InvokeAsync() method to immediately execute the message handler inline for the ConfirmReservation message. As someone who’s a skeptic of “mediator” tools in AspNetCore, I’m not sure this really adds much value as the handler for ConfirmReservation is currently written. However, we can add some transient error handling to our application’s bootstrapping that would apply to the ICommandBus.InvokeAsync() calls like so:

builder.Host.UseJasper(opts =>
{
    // Just setting up some retries on transient database connectivity errors
    // (the fluent receiver below is reconstructed)
    opts.Handlers.OnException<NpgsqlException>()
        .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds());
});

We can also opt to make our ConfirmReservation commands be processed in background threads rather than inline through our web request by changing the Minimal API route to:

// This isn't *quite* the most efficient way to do this,
// but it's simple to understand, so please just let it go...
app.MapPost("/reservations/confirm", (ConfirmReservation command, ICommandBus bus) => bus.EnqueueAsync(command));

The EnqueueAsync() method above places the incoming command message into an in-memory queue.

What if the process dies mid-flight?!?

An experienced user of asynchronous messaging tools will have happily spotted several potential problems in the solution so far. One, there’s a potential race condition in the ConfirmReservationHandler code between database changes being committed and the outgoing message being processed. Two, what if the process dies? If we’re using all this in-memory queueing stuff, that all dies when the process dies, right?

Fortunately, Jasper, with some significant help from Postgresql and Marten, already has a robust inbox and outbox implementation that we’ll add next for durable messaging.

For clarity, here’s the original handler code again for the ConfirmReservation message:

public class ConfirmReservationHandler
{
    public async Task Handle(ConfirmReservation command, IDocumentSession session, IExecutionContext publisher)
    {
        var reservation = await session.LoadAsync<Reservation>(command.ReservationId);

        reservation!.IsConfirmed = true;

        // Watch out, this could be a race condition!!!!
        await publisher.PublishAsync(new ReservationConfirmed(reservation.Id));

        // We're coming back to this in a bit......
        await session.SaveChangesAsync();
    }
}

Please note the comment about the race condition in the code. What we need to do is to introduce Jasper’s outbox feature, then revisit this handler.

First though, I need to go back to the bootstrapping code in Program and add a little more code:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
{
    // The Marten configuration was elided here
})
    // NEW! Adding Jasper outbox integration to Marten in the "messages"
    // database schema
    .UseWithJasper("messages");

// Adding Jasper as a straight up Command Bus
builder.Host.UseJasper(opts =>
{
    // Just setting up some retries on transient database connectivity errors
    opts.Handlers.OnException<NpgsqlException>()
        .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds());

    // NEW! Apply the durable inbox/outbox functionality to the two in-memory
    // queues (the queue accessors below are reconstructed;
    // DurablyPersistedLocally() is described in the prose that follows)
    opts.DefaultLocalQueue.DurablyPersistedLocally();
    opts.LocalQueue("Notifications").DurablyPersistedLocally();

    // And I just opened a GitHub issue to make this config easier...
});

var app = builder.Build();

// This isn't *quite* the most efficient way to do this,
// but it's simple to understand, so please just let it go...
app.MapPost("/reservations/confirm", (ConfirmReservation command, ICommandBus bus) => bus.EnqueueAsync(command));

// This opts into using Oakton for extended command line options for this app
// Oakton is also a transitive dependency of Jasper itself
return await app.RunOaktonCommands(args);

This time I chained a call on the Marten configuration through the UseWithJasper() extension method. I also added a couple lines of code within the UseJasper() block to mark the in memory queues as being enrolled in the inbox/outbox mechanics through the DurablyPersistedLocally() method.

Now, back to our handler code. Keeping things explicit for now, I’m going to add some necessary mechanics for opting into outbox sending:

public class ConfirmReservationHandler
{
    public async Task Handle(ConfirmReservation command, IDocumentSession session, IExecutionContext publisher)
    {
        // Enroll the execution context and Marten session in
        // outbox sending
        // This is an extension method in Jasper.Persistence.Marten
        await publisher.EnlistInOutboxAsync(session);

        var reservation = await session.LoadAsync<Reservation>(command.ReservationId);

        reservation!.IsConfirmed = true;

        // No longer a race condition, I'll explain more below:)
        await publisher.PublishAsync(new ReservationConfirmed(reservation.Id));

        // Persist, and kick out the outgoing messages
        await session.SaveChangesAsync();
    }
}

I’m going to utilize Jasper/Marten middleware in the last section to greatly simplify the code above, so please read to the end:)

With this version of the handler code, things are working a little differently:

  • The call to PublishAsync() does not immediately release the message to the in memory queue. Instead, it’s routed and held in memory by the IExecutionContext for later.
  • When IDocumentSession.SaveChangesAsync() is called, the outgoing messages are persisted into the underlying database in the same database transaction as the change to the Reservation document. Using the Marten integration, the Jasper outbox can even do this within the exact same batched database command as a minor performance optimization
  • At the end of IDocumentSession.SaveChangesAsync() upon a successful transaction, the outgoing messages are kicked out into the outgoing message sending queues in Jasper.

The outbox usage here solves a couple issues. First, it eliminates the race condition between the outgoing messages and the database changes. Secondly, it prevents situations of system inconsistency where either the message succeeds and the database changes fail, or vice versa.

I’ll be writing up a follow up post later this week or next diving deeper into Jasper’s outbox implementation. For a quick preview, by taking a very different approach than existing messaging tools in .Net, Jasper’s outbox is already usable in more scenarios than other alternatives and I’ll try to back that assertion up next time. To answer the obvious question in the meantime, Jasper’s outbox gives you an at least once delivery guarantee even if the current process fails.

Streamline the Handler Mechanics

So the "register outbox, publish messages, commit session transaction" dance could easily get repetitive in your code. Jasper’s philosophy is that repetitive code is wasteful, so let’s eliminate the crufty code we were writing strictly for Marten or Jasper in the ConfirmReservationHandler. The code below is the exact functional equivalent to the earlier handler, even down to enrolling in the outbox:

public static class ConfirmReservationHandler
{
    [Transactional]
    public static async Task<ReservationConfirmed> Handle(ConfirmReservation command, IDocumentSession session)
    {
        var reservation = await session.LoadAsync<Reservation>(command.ReservationId);

        reservation!.IsConfirmed = true;

        // Kicking out a "cascaded" message
        return new ReservationConfirmed(reservation.Id);
    }
}

The usage of the [Transactional] attribute opts this one handler into the Marten transactional middleware that handles all the outbox enrollment mechanics and calls IDocumentSession.SaveChangesAsync() for us after the method is called.

By returning Task<ReservationConfirmed>, I’m directing Jasper to publish the returned value as a message upon the successful completion of the incoming message. I personally like this cascading message pattern in Jasper as a way to make unit testing handler code easier. It was based on a much older custom service bus I helped build and run in production in the mid-2010s.
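And to show what that buys us in a test, the only piece of infrastructure left to stub is IDocumentSession itself. Here's a sketch using xUnit and Moq; this is purely illustrative, and any mocking library or a handcrafted stub would do just as well:

[Fact]
public async Task confirming_a_reservation_cascades_the_confirmed_event()
{
    var reservation = new Reservation { Id = Guid.NewGuid() };

    var session = new Mock<IDocumentSession>();
    session.Setup(x => x.LoadAsync<Reservation>(reservation.Id, It.IsAny<CancellationToken>()))
        .ReturnsAsync(reservation);

    var @event = await ConfirmReservationHandler.Handle(
        new ConfirmReservation(reservation.Id), session.Object);

    // The document was mutated, and the cascaded message
    // points at the right reservation
    Assert.True(reservation.IsConfirmed);
    Assert.Equal(reservation.Id, @event.ReservationId);
}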

On the next episode of “please, please pay attention to my OSS projects!”

My very next post is a sequel to Marten just got better for CQRS architectures, but this time using some new Jasper functionality with Marten to strip even more repetitive code out.

Following behind that, I want to write follow ups doing a deeper dive on Jasper’s outbox implementation, using Rabbit MQ with Jasper, then a demonstration of all the copious command line utilities built into both Jasper and Marten.

As my mentor from Dell used to say, “thanks for listening, I feel better now”

My professional and OSS aspirations for 2022

I trot out one of these posts at the beginning of each year, but this time around it’s "aspirations" instead of "plans" because a whole lot of this is going to be a repeat from 2020 and 2021, and I’m not going to lose any sleep over what does or doesn’t get done in the New Year, nor close myself off to brand new opportunities.

In 2022 I just want the chance to interact with other developers. I’ll be at ThatConference in Round Rock, TX (now in May rather than the originally scheduled January) speaking about Event Sourcing with Marten (my first in person conference since late 2019). Other than that, my only goal for the year (Covid-willing) is to maybe speak at a couple more in person conferences just to be able to interact with other developers in real space again.

My peak as a technical blogger was the late aughts, and I think I’m mostly good with not sweating any kind of attempt to regain that level of readership. I do plan to write material that I think would be useful for my shop, or just about what I’m doing in the OSS space when I feel like it.

Which brings me to the main part of this post: my involvement with the JasperFx (Marten, Lamar, etc.) family of OSS projects (plus Storyteller), which takes up most of my extracurricular software related time. Just for an idea of the interdependencies, here are the highlights of the JasperFx world:

.NET Transactional Document DB and Event Store on PostgreSQL

Marten took a big leap forward late in 2021 with the long running V4.0 release. I think that release might have been the single biggest, most complicated OSS release that I’ve ever been a part of — FubuMVC 1.0 notwithstanding. There’s also a 5.0-alpha release out that addresses .Net 6 support and the latest version of Npgsql.

Right now Marten is a victim of its own success, and our chat room is almost constantly hair on fire with activity, which directly led to some planned improvements for V5 (hopefully by the end of January?) in this discussion thread:

  • Multi-tenancy through a separate database per tenant (long planned, long delayed, finally happening now)
  • Some kind of ability to register and resolve services for more than one Marten database in a single application
  • And related to the previous two bullet points, improved database versioning and schema migrations that could accommodate there being more than one database within a single .Net codebase
  • Improve the “generate ahead” model to make it easier to adopt. Think faster cold start times for systems that use Marten

Beyond that, some of the things I’d like to maybe do with Marten this year are:

  • Investigate the usage of Postgresql table partitioning and database sharding as a way to increase scalability — especially with the event sourcing support
  • Projection snapshotting
  • In conjunction with Jasper, expand Marten’s asynchronous projection support to shard projection work across multiple running nodes, introduce some sort of optimized, no downtime projection rebuilds, and add some options for event streaming with Marten and Kafka or Pulsar
  • Try to build an efficient GraphQL adapter for Marten. And by efficient, I mean that you wouldn’t have to bounce through a Linq translation first and hopefully could opt into Marten’s JSON streaming wherever possible. This isn’t likely, but sounds kind of interesting to play with.

In a perfect, magic, unicorns and rainbows world, I’d love to see the Marten backlog in GitHub get under 50 items and stay there permanently. Commence laughing at me on that one:(

Jasper is a toolkit for common messaging scenarios between .Net applications with a robust in process command runner that can be used either with or without the messaging.

I started working on rebooting Jasper with a forthcoming V2 version late last year, and made quite a bit of progress before Marten got busy and the .Net 6 release necessitated other work. There’s a non-zero chance I will be using Jasper at work, which makes it a much more viable project. I’m currently in flight with:

  • Building Open Telemetry tracing directly into Jasper
  • Bi-directional compatibility with MassTransit applications (absolutely necessary to adopt this in my own shop).
  • Performance optimizations
  • .Net 6 support
  • Documentation overhaul
  • Kafka as a message transport option (Pulsar was surprisingly easy to add, and I’m hopeful that Kafka is similar)

And maybe, just maybe, I might extend Jasper’s somewhat unique middleware approach to web services utilizing the new ASP.Net Core Minimal API support. The idea there is to more or less create an improved version of the old FubuMVC idiom for building web services.

Lamar is a modern IoC container and the successor to StructureMap

I don’t have any real plans for Lamar in the new year, but there are some holes in the documentation, and a couple advanced features could sure use some additional examples. 2021 ended up being busy for Lamar though with:

  1. Lamar v6 added interception (finally), a new documentation website, and a facility for overriding services at test time
  2. Lamar v7 added support for IAsyncEnumerable (also finally), a small enhancement for the Minimal API feature in ASP.Net Core, and .Net 6 support

Add Robust Command Line Options to .Net Applications

Oakton did have a major v4/4.1 release to accommodate .Net 6 and ASP.Net Core Minimal API usage late in 2021, but I have yet to update the documentation. I would like to shift Oakton’s documentation website to VitePress first. The only other plan I have for Oakton this year is to maybe see if there’d be a good way for Oakton to enable "buddy" command line tools for your application, much like the dotnet ef tool, using the HostFactoryResolver class.

The bustling metropolis of Alba, MO

Alba is a wrapper around the ASP.Net Core TestServer for declarative, in process testing of ASP.Net Core web services. I don’t have any plans for Alba in the new year other than to respond to any issues or to smooth out rough edges that turn up through my shop’s usage of Alba.

Alba did get a couple major releases in 2021 though:

  1. Alba 5.0 streamlined the entry API to mimic IHost, converted the documentation website to VitePress, and introduced new facilities for dealing with security in testing.
  2. Alba 6.0 added support for WebApplicationFactory and ASP.Net Core 6

Solutions for creating robust, human readable acceptance tests for your .Net or CoreCLR system and a means to create “living” technical documentation.

Storyteller has been mothballed for years, and I was ready to abandon it last year, but…

We still use Storyteller for some big, long running integration style tests in both Marten and Jasper where I don’t think xUnit/NUnit is a good fit, and I think maybe I’d like to reboot Storyteller later this year. The “new” Storyteller (I’m playing with the idea of calling it “Bobcat” as it might be a different tool) would be quite a bit smaller and much more focused on enabling integration testing rather than trying to be a BDD tool.

Not sure what the approach might be, it could be:

  • “Just” write some extension helpers to xUnit or NUnit for more data intensive tests
  • “Just” write some extension helpers to SpecFlow
  • Rebuild the current Storyteller concept, but also support a Gherkin model
  • Something else altogether?

My goal, if this happens, is to have a tool for automated testing that:

  • Supports much more data intensive tests
  • Better handles integration tests
  • Has strong support for test parallelization and even test run sharding in CI
  • Could help write characterization tests with a record/replay kind of model against existing systems (I’d *love* to have this at work)
  • Has some kind of model that is easy to use within an IDE like Rider or VS, even if there is a separate UI like Storyteller has today

And I’d still like to rewrite a subset of the existing Storyteller UI as an excuse to refresh my front end technology skillset.

To be honest, I don’t feel like Storyteller has ever been much of a success, but it’s the OSS project of mine that I’ve most enjoyed working on and most frequently used myself.


Weasel is a set of libraries for database schema migrations and ADO.Net helpers that we spun out of Marten during its V4 release. I’m not super excited about doing this, but Weasel is getting some sort of database migration support very soon. Weasel isn’t documented itself yet, so that’s the only major plan other than supporting whatever Marten and/or Jasper needs this year.


Baseline is a grab bag of helpers and extension methods that dates back to the early FubuMVC project. I haven’t done much with Baseline in years, and it might be time to prune it a little bit as some of what Baseline does is now supported in the .Net framework itself. The file system helpers especially could be pruned down, but then also get asynchronous versions of what’s left.


I don’t think that I got a single StructureMap question last year and stopped following its Gitter room. There are still plenty of systems using StructureMap out there, but I think the mass migration to either Lamar or another DI container is well underway.

JasperFx OSS Plans for .Net 6 (Marten et al)

I’m going to have to admit that I got caught flat footed by the .Net 6 release a couple weeks ago. I hadn’t really been paying much attention to the forthcoming changes, maybe got cocky about how easy the transition from netcoreapp3.1 to .Net 5 was, and have been unpleasantly surprised by how much work it’s going to take to move some OSS projects up to .Net 6. All at the same time that the eager early adopters of the world are clamoring for all their dependencies to target .Net 6 yesterday.

All that being said, here’s my running list of plans to get the projects in the JasperFx GitHub organization successfully targeting .Net 6. I’ll make edits to this page as things get published to Nuget.


Baseline is a grab bag utility library full of extension methods that I’ve relied on for years. Nobody uses it directly per se, but it’s a dependency of just about every other project in the organization, so it went first with the 3.2.2 release adding a .Net 6 target. No code changes were necessary other than adding .Net 6 to the CI testing. Easy money.


EDIT: Oakton v4.0 is up on Nuget. WebApplication is supported, but you can’t override configuration in commands with this model like you can w/ HostBuilder only. I’ll do a follow up at some point to fill in this gap.

Oakton is a tool to add extensible command line options to .Net applications based on the HostBuilder model. Oakton is my problem child right now because it’s a dependency in several other projects and its current model does not play nicely with the new WebApplicationBuilder approach for configuring .Net 6 applications. I’d also like to get the Oakton documentation website moved to the VitePress + MarkdownSnippets model we’re using now for Marten and some of the other JasperFx projects. I think I’ll take a shortcut here and publish the Nuget and let the documentation catch up later.


Alba is an automated testing helper for ASP.Net Core. Just like Oakton, Alba worked very well with the HostBuilder model, but was thrown for a loop by the new WebApplicationBuilder configuration model that’s the mechanism for using the new Minimal API (*cough* inevitable Sinatra copy *cough*) model. Fortunately though, Hawxy came through with a big pull request to make Alba finally work with the WebApplicationFactory model that can accommodate the new WebApplicationBuilder model, so we’re back in business. Alba 5.1 will be published soon with that work after some documentation updates and hopefully some testing of the Oakton + WebApplicationBuilder + Alba combination.

EDIT: Alba 7.0 is up with the necessary changes, but the docs will come later this week


Lamar is an IoC/DI container and the modern successor to StructureMap. The biggest issues with Lamar on .Net 6 were Nuget dependencies on the IServiceCollection model, plus needing some extra implementation to light up the implied service model of Minimal APIs. All the current unit tests and even the integration tests with ASP.Net Core are passing on .Net 6. What’s left to finish up a new Lamar 7.0 release:

  • One .Net 6 related bug in the diagnostics
  • Better Minimal API support
  • Upgrade Oakton & Baseline dependencies in some of the Lamar projects
  • Documentation updates for the new IAsyncDisposable support and usage with WebApplicationBuilder with or without Minimal API usage

EDIT: Lamar 7.0 is up on Nuget with .Net 6 support


We just made the gigantic Marten V4 release a couple months ago knowing that we’d have to follow up quickly with a V5 release with a few breaking changes to accommodate .Net 6 and the latest version of Npgsql. Since we’re having to make a full point release anyway, that opens the door for other breaking changes that didn’t make it into V4 (don’t worry, I think shifting from V4 to V5 will be easy for most people). The other Marten core team members have been doing most of the work for this so far, but I’m going to jump into the fray later this week to do some last minute changes:

  • Review some internal changes to Npgsql that might have performance impacts on Marten
  • Consider adding an event streaming model within the new V4 async daemon. For folks that wanna use that to publish events to some kind of transport (Kafka? Some kind of queue?) with strict ordering. This won’t be much yet, but it keeps coming up so we might as well consider it.
  • Multi-tenancy through multiple databases. It keeps coming up, and potentially causes breaking API changes, so we’re at least going to explore it

I’m trying not to slow down the Marten V5 release with .Net 6 support for too long, so this is all either happening really fast or not at all. I’ll blog more later this week about multi-tenancy & Marten.

Weasel is a spin off library from Marten for database change detection and ADO.Net helpers that are reused in other projects now. It will be published simultaneously with Marten.


Oh man, I’d love, love, love to have Jasper 2.0 done by early January so that it’ll be available for usage at my company on some upcoming work. This work is on hold while I deal with the other projects, my actual day job, and family and stuff.

Self Diagnosing Deployments with Oakton and Lamar

So here’s the deal, sometimes, somehow you deploy a new version of a system into a testing, staging, or production environment and it doesn’t work. Shocking and sometimes distressing when that happens, right?

There are any number of problems that could be the cause. Maybe a database is in an invalid state, maybe a file got missed in the deployment, maybe some bit of configuration is wrong, maybe a downstream or upstream collaborating system is down or unreachable. Who knows, right? Well, what if we could make our systems self-diagnosing so they can do a quick rundown of how they’re configured and running right at start up time and fail fast if something is detected to be wrong with the deployment? And also have the system tell us exactly what is wrong through first causes without having to later trace runtime failures to the ultimate root cause?

To that end, let’s talk about some mechanisms in both the Oakton and Lamar libraries for quickly adding what I like to call "environment checks": self diagnosing checks that run on system startup to test out system configuration and validity. Oakton has a facility to run environment checks either through a separate diagnostic command in your application, or at system start up time, where it makes a detailed report of any failures and makes the process stop and return an error code. In turn, a continuous deployment script should be able to detect the failure to start the system and roll back to a previously known, good state.

In the past, I’ve worked in teams where we’ve embedded environment checks to:

  • Verify that a configured database is reachable
  • Ping an external web service dependency
  • Check for the existence of required files
  • Verify that an expected COM dependency was registered (shudder, there’s some bad memories in that one)
  • Test that security features are correctly configured and usable — and I think that’s a big one
  • Assert that the system has the ability to read and write configured file system directories — also some serious scar tissue

The point here is to make your deployments fail fast anytime there’s an environmental misconfiguration, and do so in a way that makes it easy for you to spot exactly what’s wrong. Environment checks can help teams avoid system down time and keep testers from blowing up the defect lists with false negatives from misconfigured systems.
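
To make that concrete, here’s a minimal sketch of what the first check in that list might look like, using the same generic CheckEnvironment<T>() extension from Oakton that shows up in the samples later in this document (the "database" connection string name is a hypothetical placeholder):

    services.CheckEnvironment<IConfiguration>(
        "Can connect to the application database",
        async (config, token) =>
        {
            // "database" is a hypothetical connection string name
            using var conn = new SqlConnection(config.GetConnectionString("database"));
            await conn.OpenAsync(token);
        });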

Getting Started with Oakton

Just to get started, I spun up a new .Net 5 web service with dotnet new webapi. That template gives you this code in the Program.Main() method:

        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }

I’m going to add a Nuget reference to Oakton, then change that method up above so that Oakton is handling the command line parsing and system startup like this:

        // It's important to return Task<int> so that
        // Oakton can signal failures with non-zero
        // return codes
        public static Task<int> Main(string[] args)
        {
            return CreateHostBuilder(args).RunOaktonCommands(args);
        }

There’s still nothing going on in our web service, but from the command line we could run dotnet run -- check-env just to start up our application’s IHost and run all the environment checks in a test mode. Nothing there yet, so you’d get output like this:

   ___            _      _
  / _ \    __ _  | | __ | |_    ___    _ __
 | | | |  / _` | | |/ / | __|  / _ \  | '_ \
 | |_| | | (_| | |   <  | |_  | (_) | | | | |
  \___/   \__,_| |_|\_\  \__|  \___/  |_| |_|

No environment checks.
All environment checks are good!

We can add environment checks directly to our application with Oakton’s built in mechanisms like this example that just validates that there’s an appsettings.json file in the application path:

        public void ConfigureServices(IServiceCollection services)
        {
            // Literally just proving out that appsettings.json is available
            services.CheckThatFileExists("appsettings.json");

            // Other registrations
        }

And now that same dotnet run -- check-env command gives us this output:

   ___            _      _
  / _ \    __ _  | | __ | |_    ___    _ __
 | | | |  / _` | | |/ / | __|  / _ \  | '_ \
 | |_| | | (_| | |   <  | |_  | (_) | | | | |
  \___/   \__,_| |_|\_\  \__|  \___/  |_| |_|

   1.) Success: File 'appsettings.json' exists

Running Environment Checks ---------------------------------------- 100%

All environment checks are good!

One of the other things that Oakton does is intercept the basic dotnet run command with extra options, so we can use the --check flag like so:

dotnet run -- --check

That command is going to:

  1. Bootstrap and start the IHost for your application
  2. Load and execute all the registered environment checks for your application
  3. Report the status of each environment check to the console
  4. Fail the executable if any environment checks fail

In my tiny sample app I’m building here, the output of that call is this:

C:\code\JasperSamples\EnvironmentChecks\WebApplication\WebApplication>dotnet run -- --check
   1.) Success: File 'appsettings.json' exists

Running Environment Checks ---------------------------------------- 100%

info: Microsoft.Hosting.Lifetime[0]
      Now listening on: https://localhost:5001
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://localhost:5000
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
      Content root path: C:\code\JasperSamples\EnvironmentChecks\WebApplication\WebApplication

At deployment time, that call is probably just [your application name] --check to start the application.

Environment Checks with Lamar

Now that we’ve got the basics for environment checks built into our new web service with Oakton, I want to introduce Lamar as the DI/IoC tool for the application. I’ll add a Nuget reference to Lamar.Microsoft.DependencyInjection to my project. First, to make Lamar the IoC container for my new service, I’ll add the UseLamar() line to the Program.CreateHostBuilder() method:

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                // Make Lamar the application container
                .UseLamar()
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });

Now, I’m going to go a little farther and add a reference to the Lamar.Diagnostics Nuget as well. That adds some Lamar specific diagnostic commands to our command line options, but also allows us to add Lamar container checks at startup like this:

        public void ConfigureServices(IServiceCollection services)
        {
            // Literally just proving out that appsettings.json is available
            services.CheckThatFileExists("appsettings.json");

            // Do a full check of the Lamar configuration
            // and run Lamar environment checks too!
            services.CheckLamarConfiguration();
        }

Running our dotnet run -- check-env command again gives us:

   ___            _      _
  / _ \    __ _  | | __ | |_    ___    _ __
 | | | |  / _` | | |/ / | __|  / _ \  | '_ \
 | |_| | | (_| | |   <  | |_  | (_) | | | | |
  \___/   \__,_| |_|\_\  \__|  \___/  |_| |_|

   1.) Success: File 'appsettings.json' exists
   2.) Success: Lamar IoC Service Registrations
   3.) Success: Lamar IoC Type Scanning

Running Environment Checks ---------------------------------------- 100%

All environment checks are good!

The Lamar container checks are a little heavyweight, so watch out for that because you might want to dial them back. Those checks run through every single service registration in Lamar, verify that all the dependencies exist, and try to build every registration at least once.

Next though, let’s look at how Lamar lets us plug in its own form of environment checks. On any concrete class that is built by Lamar, you can directly embed environment checks with a validation method, like this fake service:

    public class SometimesMisconfiguredService
    {
        private readonly IConfiguration _configuration;

        public SometimesMisconfiguredService(IConfiguration configuration)
        {
            _configuration = configuration;
        }

        [ValidationMethod]
        public void Validate() // The method name does not matter
        {
            var connectionString = _configuration.GetConnectionString("database");

            // Just try to connect to the configured database
            using var conn = new SqlConnection(connectionString);
            conn.Open();
        }
    }

The method name above doesn’t matter. All that Lamar needs to see is the [ValidationMethod] attribute. See Lamar’s Environment Tests for a little more information about using this feature. And I don’t really need to “do” anything other than throw an exception if the check logically fails.

And I’ll register that service in the container in the Startup.ConfigureServices() method like so:
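
        // A minimal sketch of the registration; any standard
        // service registration that the container can "see" will do
        services.AddSingleton<SometimesMisconfiguredService>();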


Now, going back to the application, I’ll try to start it up with the environment check flag active — but I haven’t configured a database connection string or even attempted to spin up a database at all, so this should fail fast. And it does with a call to dotnet run -- --check:

   1.) Success: File 'appsettings.json' exists
   2.) Failed: Lamar IoC Service Registrations

ERROR: Lamar.IoC.ContainerValidationException: Error in WebApplication.SometimesMisconfiguredService.Validate()


To get at just the Lamar container validation, you can also use dotnet run -- lamar-validate. IoC container tools are a dime a dozen in .Net, and many of them are perfectly competent. When I’m asked “why use Lamar?”, my stock answer is to use Lamar for its diagnostic capabilities.

More Information

A sample environment check for OIDC authenticated web services

EDIT 8/25: Make sure you’re using Oakton 3.2.2 if you try to reproduce this sample, because of course I found and fixed an Oakton bug in this functionality in the process of writing the post :(

I think that what I’m calling an “environment check” here is a very valuable and underutilized technique in software development today. I had a good reason to stop and create one today to troubleshoot an issue I stubbed my toe on, so here’s a blog post about environment checks.

One of the things on my plate at work right now is prototyping out the usage of Identity Server 5 as our new identity provider across our enterprise. Today I was working with a spike that needed to run a pair of ASP.Net Core applications:

  1. A web application that used the Duende.BFF library and JWT bearer tokens as its server side authentication mechanism.
  2. The actual identity server application

All I was trying to do was to run both applications and try to log into secure portions of the web application. No luck though: I was getting weird-looking exceptions about missing configuration. I eventually debugged through the ASP.Net Core OIDC authentication innards and found out that my issue was that the web application was configured with the wrong authority Url for the identity server service.

That turned out to be a very simple problem to fix, but the exception message was completely unrelated to the actual error, and that’s the kind of problem that’s unfortunately likely to come up again as this turns into a real solution that’s separately deployed in several different environments later (local development, CI, testing, staging, production, you know the drill).

As a preventative measure — or at least a diagnostic tool — I stopped and added an environment check into the application just to assert that yes, it can successfully reach the configured OIDC authority at the configured Url we expect it to be. Mechanically, I added Oakton as the command line runner for the application. After that, I utilized Oakton’s facility for adding and executing environment checks to a .Net Core or .Net 5+ application.

Inside the Startup.ConfigureServices() method for the main web application, I have this code to configure the server-side authentication (sorry, I know this is a lot of code):


            // This is specific to this application. This is just 
            // a cheap way of doing strong typed configuration
            // without using the ugly IOptions<T> stuff
            var settings = Configuration.Get<OidcSettings>();

            // configure server-side authentication and session management
            services.AddAuthentication(options =>
            {
                options.DefaultScheme = "cookie";
                options.DefaultChallengeScheme = "oidc";
                options.DefaultSignOutScheme = "oidc";
            })
            .AddCookie("cookie", options =>
            {
                // host prefixed cookie name
                options.Cookie.Name = settings.CookieName;

                // strict SameSite handling
                options.Cookie.SameSite = SameSiteMode.Strict;
            })
            .AddOpenIdConnect("oidc", options =>
            {
                // This is the root Url for the OIDC authority
                options.Authority = settings.Authority;

                // confidential client using code flow + PKCE + query response mode
                options.ClientId = settings.ClientId;
                options.ClientSecret = settings.ClientSecret;
                options.ResponseType = "code";
                options.ResponseMode = "query";
                options.UsePkce = true;

                options.MapInboundClaims = true;
                options.GetClaimsFromUserInfoEndpoint = true;

                // save access and refresh token to enable automatic lifetime management
                options.SaveTokens = true;

                // request scopes (the exact scope names are application
                // specific; "openid" and "profile" are the standard defaults)
                options.Scope.Add("openid");
                options.Scope.Add("profile");

                // request refresh token via the standard "offline_access" scope
                options.Scope.Add("offline_access");
            });

The main thing I want you to note in the code above is that the Url for the OIDC identity service is bound from the raw configuration to the OidcSettings.Authority property. Now that we know where the authority Url is, the environment check itself is just this code, again within the Startup.ConfigureServices() method:

            // This extension method registers an environment check with Oakton
            services.CheckEnvironment<HttpClient>($"Is the expected OIDC authority at {settings.Authority} running", async (client, token) =>
            {
                var document = await client.GetDiscoveryDocumentAsync(settings.Authority, cancellationToken: token);
                if (document == null)
                {
                    throw new Exception("Unable to load the OIDC discovery document. Is the OIDC authority server running?");
                }
                else if (document.IsError)
                {
                    throw document.Exception;
                }
            });

Now, if the authority Url is misconfigured, or the OIDC identity service is not running, I can quickly diagnose that issue by running this command from the main web application directory:

dotnet run -- check-env

If I were to run this command with the identity service down, I’ll get this lovely output:

Stop, dogs, stop! The light is red!

Hopefully, I can tell from the description and the exception message that the identity service that is supposed to be at https://localhost:5010 is not responsive. In this case it was because I had purposely turned it off to show you the failure, so let’s turn the identity service on and try again with dotnet run -- check-env:

Go, dogs, go! It’s green ahead!

As part of turning this work over to other developers to grow into real, production-worthy code, we’ll document the dotnet run -- check-env tooling for detecting errors. More effectively though, with Oakton the command line call will return a non-zero error code if the environment checks fail, so it’s useful to run this command in automated deployments as a “fail fast” check on the environment that can point to specifically where the application is misconfigured or external dependencies are unreachable.

As another alternative, Oakton can run these environment checks for you at application startup time by using the -c or --check flag when the application executable runs.

Dynamic Code Generation in Marten V4

Marten V4 extensively uses runtime code generation backed by Roslyn runtime compilation for dynamic code. This is much more powerful than source generators in terms of what it allows us to actually do, but it can have significant memory usage and “cold start” problems (this seems to depend on the exact configuration, so it’s not a given that you’ll hit these issues). In this post I’ll show the facility we have to “generate ahead” the code to dodge the memory and cold start issues at production time.

Before V4, Marten had used the common model of building up Expression trees and compiling them into lambda functions at runtime to “bake” some of our dynamic behavior into fast running code. That does work and a good percentage of the development tools you use every day probably use that technique internally, but I felt that we’d already outgrown the dynamic Expression generation approach as it was, and the new functionality requested in V4 was going to substantially raise the complexity of what we were going to need to do.
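
If you haven’t seen that technique before, here’s a minimal, self-contained sketch (not Marten’s actual code) of the idea: build an Expression tree at runtime from reflection data, then compile it into a reusable delegate:

    using System;
    using System.Linq.Expressions;
    using System.Reflection;

    public static class SetterBuilder
    {
        // Builds a strongly typed property setter from a reflected PropertyInfo
        public static Action<T, TValue> BuildSetter<T, TValue>(PropertyInfo property)
        {
            var target = Expression.Parameter(typeof(T), "target");
            var value = Expression.Parameter(typeof(TValue), "value");
            var assign = Expression.Assign(Expression.Property(target, property), value);

            // Compile() "bakes" the expression tree into a fast delegate at runtime
            return Expression.Lambda<Action<T, TValue>>(assign, target, value).Compile();
        }
    }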

Instead, I (Marten is a team effort, but I get all the blame for this one) opted to use the dynamic code generation and compilation approach using LamarCodeGeneration and LamarCompiler that I’d originally built for other projects. This allowed Marten to generate much more complex code than I thought was practical with other models (we could have used IL generation too of course, but that’s an exercise in masochism). If you’re interested, I gave a talk about these tools and the approach at NDC London 2019.

I do think this has worked out in terms of performance improvements at runtime and certainly helped to be able to introduce the new document and event store metadata features in V4, but there’s a catch. Actually two:

  1. The Roslyn compiler sucks down a lot of memory sometimes and doesn’t seem to ever release it. It’s gotten better with newer releases and it’s not consistent, but still.
  2. There’s a sometimes significant lag in cold start scenarios on the first time Marten needs to generate and compile code at runtime

What we could do though, is provide what I call...

Marten’s “Generate Ahead” Strategy

To side step the problems with the Roslyn compilation, I developed a model (I did this originally in Jasper) to generate the code ahead of time and have it compiled into the entry assembly for the system. The last step is to direct Marten to use the pre-compiled types instead of generating the types at runtime.

Jumping straight into a sample console project to show off this functionality, I’m configuring Marten with the AddMarten() method you can see in this code on GitHub.

The important line of code you need to focus on here is this flag:

opts.GeneratedCodeMode = TypeLoadMode.LoadFromPreBuiltAssembly;

This flag in the Marten configuration directs Marten to first look in the entry assembly of the application for any types that it would normally try to generate at runtime, and if the type exists, load it from the entry assembly and bypass any invocation of Roslyn. I think in a real application you’d wrap that call in something like this so that it only applies when the application is running in production mode:

if (Environment.IsProduction())
{
    options.GeneratedCodeMode = 
        TypeLoadMode.LoadFromPreBuiltAssembly;
}

The next thing to notice is that I have to tell Marten ahead of time what the possible document types are, and even about any compiled query types, in this code so that Marten will “know” what code to generate in the next section. The compiled query registration is new, but you already had to let Marten know about the document types to make the schema migration functionality work anyway.
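
For a rough idea of what that kind of registration looks like, here’s a minimal sketch using Marten’s RegisterDocumentType<T>() mechanism (the document types themselves are hypothetical; the linked sample shows the real registrations):

    services.AddMarten(opts =>
    {
        // Tell Marten up front about document types it would
        // otherwise discover lazily at runtime
        opts.RegisterDocumentType<User>();
        opts.RegisterDocumentType<Invoice>();
    });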

Generating and exporting the code ahead of time is done from the command line through an Oakton command. First though, add the LamarCodeGeneration.Commands Nuget to your entry project, which will also add a transitive reference to Oakton if you’re not already using it. This is all described in the Oakton getting started page, but you’ll need to change your Program.Main() method slightly to activate Oakton:

        // The return value needs to be Task<int> 
        // to communicate command success or failure
        public static Task<int> Main(string[] args)
        {
            return CreateHostBuilder(args)
                // This makes Oakton be your CLI
                // runner
                .RunOaktonCommands(args);
        }

If you’ll open up the command terminal of your preference at the root directory of the entry project, type this command to see the available Oakton commands:

dotnet run -- help

That’ll spit out a list of commands and the assemblies where Oakton looked for command types. You should see output similar to this:

Searching 'LamarCodeGeneration.Commands, Version=, Culture=neutral, PublicKeyToken=null' for commands
Searching 'Marten.CommandLine, Version=, Culture=neutral, PublicKeyToken=null' for commands

    Available commands:
        check-env -> Execute all environment checks against the application
          codegen -> Utilities for working with LamarCodeGeneration and LamarCompiler

And assuming you got that far, now type dotnet run -- help codegen to see the list of options for the codegen command.

If you just want to preview the generated code in the console, type:

dotnet run -- codegen preview

To just verify that the dynamic code can be successfully generated and compiled, use:

dotnet run -- codegen test

To actually export the generated code, use:

dotnet run -- codegen write

That command will write a single C# file at /Internal/Generated/DocumentStorage.cs for any document storage types and another at /Internal/Generated/Events.cs for the event storage and projections.

Just as a short cut to clear out any old code, you can use:

dotnet run -- codegen delete

If you’re curious, the generated code — and remember that it’s generated code so it’s going to be butt ugly — is going to look like this for the document storage, and this code for the event storage and projections.

The way that I see this being used is something like this:

  • The LoadFromPreBuiltAssembly option is only turned on in Production mode, leaving it disabled at development time so that developers can iterate at will.
  • As part of the CI/CD process for a project, you’ll run the dotnet run -- codegen write command as an initial step, then proceed to the normal compilation and testing cycles. That bakes the generated code right into the compiled assemblies and enables you to also take advantage of any kind of AOT compiler optimizations

Duh. Why not source generators doofus?

Why didn’t we use source generators, you may ask? The Roslyn-based approach in Marten is both much better and much worse than source generators. Source generators are part of the compilation process itself and don’t have any kind of cold start problem like Marten has with the runtime Roslyn compilation approach. That part is obviously much better, plus there’s no weird two-step compilation process at CI/CD time. On the downside though, source generators can only use information that’s available at compilation time, while the Marten code generation relies very heavily on type reflection, conventions applied at runtime, and configuration built up through a fluent interface API. I do not believe that we could use source generators for what we’ve done in Marten because of that dependency on runtime information.

Oakton v3 super charges the .Net Core/5 command line, and helps Lamar deliver uniquely useful IoC diagnostics

With the Great Texas Ice and Snow Storm of 2021 last week playing havoc with basically everything, I switched to easier work on Oakton and Lamar in the windows of time when we had power. The result of that is the planned v3 release that merged the former Oakton.AspNetCore library into Oakton itself, and took a dependency on Spectre.Console to make all the command output a whole lot more usable. As a logical follow up to the Oakton improvements, I mostly rewrote the Lamar.Diagnostics package that uses Oakton to add Lamar-related troubleshooting commands to your application. I used Spectre.Console to improve how the Lamar configuration was rendered to enhance its usability. I wanted the improved Lamar diagnostics for an ongoing Calavista client project, and that’ll be in heavy usage this very week.

Late last week I released version 3.0.1 of Oakton, a library for building rich command line functionality into .Net Core / .Net 5.0 applications. While competent command line parsing tools are a dime a dozen in the .Net space by this point, Oakton stands out by integrating directly into the generic HostBuilder and extending the command line operations of your currently configured .Net system (Oakton can still be used without HostBuilder as well, see Using the CommandExecutor for more information).

Jumping right into the integration effort, let’s assume that you have a .Net Core 3.1 or .Net 5 application generated by one of the built in project templates. If so, your application’s entry point probably looks like this Program.Main() method:

public static void Main(string[] args)
{
    CreateHostBuilder(args).Build().Run();
}

To integrate Oakton commands, add a reference to the Oakton Nuget, then change Program.Main() to this:

public static Task<int> Main(string[] args)
{
    return CreateHostBuilder(args)
        .RunOaktonCommands(args);
}

It’s important to change the return type to Task<int> because that’s used to communicate exit codes to the operating system to denote success or failure for commands that do some kind of system verification.

When you use dotnet run, you can pass arguments or flags to the dotnet run command itself, and also arguments and flags to your actual application. The way that works is that arguments are delimited by a double dash, so that in this usage: dotnet run --framework net5.0 -- check-env, the --framework flag is passed to dotnet run itself to modify its behavior, and the check-env argument is passed to your application.

Out of the box, Oakton adds a couple named commands to your application along with a default “run my application” command. You can access the built in Oakton command line help with dotnet run -- help. The output of that is shown below:

     Available commands:
     check-env -> Execute all environment checks against the application
      describe -> Writes out a description of your running application to either the console or a file
          help -> list all the available commands
           run -> Start and run this .Net application

Slightly Enhanced “Run”

Once Oakton is in place, your basic dotnet run command has a couple new flag options.

  1. dotnet run still starts up your application just like it did before you applied Oakton
  2. dotnet run -- -e Staging using the -e flag starts the application with the host environment name overridden to the value of the flag. In this example, I started the application running as “Staging.”
  3. dotnet run -- --check or dotnet run -- -c will run your application, but first perform any registered environment checks before starting the full application. More on this below.

Environment Checks

Oakton has a simple mechanism to embed environment checks directly into your application. The point of environment checks is to make your .Net application able to check out its configuration during deployment in order to “fail fast” if external dependencies like databases are unavailable, required files are missing, or necessary configuration items are invalid.

To use environment checks in your application with Oakton, just run this command:

dotnet run -- check-env

The command will fail if any of the environment checks fail, and you’ll get a report of every failure.
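
As a quick sketch, a custom check for a required configuration value might look like this, reusing the generic CheckEnvironment<T>() extension from the OIDC sample earlier in this document (the "ApiKey" setting is hypothetical):

    services.CheckEnvironment<IConfiguration>(
        "Required configuration values are present",
        (config, token) =>
        {
            if (string.IsNullOrEmpty(config["ApiKey"]))
            {
                throw new Exception("The 'ApiKey' configuration value is missing");
            }

            return Task.CompletedTask;
        });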

I wrote about this a couple years ago, and Oakton v3 just makes the environment check output a little prettier. If you’re using Lamar as your application’s IoC container, the Lamar.Diagnostics package adds some easier recipes for exposing environment checks in your code in combination with Oakton’s check-env command. I’ll show that later in this post.

The environment check functionality can be absolutely invaluable when your application is being deployed to an environment that isn’t perfectly reliable or has dependencies outside the control of your team. Ask me how I know this to be the case…

Describing the Application

A couple of the OSS projects I support are very sensitive to application configuration, and it’s sometimes challenging to get the right information from users who are having problems with my OSS libraries. As a possible way to ameliorate that issue, there’s Oakton’s extensible describe command:

dotnet run -- describe

Out of the box it just lists basic information about the application and referenced assemblies, but it comes with a simple extensibility mechanism I’m hoping to exploit for embedding Jasper, Marten, and Lamar diagnostics in applications.
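
As a rough sketch of the shape of that extensibility, and assuming Oakton’s IDescribedSystemPart abstraction, a custom part might look something like this (the part itself is hypothetical, so treat the exact signature as an assumption):

    public class RouteTablePart : IDescribedSystemPart
    {
        public string Title => "Configured Routes";

        public Task Write(TextWriter writer)
        {
            // Write whatever diagnostic information makes sense for your system
            return writer.WriteLineAsync("GET /api/status");
        }
    }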

Here’s the sample output from a simplistic application built with the worker template:

── About WorkerService ─────────────────────────────────────────
          Entry Assembly: WorkerService
        Application Name: WorkerService
             Environment: Development
       Content Root Path: the content root folder
AppContext.BaseDirectory: the application's base directory

── Referenced Assemblies ───────────────────────────────────────
│ Assembly Name                                         │ Version │
│ System.Runtime                                        │ │
│ Microsoft.Extensions.Configuration.UserSecrets        │ │
│ Microsoft.Extensions.Hosting.Abstractions             │ │
│ Microsoft.Extensions.DependencyInjection.Abstractions │ │
│ Microsoft.Extensions.Logging.Abstractions             │ │
│ Oakton                                                │ │
│ Microsoft.Extensions.Hosting                          │ │

Lamar Diagnostics

As an add-on to Oakton, I updated the Lamar.Diagnostics package that uses Oakton to add Lamar diagnostics to a .Net application. If this package is added to a .Net project that uses Oakton for command line parsing and Lamar as its IoC container, you’ll see new commands light up from dotnet run -- help:

    lamar-scanning -> Runs Lamar's type scanning diagnostics
    lamar-services -> List all the registered Lamar services
    lamar-validate -> Runs all the Lamar container validations

The lamar-scanning command just gives you access to Lamar’s type scanning diagnostics report (something my team & I needed post haste for an ongoing Calavista client project).

The lamar-validate command runs the AssertConfigurationIsValid() method on your application’s configured container to validate that every service registration is valid, can be built, and also tries out Lamar’s own built in environment check system. You can read more about what this method does in the Lamar documentation.
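
If you want to trigger that same validation yourself, maybe inside an integration test, it’s just a method call on the Lamar container. A minimal sketch, with hypothetical registrations:

    using Lamar;

    var container = new Container(x =>
    {
        // Hypothetical registrations for illustration
        x.AddSingleton<IClock, SystemClock>();
    });

    // Throws with a full report if any registration is invalid,
    // cannot be built, or fails a [ValidationMethod] check
    container.AssertConfigurationIsValid();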

You can also add the Lamar container validation step into the Oakton environment check system with this service registration:

// This adds a Container validation
// to the Oakton "check-env" command.
// "services" would be an IServiceCollection object
// either in StartUp.ConfigureServices() or in a Lamar
// ServiceRegistry
services.CheckLamarConfiguration();

As I said earlier, Lamar still supports a very old StructureMap feature to embed environment checks right into your application’s services with the [ValidationMethod] attribute. Let’s say that you have a class in your system called DatabaseUsingService that requires a valid connection string from configuration, and DatabaseUsingService is registered in your Lamar container. You could add a method on that class just to check if the database is reachable like the Validate() method below:

    public class DatabaseUsingService
    {
        private readonly DatabaseSettings _settings;

        public DatabaseUsingService(DatabaseSettings settings)
        {
            _settings = settings;
        }

        [ValidationMethod]
        public void Validate()
        {
            // For *now*, Lamar requires validate methods be synchronous
            using (var conn = new SqlConnection(_settings.ConnectionString))
            {
                // If this blows up, the environment check fails :)
                conn.Open();
            }
        }
    }

When Lamar validates the container, it will see the methods marked with [ValidationMethod], pull the service from the configured container, and try to call this method and record any exceptions.

Lamar Services Preview

The big addition in the latest Lamar.Diagnostics package is a much improved report of the registered services compared to the old, built in WhatDoIHave() textual report.

Starting from a pretty basic ASP.Net Core application, using the dotnet run -- lamar-services command gives us this output:

If you want to dig into the dependency tree and how Lamar is going to build a certain registration, you can use the dotnet run -- lamar-services --build-plans flag, which is going to look like this:

Hat tip to Spectre.Console for making the layout a lot prettier and easier to read.

Where does Oakton go next?

I’m somewhat hopeful that other folks will pick up Oakton’s command line model and add more utilities, application describer parts, and environment checks in either Oakton itself or in extension libraries. For my part, Marten v4’s command line support will use Oakton to add schema management and event store utilities directly to your application. I think it’d also be helpful to have an ASP.Net specific set of describer parts to at least preview routes.