Wolverine’s Test Support Diagnostics

I’m working today on a reported bug in Wolverine’s event forwarding from Marten feature. I can’t yet say why this should-be-very-straightforward-and-looks-exactly-like-the-currently-passing-tests bug is happening, but it’s a good opportunity to demonstrate Wolverine’s automated testing support and how it can help you understand test failures.

First off, and I’ll admit that there’s some missing context here, I’m setting up a system such that when this message handler is executed:

public record CreateShoppingList();

public static class CreateShoppingListHandler
{
    public static string Handle(CreateShoppingList _, IDocumentSession session)
    {
        var shoppingListId = CombGuidIdGeneration.NewGuid().ToString();
        session.Events.StartStream<ShoppingList>(shoppingListId, new ShoppingListCreated(shoppingListId));
        return shoppingListId;
    }
}

The configured Wolverine + Marten integration should kick in and publish the event appended in the handler above to the completely different handler shown below, wrapped in the Marten IEvent type so that you can use Marten event store metadata within the secondary, cascaded message:

public static class IntegrationHandler
{
    public static void Handle(IEvent<ShoppingListCreated> _)
    {
        // Don't need a body here, and I'll show why not
        // next
    }
}
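
As a quick aside, the payoff of receiving the IEvent<ShoppingListCreated> wrapper instead of the raw event is the Marten metadata hanging off of it. Just as a hedged sketch, a hypothetical variation of that handler could use the metadata like this (Sequence, Timestamp, and StreamKey are standard Marten IEvent properties; StreamKey applies here because the stream identity is configured as a string):

public static class IntegrationHandlerWithMetadata
{
    public static void Handle(IEvent<ShoppingListCreated> e)
    {
        // Marten event store metadata travels along with the event data
        Console.WriteLine(
            $"Shopping list {e.StreamKey} created at {e.Timestamp} (event #{e.Sequence})");
    }
}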

Knowing those two things, here’s the test I wrote to reproduce the problem:

    [Fact]
    public async Task publish_ievent_of_t()
    {
        // The "Arrange"
        using var host = await Host.CreateDefaultBuilder()
            .UseWolverine(opts =>
            {
                opts.Policies.AutoApplyTransactions();

                opts.Services.AddMarten(m =>
                {
                    m.Connection(Servers.PostgresConnectionString);
                    m.DatabaseSchemaName = "forwarding";

                    m.Events.StreamIdentity = StreamIdentity.AsString;
                    m.Projections.LiveStreamAggregation<ShoppingList>();
                }).UseLightweightSessions()
                .IntegrateWithWolverine()
                .EventForwardingToWolverine();
            }).StartAsync();
        
        // The "Act". This method is an extension method in Wolverine
        // specifically for facilitating integration testing that should
        // invoke the given message with Wolverine, then wait until all
        // additional "work" is complete
        var session = await host.InvokeMessageAndWaitAsync(new CreateShoppingList());

        // And finally, just assert that a single message of
        // type IEvent<ShoppingListCreated> was executed
        session.Executed.SingleMessage<IEvent<ShoppingListCreated>>()
            .ShouldNotBeNull();
    }

And now, when I run the test — which “helpfully” reproduces the reported bug from earlier today — I get this output:

System.Exception: No messages of type Marten.Events.IEvent<MartenTests.Bugs.ShoppingListCreated> were received

Activity detected:

----------------------------------------------------------------------------------------------------------------------
| Message Id                             | Message Type                          | Time (ms)   | Event               |
----------------------------------------------------------------------------------------------------------------------
| 018f82a9-166d-4c71-919e-3bcb04a93067   | MartenTests.Bugs.CreateShoppingList   |          873| ExecutionStarted    |
| 018f82a9-1726-47a6-b657-2a59d0a097cc   | System.String                         |         1057| NoRoutes            |
| 018f82a9-17b1-4078-9997-f6117fd25e5c   | Event<ShoppingListCreated>            |         1242| Sent                |
| 018f82a9-166d-4c71-919e-3bcb04a93067   | MartenTests.Bugs.CreateShoppingList   |         1243| ExecutionFinished   |
| 018f82a9-17b1-4078-9997-f6117fd25e5c   | Event<ShoppingListCreated>            |         1243| Received            |
| 018f82a9-17b1-4078-9997-f6117fd25e5c   | Event<ShoppingListCreated>            |         1244| NoHandlers          |
----------------------------------------------------------------------------------------------------------------------

EDIT: If I’d read this output more closely the first time, I would have noticed that the problem was somewhere other than the message routing I initially suspected from my too casual read.

The textual table above is Wolverine telling me what it actually did during the failed test. In this case, the output tips me off that there’s some kind of issue with the internal message routing in Wolverine: the special routing rules for IEvent<T> wrappers should have been applied, but were not. While the work of fixing the real bug continues for me, what I hope you get out of this is how Wolverine tries to help you diagnose test failures by providing diagnostic information about what was actually happening internally during all the asynchronous processing. As a long veteran of test automation efforts, I will vociferously say that it’s important for test automation harnesses to be able to adequately explain the inevitable test failures. Like Wolverine helpfully does.

Now, back to work trying to actually fix the problem…

Scheduled Message Delivery with Wolverine

Wolverine has the ability to schedule the delivery of messages for a later time. While Wolverine certainly isn’t trying to be Hangfire or Quartz.Net, the message scheduling in Wolverine today is valuable for “timeout” messages in sagas, or “retry this evening” type scenarios, or reminders of all sorts.

If using the Azure Service Bus transport, scheduled messages sent to Azure Service Bus queues or topics will use native Azure Service Bus scheduled delivery. For everything else today, Wolverine is doing the scheduled delivery for you. To make those scheduled messages be durable (i.e. not completely lost when the application is shut down), you’re going to want to add message persistence to your Wolverine application as shown in the sample below using SQL Server:

// This is good enough for what we're trying to do
// at the moment
builder.Host.UseWolverine(opts =>
{
    // Just normal .NET stuff to get the connection string to our Sql Server database
    // for this service
    var connectionString = builder.Configuration.GetConnectionString("SqlServer");
    
    // Telling Wolverine to build out message storage with Sql Server at 
    // this database and using the "wolverine" schema to somewhat segregate the 
    // wolverine tables away from the rest of the real application
    opts.PersistMessagesWithSqlServer(connectionString, "wolverine");
    
    // In one fell swoop, let's tell Wolverine to make *all* local
    // queues be durable and backed up by Sql Server 
    opts.Policies.UseDurableLocalQueues();
});

Finally, with all that said, here’s one of the ways to schedule message deliveries:

    public static async Task use_message_bus(IMessageBus bus)
    {
        // Send a message to be sent or executed at a specific time
        await bus.ScheduleAsync(new DebitAccount(1111, 100), DateTimeOffset.UtcNow.AddDays(1));

        // Or do the same, but this time express the time as a delay
        await bus.ScheduleAsync(new DebitAccount(1111, 225), 1.Days());
        
        // ScheduleAsync is really just syntactic sugar for this:
        await bus.PublishAsync(new DebitAccount(1111, 225), new DeliveryOptions { ScheduleDelay = 1.Days() });
    }

Or, if you want to utilize Wolverine’s cascading message functionality to keep most if not all of your handler method signatures “pure”, you can use this syntax within message handlers or HTTP endpoints:

    public static IEnumerable<object> Consume(Incoming incoming)
    {
        // Delay the message delivery by 10 minutes
        yield return new Message1().DelayedFor(10.Minutes());

        // Schedule the message delivery for a certain time
        yield return new Message2().ScheduledAt(new DateTimeOffset(DateTime.Today.AddDays(2)));
    }

Finally, there’s one last alternative that was primarily meant for saga usage: subclassing TimeoutMessage like so:

public record EnforceAccountOverdrawnDeadline(Guid AccountId) : TimeoutMessage(10.Days()), IAccountCommand;

By subclassing TimeoutMessage, the message type above is “scheduled” for a later time when it’s returned as a cascading message.
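
To make that concrete, here’s a hedged sketch of a saga that uses the timeout message. The AccountOverdrawn message and the saga itself are hypothetical, but the pattern of returning the TimeoutMessage subclass as a cascading message is exactly what triggers the scheduled delivery:

public record AccountOverdrawn(Guid AccountId);

public class AccountSaga : Saga
{
    public Guid Id { get; set; }

    // Start the saga and schedule the deadline enforcement in one
    // step. The returned EnforceAccountOverdrawnDeadline message is
    // scheduled 10 days out per its TimeoutMessage constructor call
    public static (AccountSaga, EnforceAccountOverdrawnDeadline) Start(AccountOverdrawn overdrawn)
    {
        return (
            new AccountSaga { Id = overdrawn.AccountId },
            new EnforceAccountOverdrawnDeadline(overdrawn.AccountId));
    }

    public void Handle(EnforceAccountOverdrawnDeadline deadline)
    {
        // If the account is still overdrawn, take action here, then
        // mark the saga as complete so Wolverine deletes it
        MarkCompleted();
    }
}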

Wolverine’s HTTP Model Does More For You

One of the things I’m wrestling with right now is frankly how to sell Wolverine as a server side toolset. Yes, it’s technically a message library like MassTransit or NServiceBus. It can also be used as “just” a mediator tool like MediatR. With Wolverine.HTTP, it’s even an alternative HTTP endpoint framework that technically competes with FastEndpoints, MVC Core, or Minimal API. You’ve got to categorize Wolverine somehow, and we humans naturally understand something new by comparing it to some older thing we’re already familiar with. In the case of Wolverine, though, comparing it to any of the older application frameworks I rattled off above drastically sells the toolset short, because Wolverine fundamentally does much more to remove code ceremony, improve testability throughout your codebase, and generally just let you focus more on core application functionality.

This post was triggered by a conversation I had with a friend last week who told me he was happy with his current toolset for HTTP API creation and couldn’t imagine how Wolverine’s HTTP endpoint model could possibly reduce his efforts. Challenge accepted!

For just this moment, consider a simplistic HTTP service that works on this little entity:

public record Counter(Guid Id, int Count);

Now, let’s build an HTTP endpoint that will:

  1. Receive route arguments for the Counter.Id and the current tenant id (because of course we’re using multi-tenancy with a separate database per tenant)
  2. Try to look up the existing Counter entity by its id from the right tenant database
  3. If the entity doesn’t exist, return a status code 404 and get out of there
  4. If the entity does exist, increment the Count property, save the entity to the database, and return a status code 204 for a successful request with an empty body

Just to make it easier on me because I already had this example code, we’re going to use Marten for persistence which happens to have much stronger multi-tenancy built in than EF Core. Knowing all that, here’s a sample MVC Core controller to implement the functionality I described above:

public class CounterController : ControllerBase
{
    [HttpPost("/api/tenants/{tenant}/counters/{id}")]
    [ProducesResponseType(204)] // empty response
    [ProducesResponseType(404)]
    public async Task<IResult> Increment(
        Guid id, 
        string tenant, 
        [FromServices] IDocumentStore store)
    {
        // Open a Marten session for the right tenant database
        await using var session = store.LightweightSession(tenant);
        var counter = await session.LoadAsync<Counter>(id, HttpContext.RequestAborted);
        if (counter == null)
        {
            return Results.NotFound();
        }
        else
        {
            counter = counter with { Count = counter.Count + 1 };

            // Lightweight sessions do no change tracking, so the
            // updated document has to be explicitly stored
            session.Store(counter);
            await session.SaveChangesAsync(HttpContext.RequestAborted);
            return Results.Empty;
        }
    }
}

I’m completely open to recreating the multi-tenancy support from the Marten + Wolverine combo for EF Core and SQL Server through Wolverine, but I’m shamelessly waiting until another company is willing to engage with JasperFx Software to deliver that.

Alright, now let’s switch over to using Wolverine.HTTP with its WolverineFx.Http.Marten add-on NuGet package. Let’s drink some Wolverine koolaid and write a functionally identical endpoint the Wolverine way:

You need Wolverine 2.7.0 for this by the way!

    [WolverinePost("/api/tenants/{tenant}/counters/{id}")]
    public static IMartenOp Increment([Document(Required = true)] Counter counter)
    {
        counter = counter with { Count = counter.Count + 1 };
        return MartenOps.Store(counter);
    }

Seriously, this is the same functionality and even the same generated OpenAPI documentation. Some things to note:

  • Wolverine is able to derive much more of the OpenAPI documentation from the type signatures and from policies applied to the endpoint method, like…
  • The usage of the Document(Required = true) tells Wolverine that it will be trying to load a document of type Counter from Marten, and by default it’s going to do that through a route argument named “id”. The Required property tells Wolverine to return a 404 NotFound status code automatically if the Counter document doesn’t exist. This attribute usage also applies some OpenAPI smarts to tag the route as potentially returning a 404
  • The return value of the method is an IMartenOp “side effect” just saying “go save this document”, which Wolverine will do as part of this endpoint execution. Using the side effect makes this method a nice, simple pure function that’s completely synchronous. No wrestling with async Task, await, or schlepping around CancellationToken every which way
  • Because Wolverine can see there will not be any kind of response body, it’s going to use a 204 status code to denote the empty body and tag the OpenAPI with that as well.
  • There is absolutely zero Reflection happening at runtime because Wolverine is generating and compiling code at runtime (or ahead of time for faster cold starts) that “bakes” in all of this knowledge for fast execution
  • Wolverine + Marten has a far more robust support for multi-tenancy all the way through the technology stack than any other application framework I know of in .NET (web frameworks, mediators, or messaging libraries), and you can see that evident in the code above where Marten & Wolverine would already know how to detect tenant usage in an HTTP request and do all the wiring for you all the way through the stack so you can focus on just writing business functionality.

To make this all more concrete, here’s the generated code:

// <auto-generated/>
#pragma warning disable
using Microsoft.AspNetCore.Routing;
using System;
using System.Linq;
using Wolverine.Http;
using Wolverine.Marten.Publishing;
using Wolverine.Runtime;

namespace Internal.Generated.WolverineHandlers
{
    // START: POST_api_tenants_tenant_counters_id_inc2
    public class POST_api_tenants_tenant_counters_id_inc2 : Wolverine.Http.HttpHandler
    {
        private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
        private readonly Wolverine.Runtime.IWolverineRuntime _wolverineRuntime;
        private readonly Wolverine.Marten.Publishing.OutboxedSessionFactory _outboxedSessionFactory;

        public POST_api_tenants_tenant_counters_id_inc2(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Runtime.IWolverineRuntime wolverineRuntime, Wolverine.Marten.Publishing.OutboxedSessionFactory outboxedSessionFactory) : base(wolverineHttpOptions)
        {
            _wolverineHttpOptions = wolverineHttpOptions;
            _wolverineRuntime = wolverineRuntime;
            _outboxedSessionFactory = outboxedSessionFactory;
        }



        public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
        {
            var messageContext = new Wolverine.Runtime.MessageContext(_wolverineRuntime);
            // Building the Marten session
            await using var documentSession = _outboxedSessionFactory.OpenSession(messageContext);
            if (!System.Guid.TryParse((string)httpContext.GetRouteValue("id"), out var id))
            {
                httpContext.Response.StatusCode = 404;
                return;
            }


            var counter = await documentSession.LoadAsync<Wolverine.Http.Tests.Bugs.Counter>(id, httpContext.RequestAborted).ConfigureAwait(false);
            // 404 if this required object is null
            if (counter == null)
            {
                httpContext.Response.StatusCode = 404;
                return;
            }

            
            // The actual HTTP request handler execution
            var martenOp = Wolverine.Http.Tests.Bugs.CounterEndpoint.Increment(counter);

            if (martenOp != null)
            {
                
                // Placed by Wolverine's ISideEffect policy
                martenOp.Execute(documentSession);

            }

            
            // Commit any outstanding Marten changes
            await documentSession.SaveChangesAsync(httpContext.RequestAborted).ConfigureAwait(false);

            
            // Have to flush outgoing messages just in case Marten did nothing because of https://github.com/JasperFx/wolverine/issues/536
            await messageContext.FlushOutgoingMessagesAsync().ConfigureAwait(false);

            // Wolverine automatically sets the status code to 204 for empty responses
            if (!httpContext.Response.HasStarted) httpContext.Response.StatusCode = 204;
        }

    }

    // END: POST_api_tenants_tenant_counters_id_inc2
    
    
}

Summary

Wolverine isn’t “just another messaging library / mediator / HTTP endpoint alternative.” Rather, Wolverine is a completely different animal that, while fulfilling those application framework roles for server side .NET, potentially does a helluva lot more than older frameworks to help you write systems that are maintainable, testable, and resilient — and it does all of that with a lot less of the typical “Clean/Onion/Hexagonal Architecture” cruft that shines in software conference talks and YouTube videos but helps lead teams into a morass of unmaintainable code in larger, real world systems.

But yes, the Wolverine community needs to find a better way to communicate how Wolverine adds value above and beyond the more traditional server side application frameworks in .NET. I’m completely open to suggestions — and fully aware that some folks won’t like the “magic” in the “drank all the Wolverine Koolaid” approach I used.

You can of course use Wolverine with 100% explicit code and none of the magic.

Controlling Parallelism with Wolverine Background Processing

A couple weeks back I started a new blog series meant to explore Wolverine’s capabilities for background processing. Working in very small steps and only one new concept at a time, the first time out just showed how to set up Wolverine inside a new ASP.Net Core web api service and immediately use it for offloading some processing from HTTP endpoints to background processing by using Wolverine’s local queues and message handlers for background processing. In the follow up post, I added durability to the background processing so that our work being executed in the background would be durable even in the face of application restarts.

In this post, let’s look at how Wolverine allows you to either control the parallelism of your background processing, or restrict the processing to be strictly sequential.

To review, in previous posts we were “publishing” a SignUpRequest message from a Minimal API endpoint to Wolverine like so:

app.MapPost("/signup", (SignUpRequest request, IMessageBus bus) 
    => bus.PublishAsync(request));

In this particular case, our application has a message handler for SignUpRequest, so Wolverine’s sensible default behavior is to publish the message to a local, in memory queue where each message will be processed asynchronously in the background, on a separate thread from the original HTTP request.
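
For reference, the handler on the other end of that local queue is just a plain method. A minimal sketch (the real sign up work is elided):

public static class SignUpRequestHandler
{
    // The mere existence of this handler is what tells Wolverine
    // to route published SignUpRequest messages to a local queue
    public static void Handle(SignUpRequest request)
    {
        // Do the actual sign up work here, in the background,
        // off the original HTTP request thread
    }
}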

So far, so good? By default, each message type gets its own local, in memory queue, with a default “maximum degree of parallelism” equal to the number of detected processors (Environment.ProcessorCount). In addition, the local queues do not enforce strict ordering by default.

But now, what if you do need strict sequential ordering? Or want to restrict or expand the number of parallel messages that can be processed? Or, to get really wild, constrain some messages to running sequentially while other messages run in parallel?

First, let’s see how we could alter the parallelism of our SignUpRequest to an absurd degree and say that up to 20 messages could theoretically be processed at one time by the system. We’ll do that by breaking into the UseWolverine() configuration and adding this:

builder.Host.UseWolverine(opts =>
{
    // The other stuff...

    // Make the SignUpRequest messages be published with even 
    // more parallelization!
    opts.LocalQueueFor<SignUpRequest>()
        
        // A maximum of 20 at a time because why not!
        .MaximumParallelMessages(20);
});

Easy enough, but now let’s say that we want all logical event messages in our system to be handled in the sequential order in which our process publishes them. An easy way to do that with Wolverine is to have each event message type implement Wolverine’s IEvent marker interface like so:

public record Event1 : IEvent;
public record Event2 : IEvent;
public record Event3 : IEvent;

To be honest, the IEvent and corresponding IMessage and ICommand interfaces were originally added to Wolverine just to ease transitioning a codebase from NServiceBus to Wolverine, and those types have little actual meaning to Wolverine. The only way Wolverine even uses them is to “know” that a type is an outbound message so that it can automatically preview the message routing for that type in diagnostics.

Revisiting our UseWolverine() code block again, we’ll add that publishing rule like this:

builder.Host.UseWolverine(opts =>
{
    // Other stuff...

    opts.Publish(x =>
    {
        x.MessagesImplementing<IEvent>();
        x.ToLocalQueue("events")
            // Force every event message to be processed in the 
            // strict order they are enqueued, and one at a 
            // time
            .Sequential();
    });
});

With the code above, our application would be publishing every single message where the message type implements IEvent to that one local queue named “events” that has been configured to process messages in strict sequential order.

Summary and What’s Next

Wolverine makes it very easy to do background processing within your application, to control the desired degree of parallelism, or to make a subset of messages be processed in strict sequential order when that’s valuable instead.

To be honest, this series is what I go to when I feel like I need to write more Critter Stack content for the week, so it might be a minute or two before there’s a follow up. There’ll be at least two more posts, one on scheduling message execution and an example of using the local processing capabilities in Wolverine to implement the producer/consumer pattern.

Scaling Marten with PostgreSQL Read Replicas

JasperFx Software is open for business and offering consulting services (like helping you craft scalability strategies!) and support contracts for both Marten and Wolverine so you know you can feel secure taking a big technical bet on these tools and reap all the advantages they give for productive and maintainable server side .NET development.

First off, big thanks to Jaedyn Tonee for actually doing all the work I’m writing about here. JT recently accepted a long standing “offer” to be part of the official “Critter Stack Core Team.”

Marten 7 embraced several new-ish features in Npgsql, including the NpgsqlDataSource concept for connection management. This opened Marten up for a couple other capabilities, like integrating Marten with .NET Aspire. It also enabled Marten to utilize PostgreSQL Read Replicas for read only query usage. Read Replicas are valuable both for high availability of database systems and for offloading heavy, read intensive operations from the main database server onto the replicas.

To opt into read replicas with Marten, you need to utilize the new MultiHostNpgsqlDataSource in Npgsql and Marten as shown in this sample code:

// services is an IServiceCollection collection
services.AddMultiHostNpgsqlDataSource(ConnectionSource.ConnectionString);

services.AddMarten(x =>
    {
        // Will prefer standby nodes for querying.
        x.Advanced.MultiHostSettings.ReadSessionPreference = TargetSessionAttributes.PreferStandby;
    })
    .UseLightweightSessions()
    .UseNpgsqlDataSource();

Behind the scenes, if you are opting into this model, when you make a query with Marten like this:

       // theSession is an IDocumentSession 
       var users = await theSession
            .Query<User>()
            .Where(x => x.FirstName == "Sam")
            .ToListAsync();

Marten will be trying to connect to a PostgreSQL read replica to service that LINQ query.
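
In case you’re wondering what feeds that model, a MultiHostNpgsqlDataSource is built from a connection string that lists every host. The host names below are made up for illustration; Npgsql works out at runtime which hosts are primary versus standby, and the TargetSessionAttributes.PreferStandby setting above steers read sessions toward the standbys:

// Hypothetical host names for illustration only
var connectionString =
    "Host=db-primary,db-replica-1,db-replica-2;Database=app;Username=app;Password=secret";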

Summary

I hope this becomes an important new tool for Marten users who need to achieve both high availability and scalability in systems with bigger data loads.

Linked Lists in Real Life

I’ve been occasionally writing posts about old design patterns or techniques that are still useful despite the decades long backlash to the old “Gang of Four” book.

A linked list structure is a simple data structure where each element has a reference to the next element. At its simplest, it’s no more than this singly linked list implementation:

public abstract class Item
{
    public Item Next { get; set; }
}
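
Traversal is nothing more than chasing the Next reference until it runs out (root here is just whatever element you start from):

// Walk the chain from a starting item to the very end
for (var item = root; item != null; item = item.Next)
{
    // do something with each item
}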

Now, I got into software development through Shadow IT rather than a formal Computer Science degree, so I can’t really get into what Big O notation a linked list gives you for sorting or searching or whatnot (and don’t really care either). What I can tell you is that I have occasionally used linked list structures to great effect, with at least two samples within the greater “Critter Stack” / JasperFx codebases.

For the first example, consider the complex SQL generation from this LINQ query in Marten using Marten’s custom “Include Related Documents” feature:

        // theSession is an IDocumentSession from Marten
        var holders = await theSession.Query<TargetHolder>()
            .Include<Target>(x => x.TargetId, list, t => t.Color == Colors.Green)
            .ToListAsync();

Behind the scenes, Marten is generating this pile of lovely SQL:

drop table if exists mt_temp_id_list1; 
create temp table mt_temp_id_list1 as (select d.id, d.data from public.mt_doc_targetholder as d);
 select d.id, d.data from public.mt_doc_target as d where (d.id in (select CAST(d.data ->> 'TargetId' as uuid) from mt_temp_id_list1 as d) and CAST(d.data ->> 'Color' as integer) = $1);
  $1: 2

If you squint really hard, you can notice that Marten is actually executing four different SQL statements in one logical query. Internally, Marten is using a linked list structure to “plan” and then generate the SQL from the raw LINQ Expression tree using the Statement type, partially shown below:

public abstract partial class Statement: ISqlFragment
{
    public Statement Next { get; set; }
    public Statement Previous { get; set; }

    public StatementMode Mode { get; set; } = StatementMode.Select;

    /// <summary>
    ///     For common table expressions
    /// </summary>
    public string ExportName { get; protected internal set; }

    public bool SingleValue { get; set; }
    public bool ReturnDefaultWhenEmpty { get; set; }
    public bool CanBeMultiples { get; set; }

    public void Apply(ICommandBuilder builder)
    {
        configure(builder);
        if (Next != null)
        {
            if (Mode == StatementMode.Select)
            {
                builder.StartNewCommand();
            }

            builder.Append(" ");
            Next.Apply(builder);
        }
    }

    public void InsertAfter(Statement descendent)
    {
        if (Next != null)
        {
            Next.Previous = descendent;
            descendent.Next = Next;
        }

        if (object.ReferenceEquals(this, descendent))
        {
            throw new InvalidOperationException(
                "Whoa pardner, you cannot set Next to yourself, that's a stack overflow!");
        }

        Next = descendent;
        descendent.Previous = this;
    }

    /// <summary>
    ///     Place the descendent at the very end
    /// </summary>
    /// <param name="descendent"></param>
    public void AddToEnd(Statement descendent)
    {
        if (Next != null)
        {
            Next.AddToEnd(descendent);
        }
        else
        {
            if (object.ReferenceEquals(this, descendent))
            {
                return;
            }

            Next = descendent;
            descendent.Previous = this;
        }
    }

    public void InsertBefore(Statement antecedent)
    {
        if (Previous != null)
        {
            Previous.Next = antecedent;
            antecedent.Previous = Previous;
        }

        antecedent.Next = this;
        Previous = antecedent;
    }

    public Statement Top()
    {
        return Previous == null ? this : Previous.Top();
    }

    // Find the selector statement at the very end of 
    // the linked list
    public SelectorStatement SelectorStatement()
    {
        return (Next == null ? this : Next.SelectorStatement()).As<SelectorStatement>();
    }

    // And a some other stuff...
}

The Statement model is a “doubly linked list,” meaning that each element is aware of both its direct ancestor (Previous) and its next descendent (Next). With the Statement model, I’d like to call out a couple wrinkles that made the linked list strategy a great fit for complex SQL generation.

First off, the SQL generation itself frequently requires multiple statements from the top level statement down to the very last statement, and you can see that happening in the Apply() method above that writes out the SQL for the current Statement, then calls Next.Apply() all the way down to the end of the chain — all while helping Marten’s (really Weasel’s) batch SQL generation “know” when it should start a new command (see NpgsqlBatch for a little more context on what I mean there).

Also note all the methods for inserting a new Statement directly before or after the current Statement. Linked lists are perfect for when you frequently need to insert new elements before or after another element, or even at the very end of the chain, rather than at a known index. (For more context, if you’ve lived a more charmed life and have never run across them, here’s an explanation of Common Table Expressions from PostgreSQL — they’re commonly supported in other databases as well.) Here’s an example from Marten that kicks in sometimes when a user uses the LINQ Distinct() operator:

internal class DistinctSelectionStatement: Statement
{
    public DistinctSelectionStatement(SelectorStatement parent, ICountClause selectClause, IMartenSession session)
    {
        parent.ConvertToCommonTableExpression(session);

        ConvertToCommonTableExpression(session);

        parent.InsertAfter(this);

        selectClause.OverrideFromObject(this);
        var selector = new SelectorStatement { SelectClause = selectClause };
        InsertAfter(selector);
    }

    protected override void configure(ICommandBuilder sql)
    {
        startCommonTableExpression(sql);
        sql.Append("select distinct(data) from ");
        sql.Append(Previous.ExportName);
        sql.Append(" as d");
        endCommonTableExpression(sql);
    }
}

When Marten has to apply this DistinctSelectionStatement, it modifies its immediate parent statement that probably started out life as a simple select some_field from some_table query into a common table expression query, and appends the DistinctSelectionStatement behind its parent to do the actual SQL distinct mechanics.

In this particular case of the SQL generation, it’s been frequently necessary for a Statement to “know” about its immediate ancestor, as you can see in the sample code above where the DistinctSelectionStatement picks off the ExportName (the CTE name of the preceding statement) to generate the right SQL in its configure() method.

For another example, both Marten and Wolverine use a runtime code generation model to build a lot of their “glue” code between the framework and the user’s application code (you can see an example of that runtime code generation in this post). One of the core conceptual abstractions in the shared code generation model is the abstract Frame class, which roughly equates to a logical step in the generated code — usually just a single line of code. During the code generation process, the Frame objects are assembled into a singly linked list structure so that each Frame knows about the next Frame in line.

When actually writing out the generated source code, a typical Frame will write its code, then tell the next Frame to write out its code and so on — as shown by this sample class:

// This is from Wolverine, and weaves in code
// to add selected tags from incoming messages to
// message handlers into the current Open Telemetry Activity
public class AuditToActivityFrame : SyncFrame
{
    private readonly Type _inputType;
    private readonly List<AuditedMember> _members;
    private Variable? _input;

    public AuditToActivityFrame(IChain chain)
    {
        _inputType = chain.InputType()!;
        _members = chain.AuditedMembers;
    }

    public override IEnumerable<Variable> FindVariables(IMethodVariables chain)
    {
        _input = chain.FindVariable(_inputType);
        yield return _input;
    }

    public override void GenerateCode(GeneratedMethod method, ISourceWriter writer)
    {
        writer.WriteComment("Application-specific Open Telemetry auditing");
        foreach (var member in _members)
            writer.WriteLine(
                $"{typeof(Activity).FullNameInCode()}.{nameof(Activity.Current)}?.{nameof(Activity.SetTag)}(\"{member.OpenTelemetryName}\", {_input!.Usage}.{member.Member.Name});");

        // Tell the next frame to write its code too!
        Next?.GenerateCode(method, writer);
    }
}

Where the linked list structure really comes into play with the source generation is when you need to wrap the inner Next Frame with some kind of coding construct like a using block or a try/finally block maybe. Here’s an example of doing just that where the following CatchStreamCollisionFrame places a try/catch block around the code generated by the Next frame (and all of the other frames after the Next frame as well):

// This is the actual middleware that's injecting some code
// into the runtime code generation
internal class CatchStreamCollisionFrame : AsyncFrame
{
    public override void GenerateCode(GeneratedMethod method, ISourceWriter writer)
    {
        writer.WriteComment("Catches any existing stream id collision exceptions");
        writer.Write("BLOCK:try");
        
        // Write the inner code here
        Next?.GenerateCode(method, writer);
        
        writer.FinishBlock();
        writer.Write($@"
BLOCK:catch({typeof(ExistingStreamIdCollisionException).FullNameInCode()} e)
await {typeof(StreamCollisionExceptionPolicy).FullNameInCode()}.{nameof(StreamCollisionExceptionPolicy.RespondWithProblemDetails)}(e, httpContext);
return;
END

");
    }
}

The ugly generated code above will catch a Marten exception named ExistingStreamIdCollisionException in the generated code for a Wolverine.HTTP endpoint and return a ProblemDetails result explaining the problem and an HTTP status code of 400 (Invalid) instead of letting the exception bubble up.

By having the linked list structure where each Frame is aware of the next Frame, it makes it relatively easy to generate code when you need to wrap the inner code in some kind of C# block structure.
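
For a rough idea of the output, the generated code from that frame takes this general shape — this is a hand-written approximation for illustration, not actual Wolverine output:

// Approximate shape of the code CatchStreamCollisionFrame emits
try
{
    // ... code generated by the inner frames goes here ...
}
catch (ExistingStreamIdCollisionException e)
{
    await StreamCollisionExceptionPolicy.RespondWithProblemDetails(e, httpContext);
    return;
}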

Summary

I spent a lot of time as a kid helping my Dad on his construction crew and helping my grandfather on his farm (you can’t possibly imagine how often farming equipment breaks). Both of them obviously had pretty large toolboxes — and there are some tools that don’t come out very often, but man, you were glad you had them when you did need them. The average developer probably isn’t going to use linked lists by hand very often, but I’ve found them to be very helpful when you need to model a problem as outer/inner handlers or ancestors/descendants. Linked lists are also great when you need to easily insert items into the greater collection relative to another item.

Anyway, I dunno if these examples are too involved or too specious, but those are the only two times I’ve used a linked list in the past decade.

I hope it’s obvious, but the JasperFx Software logo is meant to be a tractor tire around a cattle brand. The company name and logo is a little tribute to my family heritage, such as it is:)

Recent Marten & Wolverine Improvements and Roadmap Update

I’d love any feedback on any of this, of course. Here’s something I wrote yesterday in a survey of sorts about the commercial product ideas covered below (which is partially a response to a recent query wanting to know how Marten stacks up against AxonIQ from the JVM world):

There’s definitely an opportunity for a full blown Event Driven Architecture stack in the .NET ecosystem – and frankly, Jeremy really wants the Critter Stack to grow into the very best Event Driven Architecture toolset on the planet to the point where shops will purposely adopt .NET just because of the Critter Stack

I’m honestly just thinking out loud in this post, but a lot has been released for both Marten and Wolverine since the big Marten 7.0 release and the last time I published a roadmap update for the two big toolsets.

Here’s some recent highlights you might have missed from the past two months:

What’s Next?

Getting Marten 7.0 and the accompanying Wolverine 2.0 release across the finish line enabled a flurry of follow up features over the past two months — largely driven by a couple of JasperFx Software client projects (yeah!). Moving forward, I think these are the remaining strategic features that will hopefully go in soon:

  • Marten will get the ability to use PostgreSQL read replicas for read-only queries very soon as a way to scale applications
  • A new, alternative “Quick Append Event” workflow to Marten. The current internal mechanism in Marten for appending events is built for maximal support for “Inline” projections. This new model would simplify the runtime mechanics for appending events and hopefully make the Marten event store more robust in the face of concurrent requests than it is today. This model should also allow for faster performance if users opt into this mechanism.
  • Some ability to efficiently raise or append events (or side effects of some sort) from running projections. This has been in the backlog for a long time. I’d certainly feel better about this if we had some concrete use cases that folks want to do here. The “Quick Append Event” workflow would be a prerequisite
  • Using PostgreSQL partitioning on the Marten streams and events tables. This is the oldest item in the Marten backlog that’s been kicked down the road forever, but I think this is potentially huge for Marten scalability. This would probably be done in conjunction with some tactical improvements to the Marten projection model and the Wolverine aggregate handler workflow to make the archiving more accessible. The biggest issue has always been in how to handle the database migration model for this feature to convert brownfield applications
  • Wolverine 3.0
    • Try to eliminate the hard dependency on Lamar as the IoC container for Wolverine. Most people don’t care, but the folks who do care really don’t like it. So far, my research suggests the answer is going to be supporting the built in .NET DI container or Lamar with the current Wolverine model — and we can maybe think about supporting other IoC containers with a step back from the runtime optimizations that Wolverine can do today with Lamar. I think it’s quickly coming to the point where all other IoC libraries besides the built in ServiceProvider container from Microsoft will die off — even though there are still plenty of areas where that container is lacking compared to the alternatives. Alas.
    • Try to apply the Wolverine error handling policies that today only work for Wolverine message handlers to HTTP endpoints

Critter Stack Pro

The Marten & Wolverine community is helping Babu, Jeffry Gonzalez & I brainstorm ideas for the future “Critter Stack Pro” suite of commercial add on tools. The goal is to (make money) make the “Critter Stack” be much more manageable in production environments, help troubleshoot production support issues, heal the system from runtime problems, and understand resource utilization. We don’t have the exact roadmap or exact technical approach locked down yet.

Right now that looks like:

  • A headless library to better distribute Marten projections and subscriptions across a running cluster of processes. This is “ready for testing” by a JasperFx customer
  • A management console application that will be offered both as an ASP.Net Core add on library for a single application or distributed as a standalone Docker image for managing multiple systems from one console
    • Analyze system configuration
    • Manage Wolverine’s “dead letter queue” for messages, including the ability to replay messages
    • Some integration with Open Telemetry and metrics data emitted from Marten and/or Wolverine applications, probably at a summary level with navigation to the “real” observability platform (Prometheus? Grafana? Something totally different?)
    • Management for Marten Asynchronous Projections and Subscriptions
      • Performance information
      • Triggering rebuilds or replays
      • Pausing/restarting projections or subscriptions
    • Tenant Management
      • Dynamically add or remove tenant databases
      • Pause tenants
      • Understand resource utilization and performance on a tenant by tenant basis
    • Marten Event Store Explorer — and we’re collecting several ideas for this
    • Wolverine Message Processing Explorer — ditto
    • Wolverine Scheduled Message Dashboard

My fervent hope is that this tooling will be demonstrable for friendly early adopters at the end of the 2nd quarter, and looking good in the 4th quarter to try to make a serious push for sales in the all important 1st quarter of next year.

And Beyond!

I’m very interested in porting just the event store functionality from Marten to a new library targeting SQL Server as the backing store. The goal here would be to give it the same Wolverine support as the existing Marten functionality. This is pending some of the Marten projection model stabilization described above.

Maybe adding CosmosDb and/or DynamoDb support to Wolverine.

And who knows? It’s likely something I’m not even aware of now will be the highest priority in the 3rd and 4th quarters!

Critter Stack Improvements for Event Driven Architecture

JasperFx Software is open for business and offering consulting services (like helping you craft modular monolith strategies!) and support contracts for both Marten and Wolverine so you know you can feel secure taking a big technical bet on these tools and reap all the advantages they give for productive and maintainable server side .NET development.

As a follow on post from First Class Event Subscriptions in Marten last week, let’s introduce Wolverine into the mix for end to end Event Driven Architecture approaches. Using Wolverine’s new Event Subscriptions model, “Critter Stack” systems can automatically process Marten event data with Wolverine message handlers:

If all we want to do is publish Marten event data through Wolverine’s message publishing (which remember, can be either to local queues or external message brokers), we have this simple recipe:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Services
            .AddMarten()
            
            // Just pulling the connection information from 
            // the IoC container at runtime.
            .UseNpgsqlDataSource()
            
            // You don't absolutely have to have the Wolverine
            // integration active here for subscriptions, but it's
            // more than likely that you will want this anyway
            .IntegrateWithWolverine()
            
            // The Marten async daemon must be active
            .AddAsyncDaemon(DaemonMode.HotCold)
            
            // This would attempt to publish every non-archived event
            // from Marten to Wolverine subscribers
            .PublishEventsToWolverine("Everything")
            
            // You wouldn't do this *and* the above option, but just to show
            // the filtering
            .PublishEventsToWolverine("Orders", relay =>
            {
                // Filtering 
                relay.FilterIncomingEventsOnStreamType(typeof(Order));

                // Optionally, tell Marten to only subscribe to new
                // events whenever this subscription is first activated
                relay.Options.SubscribeFromPresent();
            });
    }).StartAsync();

First off, what’s a “subscriber”? That would be any event type that Wolverine recognizes as having:

  • A local message handler in the application for the specific event type, which would effectively direct Wolverine to publish the event data to a local queue
  • A local message handler in the application for the specific IEvent<T> type, which would effectively direct Wolverine to publish the event with its IEvent Marten metadata wrapper to a local queue
  • Any event type where Wolverine can discover subscribers through routing rules

All the Wolverine subscription is doing is effectively calling IMessageBus.PublishAsync() against the event data or the IEvent<T> wrapper. You can make the subscription run more efficiently by applying event or stream type filters for the subscription.

If you need to transform the raw IEvent<T> or the internal event type to some kind of external event type for publishing to external systems — say, when you want to avoid directly coupling other subscribers to your system’s internals — you can accomplish that by building a message handler that does the transformation and publishes a cascading message like so:

public record OrderCreated(string OrderNumber, Guid CustomerId);

// I wouldn't use this kind of suffix in real life, but it helps
// document *what* this is for the sample in the docs:)
public record OrderCreatedIntegrationEvent(string OrderNumber, string CustomerName, DateTimeOffset Timestamp);

// We're going to use the Marten IEvent metadata and some other Marten reference
// data to transform the internal OrderCreated event
// to an OrderCreatedIntegrationEvent that will be more appropriate for publishing to
// external systems
public static class InternalOrderCreatedHandler
{
    public static Task<Customer?> LoadAsync(IEvent<OrderCreated> e, IQuerySession session,
        CancellationToken cancellationToken)
        => session.LoadAsync<Customer>(e.Data.CustomerId, cancellationToken);
    
    
    public static OrderCreatedIntegrationEvent Handle(IEvent<OrderCreated> e, Customer customer)
    {
        return new OrderCreatedIntegrationEvent(e.Data.OrderNumber, customer.Name, e.Timestamp);
    }
}

Process Events as Messages in Strict Order

In some cases you may want the events to be executed by Wolverine message handlers in strict order. You can do exactly that with the recipe below:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Services
            .AddMarten(o =>
            {
                // This is the default setting, but just showing
                // you that Wolverine subscriptions will be able
                // to skip over messages that fail without
                // shutting down the subscription
                o.Projections.Errors.SkipApplyErrors = true;
            })

            // Just pulling the connection information from 
            // the IoC container at runtime.
            .UseNpgsqlDataSource()

            // You don't absolutely have to have the Wolverine
            // integration active here for subscriptions, but it's
            // more than likely that you will want this anyway
            .IntegrateWithWolverine()
            
            // The Marten async daemon must be active
            .AddAsyncDaemon(DaemonMode.HotCold)
            
            // Notice the allow list filtering of event types and the possibility of overriding
            // the starting point for this subscription at runtime
            .ProcessEventsWithWolverineHandlersInStrictOrder("Orders", o =>
            {
                // It's more important to create an allow list of event types that can be processed
                o.IncludeType<OrderCreated>();

                // Optionally mark the subscription as only starting from events from a certain time
                o.Options.SubscribeFromTime(new DateTimeOffset(new DateTime(2023, 12, 1)));
            });
    }).StartAsync();

In this recipe, Marten & Wolverine are working together to call IMessageBus.InvokeAsync() on each event in order. You can use either the actual event type (OrderCreated) or the wrapped Marten event type (IEvent<OrderCreated>) as the message type for your message handler.
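
As a minimal sketch, a handler for the subscription above could be as simple as this (taking IEvent<OrderCreated> instead would also work and exposes the Marten event metadata):

public static class OrderCreatedHandler
{
    public static void Handle(OrderCreated e)
    {
        // React to the new order here, in exactly the order the
        // events were appended to Marten
    }
}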

In the case of exceptions from processing the event with Wolverine:

  1. Any built in “retry” error handling will kick in to retry the event processing inline
  2. If the retries are exhausted, and the Marten setting for StoreOptions.Projections.Errors.SkipApplyErrors is true, Wolverine will persist the event to its PostgreSQL backed dead letter queue and proceed to the next event. This setting is the default with Marten when the daemon is running continuously in the background, but false in rebuilds or replays
  3. If the retries are exhausted, and SkipApplyErrors = false, Wolverine will tell Marten to pause the subscription at the last event sequence that succeeded
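
The retries in step 1 come from Wolverine’s normal error handling policies. As a hedged sketch, assuming the fluent error handling API and NpgsqlException as the transient failure to guard against, that configuration could look something like this:

builder.Host.UseWolverine(opts =>
{
    // Retry a failing event a few times with increasing cooldowns
    // before Wolverine either dead-letters it or pauses the
    // subscription per the SkipApplyErrors behavior above
    opts.OnException<NpgsqlException>()
        .RetryWithCooldown(50.Milliseconds(), 100.Milliseconds(), 250.Milliseconds());
});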

Custom, Batched Subscriptions

The base type for all Wolverine subscriptions is the Wolverine.Marten.Subscriptions.BatchSubscription class. If you need to do something completely custom, or just to take action on a batch of events at one time, subclass that type. Here is an example usage where I’m using event carried state transfer to publish batches of reference data about customers being activated or deactivated within our system:

public record CompanyActivated(string Name);

public record CompanyDeactivated();

public record NewCompany(Guid Id, string Name);

// Message type we're going to publish to external
// systems to keep them up to date on new companies
public class CompanyActivations
{
    public List<NewCompany> Additions { get; set; } = new();
    public List<Guid> Removals { get; set; } = new();

    public void Add(Guid companyId, string name)
    {
        Removals.Remove(companyId);
        
        // Fill is an extension method in JasperFx.Core that adds the 
        // record to a list if the value does not already exist
        Additions.Fill(new NewCompany(companyId, name));
    }

    public void Remove(Guid companyId)
    {
        Removals.Fill(companyId);

        Additions.RemoveAll(x => x.Id == companyId);
    }
}

public class CompanyTransferSubscription : BatchSubscription
{
    public CompanyTransferSubscription() : base("CompanyTransfer")
    {
        IncludeType<CompanyActivated>();
        IncludeType<CompanyDeactivated>();
    }

    public override async Task ProcessEventsAsync(EventRange page, ISubscriptionController controller, IDocumentOperations operations,
        IMessageBus bus, CancellationToken cancellationToken)
    {
        var activations = new CompanyActivations();
        foreach (var e in page.Events)
        {
            switch (e)
            {
                // In all cases, I'm assuming that the Marten stream id is the identifier for a customer
                case IEvent<CompanyActivated> activated:
                    activations.Add(activated.StreamId, activated.Data.Name);
                    break;
                case IEvent<CompanyDeactivated> deactivated:
                    activations.Remove(deactivated.StreamId);
                    break;
            }
        }
        
        // At the end of all of this, publish a single message
        // In case you're wondering, this will opt into Wolverine's
        // transactional outbox with the same transaction as any changes
        // made by Marten's IDocumentOperations passed in, including Marten's
        // own work to track the progression of this subscription
        await bus.PublishAsync(activations);
    }
}

And the related code to register this subscription:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseRabbitMq(); 
        
        // There needs to be *some* kind of subscriber for CompanyActivations
        // for this to work at all
        opts.PublishMessage<CompanyActivations>()
            .ToRabbitExchange("activations");
        
        opts.Services
            .AddMarten()

            // Just pulling the connection information from 
            // the IoC container at runtime.
            .UseNpgsqlDataSource()
            
            .IntegrateWithWolverine()
            
            // The Marten async daemon most be active
            .AddAsyncDaemon(DaemonMode.HotCold)

            // Register the new subscription
            .SubscribeToEvents(new CompanyTransferSubscription());
    }).StartAsync();

Summary

The feature set shown here has been a very long planned set of capabilities to truly extend the “Critter Stack” into the realm of supporting Event Driven Architecture approaches from soup to nuts. Using the Wolverine subscriptions automatically gets you support to publish Marten events to any transport supported by Wolverine itself, and does so in a much more robust way than you can easily roll by hand like folks did previously with Marten’s IProjection interface. I’m currently helping a JasperFx Software client utilize this functionality for data exchange that has strict ordering and at least once delivery guarantees.

Marten, PostgreSQL, and .NET Aspire walk into a bar…

This is somewhat a follow up from yesterday’s post on Marten, Metrics, and Open Telemetry Support. I was very hopeful about the now defunct Project Tye, and have been curious about .NET Aspire as a more ambitious offering since it was announced. As part of the massive Marten V7 release, we took some steps to ensure that Marten could use PostgreSQL databases controlled by .NET Aspire.

I finally got a chance to put together a sample Marten system using .NET Aspire called MartenWithProjectAspire on GitHub. Simplified from some longstanding Marten test projects, consider this little system:

At runtime, the EventPublisher service continuously appends events that represent progress in a Trip event stream to the Marten-ized PostgreSQL database. The TripBuildingService runs Marten’s async daemon subsystem, which constantly reads new events from the PostgreSQL database and builds or updates projected documents back to the database to represent the current state of the event store.

The end result was a single project named AspireHost that, when executed, will use .NET Aspire to start a new PostgreSQL Docker container, then start up the EventPublisher and TripBuildingService services while passing the connection string for the new PostgreSQL database to these services at runtime with a little bit of environment variable sleight of hand.

You can see the various projects and containers from the Aspire dashboard, and even see some of the Open Telemetry activity traced by Marten and visualized through Aspire.

Honestly, it took me a bit of trial and error to get this all working together. First, we need to configure Marten to use an NpgsqlDataSource connection to the PostgreSQL database that will be loaded from each service’s IoC container — then tell Marten to use that NpgsqlDataSource.

After adding Nuget references for Aspire.Npgsql and Marten itself, I added the second line of code shown below to the top of the Program file for both services using Marten:

var builder = Host.CreateApplicationBuilder();

// Register the NpgsqlDataSource in the IoC container using
// connection string named "marten" from IConfiguration
builder.AddNpgsqlDataSource("marten");

That’s really just a hook to add a registration for the NpgsqlDataSource type to the application’s IoC container with the expectation that the connection string will be in the application’s configuration connection string collection with the key “marten.”
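
Outside of Aspire, that connection string would normally live in appsettings.json, something like this (values made up for illustration). When the Aspire host launches the service, it instead supplies the same value through a ConnectionStrings__marten environment variable:

{
  "ConnectionStrings": {
    "marten": "Host=localhost;Port=5432;Database=postgres;Username=postgres;Password=postgres"
  }
}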

One of the major efforts in Marten 7 was rewiring Marten’s internals (and then Wolverine’s) to strictly use the new NpgsqlDataSource concept for database connectivity. If you maybe caught me being less than polite about Npgsql on what’s left of Twitter, just know that the process was very painful, but it’s completely done now and working well outside of the absurd noisiness of the built-in Npgsql logging.

Next, I had to explicitly tell Marten itself to load the NpgsqlDataSource object from the application’s IoC container instead of using the older, idiomatic approach of passing a connection string directly to Marten, as shown below:

builder.Services.AddMarten(opts =>
    {
        // Other configuration, but *not* the connection
    })
    
    // Use PostgreSQL data source from the IoC container
    .UseNpgsqlDataSource();

Now, switching to the AspireHost, I needed to add a NuGet reference to Aspire.Hosting.PostgreSQL in order to be able to bootstrap the PostgreSQL database at runtime. I also made project references from AspireHost to EventPublisher and TripBuildingService, which is important because Aspire does some source generation to build a strongly typed enumeration representing your projects that we’ll use next. That last step confused me when I was first playing with Aspire, so hopefully now you get to bypass that confusion. Maybe.

In the Program file for AspireHost, it’s just this:

var builder = DistributedApplication.CreateBuilder(args);

var postgresdb = builder.AddPostgres("marten");

builder.AddProject<Projects.EventPublisher>("publisher")
    .WithReference(postgresdb);

builder.AddProject<Projects.TripBuildingService>("trip-building")
    .WithReference(postgresdb);

builder.Build().Run();

Now, run the AspireHost project, and it spins up the two other services along with the newly activated PostgreSQL Docker container, which you can see from the Docker Desktop dashboard:

Ship it!

Summary

Is .NET Aspire actually useful (I think so, even if it’s maybe just for local development and testing)? Can I explain the new Open Telemetry data exported from Marten? Would I use this instead of a dirt simple Docker Compose file like I do today (probably not, to be honest)? Is this all fake?

All these questions and more will be somewhat addressed tomorrow-ish when I try to launch a new YouTube channel for JasperFx Software using the sample from this blog post as the subject for my first ever solo YouTube video.

One more thing…

I did alter the launchSettings.json file of the Aspire host project so that it didn’t need to care about HTTPS:

{
  "$schema": "https://json.schemastore.org/launchsettings.json",
  "profiles": {
    "http": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "applicationUrl": "http://localhost:15242",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        "DOTNET_ENVIRONMENT": "Development",
        "DOTNET_DASHBOARD_OTLP_ENDPOINT_URL": "http://localhost:19076",
        "DOTNET_RESOURCE_SERVICE_ENDPOINT_URL": "http://localhost:20101",
        "ASPIRE_ALLOW_UNSECURED_TRANSPORT": "true"
      }
    }
  }
}

Note the usage of the ASPIRE_ALLOW_UNSECURED_TRANSPORT environment variable.

Marten, Metrics, and Open Telemetry Support

Marten 7.10 was released today, and it (finally) brings some built-in support for monitoring Marten performance by exporting Open Telemetry tracing and metrics about Marten activity and performance within your system.

To use a little example, there’s a sample application in the Marten codebase called EventPublisher that we use to manually test some of the command line tooling. All that EventPublisher does is continuously publish randomized events to a Marten event store while it runs. That made it a good place to start as a test harness for our new Open Telemetry support and performance-related metrics.

For testing, I was just using the .NET Aspire dashboard for viewing metrics and Open Telemetry tracing. First off, I enabled the “opt in” connection tracing for Marten and put it into a verbose mode that’s probably only suitable for debugging or performance optimization work:

        // builder is a HostApplicationBuilder object
        builder.Services.AddMarten(opts =>
        {
            // Other Marten configuration...

            // Turn on Otel tracing for connection activity, and
            // also tag events to each span for all the Marten "write"
            // operations
            opts.OpenTelemetry.TrackConnections = TrackLevel.Verbose;

            // This opts into exporting a counter just on the number
            // of events being appended. Kinda a duplication
            opts.OpenTelemetry.TrackEventCounters();
        });

That’s just the Marten side of things, so to hook up an Open Telemetry exporter for the Aspire dashboard, I added (really copy/pasted) this code (note that you’ll need the OpenTelemetry.Extensions.Hosting and OpenTelemetry.Exporter.OpenTelemetryProtocol NuGet packages added to your project):

        builder.Logging.AddOpenTelemetry(logging =>
        {
            logging.IncludeFormattedMessage = true;
            logging.IncludeScopes = true;
        });

        builder.Services.AddOpenTelemetry()
            .WithMetrics(metrics =>
            {
                metrics
                    .AddRuntimeInstrumentation().AddMeter("EventPublisher");
            })
            .WithTracing(tracing =>
            {
                tracing.AddAspNetCoreInstrumentation()
                    .AddHttpClientInstrumentation();
            });

        // UseOtlpExporter() picks up the OTEL_EXPORTER_OTLP_ENDPOINT
        // configuration value on its own to find the Aspire dashboard
        builder.Services.AddOpenTelemetry().UseOtlpExporter();

        builder.Services.AddOpenTelemetry()
            // Enable exports of Open Telemetry activities
            .WithTracing(tracing =>
            {
                tracing.AddSource("Marten");
            })
            
            // Enable exports of metrics
            .WithMetrics(metrics =>
            {
                metrics.AddMeter("Marten");
            });

And now after running that with Aspire, you can see the output:

By themselves, these spans, especially when shown in the context of being nested within an HTTP request or a message being handled in a service bus framework, can point out where you may have performance issues from chattiness between the application server and the database, which I have found to be a very common source of performance problems out in the real world.

This is an opt-in mode, but there are also metrics and Open Telemetry tracing that are automatic for the background async daemon subsystem. Skipping ahead a little bit, here’s a preview of some performance metrics in a related application that show the “health” of a projection running in Marten’s async daemon subsystem by visualizing the “gap” between the projection’s current progression and the “high water mark” of Marten’s event store (how far along the projection is sequentially compared to how far the whole event store itself is):

Summary

This is a short blog post, but I hope it’s enough to demonstrate how useful this new tracing is going to be in the new world order of Open Telemetry tooling.