Building a Critter Stack Application: Dealing with Concurrency

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change. We’re also certainly open to help you succeed with your software development projects on a consulting basis whether you’re using any part of the Critter Stack or any other .NET server side tooling.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency (this post)
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

Last time out we talked about using Marten’s projection data in the context of building query services inside our CQRS architecture for our new incident tracking help desk application. Today I want to talk about how to protect our systems from concurrency and ordering issues when we start to have more users or subsystems trying to access and even modify the same incidents.

Imagine some of these unfortunately likely scenarios:

  1. A user gets impatient with our user interface and clicks on a button multiple times which sends multiple requests to our back end to add the same note to an incident
  2. A technician pulls up the incident details for something new, but then gets called away (or goes to lunch). A second technician pulls up the incident and carries out some actions to change the category or priority. The first technician comes back to their desk and tries to change the priority of the incident based on the stale data about that incident they already had open on their screen
  3. Later on, we may have several automated workflows happening that could conceivably try to change an incident simultaneously. In this case it might be important that actions involving an incident only happen one at a time to prevent inconsistent system state

In later posts I’ll talk about how Wolverine works with Marten to make your system much more robust in the face of concurrency issues while still keeping your code low ceremony. Today, though, I strictly want to talk about Marten’s built in protections for concurrency before getting fancy.

To review from a couple posts ago when I introduced Wolverine command handlers, we had this code to process a CategoriseIncident in our system:

    public static async Task Handle(
        CategoriseIncident command, 
        IDocumentSession session, 
        CancellationToken cancellationToken)
    {
        // Find the existing state of the referenced Incident
        var existing = await session
            .Events
            .AggregateStreamAsync<IncidentDetails>(command.Id, token: cancellationToken);

        // Don't worry, we're going to clean this up later
        if (existing == null)
        {
            throw new ArgumentOutOfRangeException(nameof(command), "Unknown incident id " + command.Id);
        }
        
        // We need to validate whether this command actually 
        // should do anything
        if (existing.Category != command.Category)
        {
            var categorised = new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };

            session.Events.Append(command.Id, categorised);
            await session.SaveChangesAsync(cancellationToken);
        }
    }

I’m going to change this handler to introduce some concurrency protection against the single incident referred to by the CategoriseIncident command. To do that, I’m going to use Marten’s FetchForWriting() API that we introduced specifically to make Marten easier to use within CQRS command handling and rewrite this handler to use optimistic concurrency protections:

    public static async Task Handle(
        CategoriseIncident command, 
        IDocumentSession session, 
        CancellationToken cancellationToken)
    {
        // Find the existing state of the referenced Incident
        // but also set Marten up for optimistic version checking on
        // the incident upon the call to SaveChangesAsync()
        var stream = await session
            .Events
            .FetchForWriting<IncidentDetails>(command.Id, cancellationToken);

        // Don't worry, we're going to clean this up later
        if (stream.Aggregate == null)
        {
            throw new ArgumentOutOfRangeException(nameof(command), "Unknown incident id " + command.Id);
        }
        
        // We need to validate whether this command actually 
        // should do anything
        if (stream.Aggregate.Category != command.Category)
        {
            var categorised = new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };

            stream.AppendOne(categorised);
            
            // This call may throw a ConcurrencyException!
            await session.SaveChangesAsync(cancellationToken);
        }
    }

Notice the call to FetchForWriting(). That loads the current IncidentDetails aggregate data for our incident event stream. Under the covers, Marten is also loading the current revision number for that incident event stream and tracking that. When the IDocumentSession.SaveChangesAsync() is called, it will attempt to append the new event(s) to the incident event stream, but this operation will throw a Marten ConcurrencyException and roll back the underlying database transaction if the incident event stream has been revisioned between the call to FetchForWriting() and SaveChangesAsync().

Do note that the call to FetchForWriting() can happily work with aggregate projections that are configured as either “live” or persisted to the database. Our strong recommendation within your command handlers where you’re appending events is to rely on this API so that you can easily change up projection lifecycles as necessary.
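To make the failure mode concrete, here’s a hedged sketch of what catching that failure could look like if you wanted to translate it into an HTTP 409 yourself. The Minimal API shape here is purely illustrative (the series so far uses MVC controllers with Wolverine), and it assumes the CategoriseIncidentHandler shown earlier:

```csharp
using Marten;
using Marten.Exceptions;

// Hypothetical endpoint that invokes the handler from the sample above
// and translates Marten's optimistic failure into a 409 Conflict
app.MapPost("/api/incidents/categorize", async (
    CategoriseIncident command,
    IDocumentSession session,
    CancellationToken ct) =>
{
    try
    {
        await CategoriseIncidentHandler.Handle(command, session, ct);
        return Results.Ok();
    }
    catch (ConcurrencyException)
    {
        // The incident stream was revisioned between FetchForWriting()
        // and SaveChangesAsync(), so ask the client to reload and retry
        return Results.Conflict("The incident was modified concurrently. Reload and try again.");
    }
});
```

Later in this series we’ll let Wolverine take over this kind of retry-or-reject decision for us.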

While this crude protection might be helpful by itself, we can go further and avoid doing work that is just going to fail anyway by telling Marten that the current command was issued assuming that the event stream is currently at an expected revision.

Just as a reminder to close the loop here, when we write the aggregated projection for the IncidentDetails document type shown below:

public record IncidentDetails(
    Guid Id,
    Guid CustomerId,
    IncidentStatus Status,
    IncidentNote[] Notes,
    IncidentCategory? Category = null,
    IncidentPriority? Priority = null,
    Guid? AgentId = null,
    
    // This is meant to be the revision number
    // of the event stream for this incident
    int Version = 1
);

Marten will “automagically” set the value of a Version property of the aggregated document to the latest revision number of the event stream. This (hopefully) makes it relatively easy for systems built with Marten to transfer the current event stream revision number to user interfaces or other clients specifically to make optimistic concurrency protection easier.

Now that our user interface “knows” what it thinks the current version of the incident data is, we’ll also transmit that version number through our command that we’re posting to the service:

public class CategoriseIncident
{
    public Guid Id { get; set; }
    public IncidentCategory Category { get; set; }

    // This is to communicate to the server that
    // this command was issued assuming that the 
    // incident is currently at this revision
    // number
    public int Version { get; set; }
}
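To close the loop between the projected document and the command, a client-side round trip might look something like the following sketch. The base address, the incidentId value, and the IncidentCategory.Database value are all made up for illustration:

```csharp
using System.Net.Http.Json;

var client = new HttpClient { BaseAddress = new Uri("http://localhost:5000") };
Guid incidentId = Guid.NewGuid(); // would be a real incident id in practice

// GET the projected IncidentDetails, which carries the stream Version
var details = await client.GetFromJsonAsync<IncidentDetails>($"/api/incidents/{incidentId}");
if (details is null) throw new InvalidOperationException("Unknown incident");

// Echo that Version back so the server can make the optimistic check
var command = new CategoriseIncident
{
    Id = details.Id,
    Category = IncidentCategory.Database, // hypothetical category value
    Version = details.Version
};

await client.PostAsJsonAsync("/api/incidents/categorize", command);
```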

We’re going to change our message handler one more time, but this time we want a little stronger concurrency protection upfront to disallow any work from proceeding if the incident has been revisioned past the version the client knew about, while still retaining the optimistic concurrency check on SaveChangesAsync(). Squint really hard at the call to FetchForWriting() where I pass in the version number from the command, as that’s the only change:

    public static async Task Handle(
        CategoriseIncident command, 
        IDocumentSession session, 
        CancellationToken cancellationToken)
    {
        // Find the existing state of the referenced Incident
        // *But*, throw a ConcurrencyException if the stream has been revisioned past
        // the expected, starting version communicated by the command
        var stream = await session
            .Events
            .FetchForWriting<IncidentDetails>(command.Id, command.Version, cancellationToken);

        // Don't worry, we're going to clean this up later
        if (stream.Aggregate == null)
        {
            throw new ArgumentOutOfRangeException(nameof(command), "Unknown incident id " + command.Id);
        }
        
        // We need to validate whether this command actually 
        // should do anything
        if (stream.Aggregate.Category != command.Category)
        {
            var categorised = new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };

            stream.AppendOne(categorised);
            
            // This call may throw a ConcurrencyException!
            await session.SaveChangesAsync(cancellationToken);
        }
    }

In the previous couple revisions, I’ve strictly used “optimistic concurrency” where you work on the assumption that it’s more likely than not okay to proceed, but update the database in some way such that it will reject the changes if the expected starting revision does not match the current revision stored in the database. Marten also has the option to use exclusive database locks where only the current transaction is allowed to edit the event stream. That usage is shown below, but yet again, just squint at the changed call to FetchForExclusiveWriting():

    public static async Task Handle(
        CategoriseIncident command, 
        IDocumentSession session, 
        CancellationToken cancellationToken)
    {

        // Careful! This will try to wait until the database can grant us exclusive
        // write access to the specific incident event stream
        var stream = await session
            .Events
            .FetchForExclusiveWriting<IncidentDetails>(command.Id, cancellationToken);

        // Don't worry, we're going to clean this up later
        if (stream.Aggregate == null)
        {
            throw new ArgumentOutOfRangeException(nameof(command), "Unknown incident id " + command.Id);
        }
        
        // We need to validate whether this command actually 
        // should do anything
        if (stream.Aggregate.Category != command.Category)
        {
            var categorised = new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };

            stream.AppendOne(categorised);
            
            await session.SaveChangesAsync(cancellationToken);
        }
    }

This approach is something I think of as a “guilty until proven innocent” tool. While it is absolutely more rigid protection against concurrent processing of commands against a single incident event stream, it comes with some drawbacks. One issue is that the exclusive lock makes your database engine work harder and use more resources. The database might also cause timeouts on the initial call to FetchForExclusiveWriting() as it has to wait for any locks held by ongoing transactions to be released. In your application you may need to handle this kind of TimeoutException differently from the optimistic ConcurrencyException (we’ll talk about that a lot more in later posts). This usage also comes with a bit of risk for deadlocks in the database.
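If you do go the exclusive lock route, you’ll likely want to react to the two failure modes differently. Here’s a rough sketch of that idea; the exact exception types surfaced for lock waits can vary by Marten and Npgsql version, so treat the catch clauses as placeholders rather than gospel:

```csharp
using Marten;
using Marten.Exceptions;

try
{
    var stream = await session
        .Events
        .FetchForExclusiveWriting<IncidentDetails>(command.Id, cancellationToken);

    if (stream.Aggregate is { } existing && existing.Category != command.Category)
    {
        stream.AppendOne(new IncidentCategorised
        {
            Category = command.Category,
            UserId = SystemId
        });

        await session.SaveChangesAsync(cancellationToken);
    }
}
catch (ConcurrencyException)
{
    // Optimistic failure at commit time: safe to reload and retry immediately
}
catch (TimeoutException)
{
    // The exclusive lock could not be acquired in time. Back off and retry
    // later instead of hammering the database while another transaction
    // holds the lock on this event stream
}
```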

Lastly, you can technically use serializable transactions with Marten to really, really force data access to be serialized on a single event stream like so:

    public static async Task Handle(
        CategoriseIncident command, 
        IDocumentStore store, 
        CancellationToken cancellationToken)
    {
        // This is your last resort approach!
        await using var session = 
            await store.LightweightSerializableSessionAsync(cancellationToken);
        
        var stream = await session
            .Events
            .FetchForWriting<IncidentDetails>(command.Id, cancellationToken);

        // Don't worry, we're going to clean this up later
        if (stream.Aggregate == null)
        {
            throw new ArgumentOutOfRangeException(nameof(command), "Unknown incident id " + command.Id);
        }
        
        // We need to validate whether this command actually 
        // should do anything
        if (stream.Aggregate.Category != command.Category)
        {
            var categorised = new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };

            stream.AppendOne(categorised);
            
            await session.SaveChangesAsync(cancellationToken);
        }
    }

But if the exclusive lock was a “guilty until proven innocent” approach, then serializable transactions, because of their even heavier overhead, are a “break glass in case of emergency” option you should keep in your back pocket until you really, really need it.

Summary and What’s Next?

In this post I introduced Marten’s built in concurrency protections for appending data to event streams. For the most part, I think you should assume the usage of optimistic concurrency as a default as that’s lighter on your PostgreSQL database. I also showed how to track the current event stream version through projections in CQRS queries where it can then be used by clients to pass the expected starting version in commands to be used for optimistic concurrency checks within our CQRS commands.

In the next post, I think I’m going to introduce Wolverine’s aggregate handler workflow with Marten as a way of making the message handler in this post much simpler and easier to test.

Building a Critter Stack Application: Web Service Query Endpoints with Marten



The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten (this post)
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Marten as Document Database
  11. Asynchronous Processing with Wolverine
  12. Durable Outbox Messaging and Why You Care!
  13. Wolverine HTTP Endpoints
  14. Easy Unit Testing with Pure Functions
  15. Vertical Slice Architecture
  16. Messaging with Rabbit MQ
  17. The “Stateful Resource” Model
  18. Resiliency

Last time out we introduced Wolverine to help us build command handlers as the “C” in the CQRS architecture. This time out, I want to turn our attention back to Marten and build out some query endpoints to get the “Q” part of CQRS going by exposing projected event data in a read-only way through HTTP web services.

When we talked before about Marten projections (read-only “projected” view representations of the source events), I mentioned that these projected views could be created with three different lifecycles:

  1. “Live” projections are built on demand based on the current event data
  2. “Inline” projections are updated at the time new events are captured such that the “read side” model is always strongly consistent with the raw event data
  3. “Async” projections are continuously built by a background process in Marten applications and give you an eventual consistency model.

Alright, so let’s talk about when you might use different lifecycles of projection creation, then we’ll move on to how that changes the mechanics of how we’ll deliver projection data through web services. Offhand, I’d recommend a decision tree something like:

  • If you want to optimize the system’s “read” performance more than the “writes”, definitely use the Inline lifecycle
  • If you want to optimize the “write” performance of event capture and also want a strongly consistent “read” model that exactly reflects the current state, choose the Live lifecycle. Know though that if you go that way, you will want to model your system in such a way that you can keep your event streams short. That choice isn’t free either, because the Live aggregation time can also negatively impact command processing time if you need to first derive the current state in order to “decide” what new events should be emitted.
  • If you want to optimize both the “read” and “write” performance, but can be a little relaxed about the read side consistency, you can opt for Async projections
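To make the decision tree above concrete, all three lifecycles are chosen at projection registration time. This is a sketch against recent Marten versions (the exact namespaces and the AddAsyncDaemon() call may differ in yours):

```csharp
using Marten;
using Marten.Events.Daemon.Resiliency;
using Marten.Events.Projections;

builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("marten"));

    // Pick exactly one lifecycle per projection:

    // 1. Live: aggregated on demand via AggregateStreamAsync(), nothing persisted
    opts.Projections.Add<IncidentDetailsProjection>(ProjectionLifecycle.Live);

    // 2. Or Inline: updated in the same transaction that appends the events
    // opts.Projections.Add<IncidentDetailsProjection>(ProjectionLifecycle.Inline);

    // 3. Or Async: built eventually by a background process
    // opts.Projections.Add<IncidentDetailsProjection>(ProjectionLifecycle.Async);
})
// Only needed for the Async lifecycle: host the projection daemon
.AddAsyncDaemon(DaemonMode.HotCold);
```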

For starters, let’s build just a simple HTTP endpoint that returns the current state for a single Incident within our new help desk system. As a quick reminder, the IncidentDetails aggregated projection we’re about to work with is built out like this:

public record IncidentDetails(
    Guid Id,
    Guid CustomerId,
    IncidentStatus Status,
    IncidentNote[] Notes,
    IncidentCategory? Category = null,
    IncidentPriority? Priority = null,
    Guid? AgentId = null,

    // Marten is going to set this for us in
    // the projection work
    int Version = 1
);

public record IncidentNote(
    IncidentNoteType Type,
    Guid From,
    string Content,
    bool VisibleToCustomer
);

public enum IncidentNoteType
{
    FromAgent,
    FromCustomer
}

// This class contains the directions for Marten about how to create the
// IncidentDetails view from the raw event data
public class IncidentDetailsProjection: SingleStreamProjection<IncidentDetails>
{
    public static IncidentDetails Create(IEvent<IncidentLogged> logged) =>
        new(logged.StreamId, logged.Data.CustomerId, IncidentStatus.Pending, Array.Empty<IncidentNote>());

    public IncidentDetails Apply(IncidentCategorised categorised, IncidentDetails current) =>
        current with { Category = categorised.Category };

    public IncidentDetails Apply(IncidentPrioritised prioritised, IncidentDetails current) =>
        current with { Priority = prioritised.Priority };

    public IncidentDetails Apply(AgentAssignedToIncident assigned, IncidentDetails current) =>
        current with { AgentId = assigned.AgentId };

    public IncidentDetails Apply(IncidentResolved resolved, IncidentDetails current) =>
        current with { Status = IncidentStatus.Resolved };

    public IncidentDetails Apply(ResolutionAcknowledgedByCustomer acknowledged, IncidentDetails current) =>
        current with { Status = IncidentStatus.ResolutionAcknowledgedByCustomer };

    public IncidentDetails Apply(IncidentClosed closed, IncidentDetails current) =>
        current with { Status = IncidentStatus.Closed };
}

I want to make sure that I draw your attention to the Version property of the IncidentDetails projected document. Marten itself has a naming convention (it can be overridden with attributes too) where it will set this member to the current stream version number when Marten builds this single stream projection. That’s going to be vital in the next post when we start introducing concurrency protections.

For right now, let’s say that we’re choosing to use the Live style. In this case, we’ll need to do the aggregation on the fly, then stream that down the HTTP body like so with MVC Core:

    [HttpGet("/api/incidents/{incidentId}")]
    public async Task<IResult> Get(Guid incidentId)
    {
        // In this case, the IncidentDetails are projected "live"
        var details = await _session.Events.AggregateStreamAsync<IncidentDetails>(incidentId);

        return details != null
            ? Results.Json(details)
            : Results.NotFound();
    }

If, however, we chose to produce the projected IncidentDetails data Inline such that the projected data is already persisted to the Marten database as a document, we’d first make this addition to the AddMarten() configuration in the application’s Program file:

builder.Services.AddMarten(opts =>
{
    // You always have to tell Marten what the connection string to the underlying
    // PostgreSQL database is, but this is the only mandatory piece of 
    // configuration
    var connectionString = builder.Configuration.GetConnectionString("marten");
    opts.Connection(connectionString);
    
    // We have to tell Marten about the projection we built in the previous post
    // so that Marten will "know" how to project events to the IncidentDetails
    // projected view
    opts.Projections.Add<IncidentDetailsProjection>(ProjectionLifecycle.Inline);
});

We could then write that web service method in our MVC Core controller as:

    [HttpGet("/api/incidents/{incidentId}")]
    public async Task<IResult> Get(Guid incidentId)
    {
        // In this case, the IncidentDetails document was already persisted "Inline"
        var details = await _session.LoadAsync<IncidentDetails>(incidentId);

        return details != null
            ? Results.Json(details)
            : Results.NotFound();
    }

One last trick for now, let’s make the web service above much faster! I’m going to add another library into the mix with this Nuget reference:

dotnet add package Marten.AspNetCore

And let’s revisit the previous web service endpoint and change it to this:

public class IncidentController : ControllerBase
{
    private readonly IDocumentSession _session;

    public IncidentController(IDocumentSession session)
    {
        _session = session;
    }
    
    [HttpGet("/api/incidents/{incidentId}")]
    public Task Get(Guid incidentId)
    {
        return _session
            .Json
            .WriteById<IncidentDetails>(incidentId, HttpContext);
    }
    
    // other methods....
}

The WriteById() usage up above is an extension method from the Marten.AspNetCore package that lets you stream raw, persisted JSON data from Marten directly to the HTTP response body in an ASP.Net Core endpoint in a very efficient way. At no point are you even bothering to instantiate an IncidentDetails object in memory just to immediately turn around and serialize it right back to the HTTP response. There’s basically no faster way to build a web service for this information.

Summary and What’s Next

In this entry we talked a little bit about the consequences of the projection lifecycle decision for your web service. We also mentioned how Marten can provide the stream version in projected documents, which will be valuable soon when we talk about concurrency. Lastly, I introduced the Marten.AspNetCore library and its extension methods to “stream” JSON data stored in PostgreSQL directly to the HTTP response in a very efficient way.

In the next post we’re going to look at Marten’s concurrency protections and discuss why you care about these abilities.

Building a Critter Stack Application: Wolverine as Mediator



The posts in this series are:

  1. Event Storming
  2. Marten as Event Store
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator (this post)
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

In the previous posts I’ve been focused on Marten as a persistence tool. Today I want to introduce Wolverine into the mix, but strictly as a “Mediator” tool within the commonly used MVC Core or Minimal API tools for web service development.

While Wolverine does much, much more than what we’re going to use today, let’s stay with the theme of keeping these posts short and just dip our toes into the Wolverine water with a simple usage.

Using our web service project from previous posts, I’m going to add a reference to the main Wolverine nuget through:

dotnet add package WolverineFx

Next, let’s add Wolverine to our application with this one line of code within our Program file:

builder.Host.UseWolverine(opts =>
{
    // We'll add more here later, but the defaults are all
    // good enough for now
});

As a quick aside, Wolverine is added directly to the IHostBuilder instead of the IServiceCollection through a “Use*()” method because it’s also quietly sliding in Lamar as the underlying IoC container. Some folks have been upset at that, so let’s be upfront about that right now. While I may talk about Lamar diagnostics as part of this series, it’s unlikely that that will ever be an issue for most users in any way. Lamar has some specific functionality that was built specifically for Wolverine and is utilized quite heavily.

This time out, let’s move into the “C(ommand)” part of our CQRS architecture and build some handling for the CategoriseIncident command we’d initially discovered in our Event Storming session:

public class CategoriseIncident
{
    public Guid Id { get; set; }
    public IncidentCategory Category { get; set; }
    public int Version { get; set; }
}

And next, let’s build our very first Wolverine message handler for this command. It will load the existing IncidentDetails for the designated incident, decide if the category is being changed, and append a new event to the event stream using Marten’s IDocumentSession service. Written purposely in an explicit, “long hand” style, that handler could look like this (in later posts we will use other Wolverine capabilities to make this code much simpler while introducing a more robust set of validations):

public static class CategoriseIncidentHandler
{
    public static async Task Handle(
        CategoriseIncident command, 
        IDocumentSession session, 
        CancellationToken cancellationToken)
    {
        // Find the existing state of the referenced Incident
        var existing = await session
            .Events
            .AggregateStreamAsync<IncidentDetails>(command.Id, token: cancellationToken);

        // Don't worry, we're going to clean this up later
        if (existing == null)
        {
            throw new ArgumentOutOfRangeException(nameof(command), "Unknown incident id " + command.Id);
        }
        
        // We need to validate whether this command actually 
        // should do anything
        if (existing.Category != command.Category)
        {
            var categorised = new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };

            session.Events.Append(command.Id, categorised);
            await session.SaveChangesAsync(cancellationToken);
        }
    }
    
    // This is kinda faked out, nothing to see here!
    public static readonly Guid SystemId = Guid.NewGuid();
}

There are a couple of things I want you to note about the handler class above:

  • We’re not going to make any kind of explicit configuration to help Wolverine discover and use that handler class. Instead, Wolverine is going to discover that within our main service assembly because it’s a public, concrete class suffixed with the name “Handler” (there are other alternatives for this discovery if you don’t like that approach).
  • Wolverine “knows” that the Handle() method is a handler for the CategoriseIncident command because the method is named “Handle” and the first argument is that command type
  • Note that this handler is a static type. It doesn’t have to be, but doing so helps Wolverine shave off some object allocations at runtime.
  • Also note that Wolverine message handlers happily support “method injection” and allow you to inject IoC service dependencies like the Marten IDocumentSession through method arguments. You can also do the more traditional .NET approach of pulling everything through a constructor and setting instance fields, but hey, why not write simpler code?
  • While it’s perfectly legal to handle multiple message types in the same handler class, I typically recommend making that a one to one relationship in most cases

And next, let’s put this into context by having an MVC Core controller expose an HTTP route for this command type, then pass the command on to Wolverine where it will mediate between the HTTP outer world and the inner world of the application services like Marten:

// I'm doing it this way for now because this is 
// a common usage, but we'll move away from 
// this later into more of a "vertical slice"
// approach of organizing code
public class IncidentController : ControllerBase
{
    [HttpPost("/api/incidents/categorize")]
    public Task Categorize(
        [FromBody] CategoriseIncident command,
        [FromServices] IMessageBus bus)

        // IMessageBus is the main entry point into
        // using Wolverine
        => bus.InvokeAsync(command);
}

Summary and Next Time

In this post we looked at the very simplest usage of Wolverine, how to integrate that into your codebase, and how to get started writing command handlers with Wolverine. What I’d like you to take away is that Wolverine is a very different animal from “IHandler of T” frameworks like MediatR, NServiceBus, MassTransit, or Brighter that require mandatory interface signatures and/or base classes. Even when writing long hand code as I did, I hope you can notice already how much lower code ceremony Wolverine requires compared to more typical .NET frameworks that solve similar problems to Wolverine.

I very purposely wrote the message handlers in a very explicit way, and left out some significant use cases like concurrency protection, user input validation, and cross-cutting concerns. I’m not 100% sure where I want to go next, but over the next week we’ll look at concurrency protection with Marten, highly efficient GET HTTP endpoints with Marten and ASP.Net Core, and start getting into Wolverine’s HTTP endpoint model.

Why, you might ask, are all the Wolverine NuGets suffixed with “Fx”? The Marten core team and some of our closest collaborators really liked the name “Wolverine” for this project and instantly came up with the project graphics, but when we tried to start publishing NuGet packages, we found out that someone is squatting on the name “Wolverine” on NuGet and we weren’t able to get the rights to that name. Rather than change course, we stubbornly went full speed ahead with the “WolverineFx” naming scheme just for the published NuGet packages.

Let’s Get Controversial for (only) a Minute!

When my wife and I watched the Silicon Valley show, I think she was bemused when I told her there was a pretty heated debate in development circles over “tabs vs spaces.”

I don’t want this to detract too much from the actual content of this series, but I have very mixed feelings about ASP.Net MVC Core as a framework and the whole idea of using a “mediator” as popularized by the MediatR library within an MVC Core application.

I’ve gone back and forth on both ASP.Net MVC in its various incarnations and also on MediatR, both alone and as a complement to MVC Core. Where I’ve landed right now is the opinion that MVC Core used by itself is a very flawed framework that can easily lead to unmaintainable code as an enterprise system grows over time, because typical interpretations of the “Clean Architecture” style in concert with MVC Core’s routing rules lead unwary developers to create bloated MVC controller classes.

While I was admittedly unimpressed with MediatR when I first encountered it on its own merits, I will happily admit that MediatR is helpful within MVC Core controllers as a way to offload operation-specific code into more manageable pieces, as opposed to the bloated controllers that frequently result from using MVC Core alone. I have since occasionally recommended the usage of MediatR within MVC Core codebases to my consulting clients as a way to help make their code easier to maintain over time.

If you’re interested, I touched on this theme somewhat in my talk A Contrarian View of Software Architecture from NDC Oslo 2023. And yes, I absolutely think you can build maintainable systems with MVC Core over time even without the MediatR crutch, but I think you have to veer away from the typical usage of MVC Core to do so and be very mindful of how you’re using the framework. In other words, MVC Core does not by itself lead teams to a “pit of success” for maintainable code in the long run. I think that MediatR or Wolverine with MVC Core can help, but I think we can do better in the long run by moving away from MVC Core.

By the time this series is over, I will be leaning very hard into organizing code in a vertical slice architecture style and seeing how to use the Critter Stack to create maintainability and testability without the typically complex “Ports and Adapter” style architecture that well meaning server side development teams have been trying to use in the past decade or two.

While I introduced Wolverine today as a “mediator” tool within MVC Core, by the time this series is done we’ll move away from MVC Core with or without MediatR or “Wolverine as MediatR” and use Wolverine’s HTTP endpoint model by itself as a simpler alternative with less code ceremony — and I’m going to try hard to make the case that that simpler model is a superior way to build systems.

Building a Critter Stack Application: Integrating Marten into Our Application

In the previous couple of posts I’ve introduced Marten as a standalone library and some of its capabilities for persisting events and creating projected views from those events within an event sourcing persistence strategy. Today I want to end the week by simply talking about how to integrate Marten into an ASP.Net Core application.

Oskar’s Introduction to Event Sourcing – Self Paced Kit has a world of information for folks getting started with event sourcing.

Let’s start a shell of a new web service project and add a Nuget reference to Marten through:

dotnet new webapi
dotnet add package Marten

If you’ll open up the Program.cs file in your new application, find this code at the top where it’s just starting to configure your application:

using Marten;
// Many other using statements

var builder = WebApplication.CreateBuilder(args);

Right underneath that (it doesn’t actually matter most times what order this all happens inside the Program code, but I’m giving Marten the seat at the head of the table so to speak), add this code:

// "AddTool()" is now the common .NET idiom
// for integrating tools into .NET applications
builder.Services.AddMarten(opts =>
{
    // You always have to tell Marten what the connection string to the underlying
    // PostgreSQL database is, but this is the only mandatory piece of 
    // configuration
    var connectionString = builder.Configuration.GetConnectionString("marten");
    opts.Connection(connectionString);
    
    // We have to tell Marten about the projection we built in the previous post
    // so that Marten will "know" how to project events to the IncidentDetails
    // projected view
    opts.Projections.Add<IncidentDetailsProjection>(ProjectionLifecycle.Inline);
})
    // This is a mild optimization
    .UseLightweightSessions();

That little bit of code is adding the necessary Marten services to your application’s underlying IoC container with the correct scoping. The main services you’ll care about are:

Service          | Description                                                                                   | Lifetime
IDocumentStore   | Root configuration of the Marten database                                                     | Singleton
IQuerySession    | Read-only subset of the IDocumentSession                                                      | Scoped
IDocumentSession | Marten's unit of work service that also exposes capabilities for querying and the event store | Scoped

Marten services in the IoC container

You can read more about the bootstrapping options in Marten in the documentation. If you’re wondering what “Lightweight Session” means to Marten, you can learn more about the different flavors of sessions in the documentation, but treat that as an advanced subject that’s not terribly relevant to this post.
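
As a quick sketch of consuming those scoped services, a purely read-only endpoint could ask for the cheaper IQuerySession rather than the full IDocumentSession (IncidentDetails here is the projected view from the previous post, and the route is made up for the sketch):

```csharp
using System;
using System.Threading.Tasks;
using Marten;
using Microsoft.AspNetCore.Mvc;

public class IncidentQueryController : ControllerBase
{
    // IQuerySession is registered with a Scoped lifetime, so ASP.Net Core
    // resolves (and disposes) one instance per HTTP request
    [HttpGet("/api/incidents/{incidentId}/summary")]
    public Task<IncidentDetails?> Get(Guid incidentId, [FromServices] IQuerySession session)
        => session.LoadAsync<IncidentDetails>(incidentId);
}
```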

And also, the usage of AddMarten() should feel familiar to .NET developers now as it follows the common idioms for integrating external tools into .NET applications through the generic host infrastructure that came with .NET Core. As a long time .NET developer, I cannot overstate how valuable I think this standardization of application bootstrapping has been for the OSS community in .NET.

Using Marten in an MVC Controller

For right now, I want to assume that many of you are already familiar with ASP.Net MVC Core, so let’s start by showing the usage of Marten within a simple controller to build the first couple of endpoints to log a new incident and fetch the current state of an incident in our new incident tracking help desk service:

public class IncidentController : ControllerBase
{
    private readonly IDocumentSession _session;

    public IncidentController(IDocumentSession session)
    {
        _session = session;
    }

    [HttpPost("/api/incidents")]
    public async Task<IResult> Log(
        [FromBody] LogIncident command
        )
    {
        var userId = currentUserId();
        var logged = new IncidentLogged(command.CustomerId, command.Contact, command.Description, userId);

        var incidentId = _session.Events.StartStream(logged).Id;
        await _session.SaveChangesAsync(HttpContext.RequestAborted);

        return Results.Created("/incidents/" + incidentId, incidentId);
    }

    [HttpGet("/api/incidents/{incidentId}")]
    public async Task<IResult> Get(Guid incidentId)
    {
        // In this case, the IncidentDetails are projected
        // "inline", meaning we can load the pre-built projected
        // view
        var details = await _session.LoadAsync<IncidentDetails>(incidentId);

        return details != null
            ? Results.Json(details)
            : Results.NotFound();
    }

    private Guid currentUserId()
    {
        // let's say that we do something here that "finds" the
        // user id as a Guid from the ClaimsPrincipal
        var userIdClaim = User.FindFirst("user-id");
        if (userIdClaim != null && Guid.TryParse(userIdClaim.Value, out var id))
        {
            return id;
        }

        throw new UnauthorizedAccessException("No user");
    }
}

It’s important to note at this point (this might change in Marten 7) that the IDocumentSession should be disposed when you’re done using it to tell Marten to close down any open database connections. In the usage above, the scoped IoC container mechanics in ASP.Net Core are handling all the necessary object disposal for you.

Summary and What’s Next

Today we strictly just looked at how to integrate Marten services into a .NET application using the AddMarten() mechanism. Using a simple MVC Core Controller, we saw how Marten services are available and managed in the application’s IoC container and how to perform basic event sourcing actions in the context of little web service endpoints.

In later posts in this series we’ll actually replace this IncidentController with Wolverine endpoints and some “special sauce” with Marten to be much more efficient.

In the next post, I think I want to talk through the CQRS architectural style with Marten. Bear with me, but I’ll still be using explicit code with MVC Core controllers that folks are probably already familiar with to talk over the requirements and Marten capabilities in isolation. Don’t worry though, I will eventually introduce Wolverine into the mix to show how the Wolverine + Marten integration can make your code very clean and testable.

Building a Critter Stack Application: Marten Projections

In the previous post I showed how to use the Marten library as the storage mechanism for events and event streams within an event sourcing persistence strategy. If you’re following along, you’ve basically learned how to stuff little bits of JSON into a database as the authoritative source of truth for your system. You might be asking yourself “what the @#$%@# am I supposed to do with this stuff now?” In today’s post I’m going to show you how Marten can help you derive the current state of the system from the raw event data through its usage of projections.

For more information about the conceptual role of projections in an event sourcing system, see my colleague Oskar Dudycz‘s post Guide to Projections and Read Models in Event-Driven Architecture.

Back to our help desk service, last time we created event streams representing each incident with events like:

public record IncidentLogged(
    Guid CustomerId,
    Contact Contact,
    string Description,
    Guid LoggedBy
);
 
public class IncidentCategorised
{
    public IncidentCategory Category { get; set; }
    public Guid UserId { get; set; }
}
 
public record IncidentPrioritised(IncidentPriority Priority, Guid UserId);
 
public record AgentAssignedToIncident(Guid AgentId);
 
public record AgentRespondedToIncident(        
    Guid AgentId,
    string Content,
    bool VisibleToCustomer);
 
public record CustomerRespondedToIncident(
    Guid UserId,
    string Content
);
 
public record IncidentResolved(
    ResolutionType Resolution,
    Guid ResolvedBy,
    DateTimeOffset ResolvedAt
);

Those events are directly stored in our database as our single source of truth, but we will absolutely need to derive the current state of an incident to support:

  • User interface screens
  • Reports
  • Decision making within the help desk workflow (what event sourcing folks call the “write model”)

For now, let’s say that we’d really like to have this view of a single incident:

public class IncidentDetails
{
    public Guid Id { get; set; }
    public Guid CustomerId { get; set; }
    public IncidentStatus Status { get; set; }
    public IncidentNote[] Notes { get; set; } = Array.Empty<IncidentNote>();
    public IncidentCategory? Category { get; set; }
    public IncidentPriority? Priority { get; set; }
    public Guid? AgentId { get; set; }
    public int Version { get; set; }
}

Let’s teach Marten how to combine the raw events describing an incident into our new IncidentDetails view. The easiest possible way to do that is to drop some new methods onto our IncidentDetails class to “teach” Marten how to modify the projected view:

public class IncidentDetails
{
    public IncidentDetails()
    {
    }

    public IncidentDetails(IEvent<IncidentLogged> logged)
    {
        Id = logged.StreamId;
        CustomerId = logged.Data.CustomerId;
        Status = IncidentStatus.Pending;
    }

    public Guid Id { get; set; }
    public Guid CustomerId { get; set; }
    public IncidentStatus Status { get; set; }
    public IncidentNote[] Notes { get; set; } = Array.Empty<IncidentNote>();
    public IncidentCategory? Category { get; set; }
    public IncidentPriority? Priority { get; set; }
    public Guid? AgentId { get; set; }

    // Marten itself will set this to its tracked
    // revision number for the incident
    public int Version { get; set; }

    public void Apply(IncidentCategorised categorised) => Category = categorised.Category;
    public void Apply(IncidentPrioritised prioritised) => Priority = prioritised.Priority;
    public void Apply(AgentAssignedToIncident prioritised) => AgentId = prioritised.AgentId;
    public void Apply(IncidentResolved resolved) => Status = IncidentStatus.Resolved;
    public void Apply(ResolutionAcknowledgedByCustomer acknowledged) => Status = IncidentStatus.ResolutionAcknowledgedByCustomer;
    public void Apply(IncidentClosed closed) => Status = IncidentStatus.Closed;
}

In action, the simplest way to execute the projection is to do a “live aggregation” as shown below:

static async Task PrintIncident(IDocumentStore store, Guid incidentId)
{
    await using var session = store.LightweightSession();
    
    // Tell Marten to load all events -- in order -- for the designated
    // incident event stream, then project that data into an IncidentDetails
    // view
    var incident = await session.Events.AggregateStreamAsync<IncidentDetails>(incidentId);

    Console.WriteLine($"Incident {incident?.Id} is currently {incident?.Status}");
}

You can see a more complicated version of this projection in action by running the EventSourcingDemo project from the command line. Just see the repository README for instructions on setting up the database.

Marten is using a set of naming conventions to “know” how to pass event data to the IncidentDetails objects. As you can probably guess, Marten is calling the Apply() overloads to mutate the IncidentDetails object for each event based on the event type. Those conventions are documented here — and yes, there are plenty of other options for using more explicit code instead of the conventional approach if you don’t care for that.
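
To build some intuition for what that convention is doing, here’s a toy illustration that replays events through Apply() overloads matched by event type via reflection. To be clear, this is not Marten’s actual mechanism (Marten generates code at runtime rather than reflecting on every call), and the types below are simplified stand-ins:

```csharp
using System;

// Simplified stand-in event and view types for this sketch
public record Categorised(string Category);
public record Prioritised(string Priority);

public class ToyDetails
{
    public string? Category { get; private set; }
    public string? Priority { get; private set; }

    public void Apply(Categorised e) => Category = e.Category;
    public void Apply(Prioritised e) => Priority = e.Priority;
}

public static class ToyAggregator
{
    // Replay the events in stream order, dispatching each one to the
    // Apply() overload whose parameter type matches the event's type
    public static ToyDetails Replay(params object[] events)
    {
        var view = new ToyDetails();
        foreach (var e in events)
        {
            var apply = typeof(ToyDetails).GetMethod("Apply", new[] { e.GetType() });
            apply?.Invoke(view, new[] { e });
        }
        return view;
    }
}
```

Calling `ToyAggregator.Replay(new Categorised("Database"), new Prioritised("High"))` yields a view whose Category is "Database" and whose Priority is "High" — the same “replay the stream through Apply()” idea Marten executes far more efficiently.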

This time, with immutability!

In the example above, I purposely chose the simplest possible approach, and that led me to using a mutable structure for IncidentDetails that kept all the details of how to project the events in the IncidentDetails class itself. As an alternative, let’s make IncidentDetails immutable as a C# record instead, like so:

public record IncidentDetails(
    Guid Id,
    Guid CustomerId,
    IncidentStatus Status,
    IncidentNote[] Notes,
    IncidentCategory? Category = null,
    IncidentPriority? Priority = null,
    Guid? AgentId = null,
    int Version = 1
);

And as another alternative, let’s say you’d rather have the Marten projection logic external to the nice, clean IncidentDetails code above. That’s still possible by creating a separate class. The most common projection type is to project the events of a single stream, and for that you can subclass the Marten SingleStreamProjection base class to create your projection logic as shown below:

public class IncidentDetailsProjection: SingleStreamProjection<IncidentDetails>
{
    public static IncidentDetails Create(IEvent<IncidentLogged> logged) =>
        new(logged.StreamId, logged.Data.CustomerId, IncidentStatus.Pending, Array.Empty<IncidentNote>());

    public IncidentDetails Apply(IncidentCategorised categorised, IncidentDetails current) =>
        current with { Category = categorised.Category };

    public IncidentDetails Apply(IncidentPrioritised prioritised, IncidentDetails current) =>
        current with { Priority = prioritised.Priority };

    public IncidentDetails Apply(AgentAssignedToIncident prioritised, IncidentDetails current) =>
        current with { AgentId = prioritised.AgentId };

    public IncidentDetails Apply(IncidentResolved resolved, IncidentDetails current) =>
        current with { Status = IncidentStatus.Resolved };

    public IncidentDetails Apply(ResolutionAcknowledgedByCustomer acknowledged, IncidentDetails current) =>
        current with { Status = IncidentStatus.ResolutionAcknowledgedByCustomer };

    public IncidentDetails Apply(IncidentClosed closed, IncidentDetails current) =>
        current with { Status = IncidentStatus.Closed };
}

The exact same set of naming conventions still applies here, with Apply() methods creating a new revision of the IncidentDetails for each event, and the Create() method helping Marten start an IncidentDetails object from the first event in the stream.
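
Conceptually, what Marten is doing with this projection style is a left fold over the event stream: start from the Create() result and thread each event through the matching Apply() to produce a new revision. A stripped-down illustration with toy types (again, not the real Marten pipeline):

```csharp
using System.Linq;

// Fold a stream of toy events into the final state: each Apply()
// returns a new immutable revision via a record "with" expression
var events = new object[] { new ToyCategorised("Database"), new ToyResolved() };
var final = events.Aggregate(ToyIncident.Create(), (view, e) => view.Apply(e));
// final now has Status = "Resolved" and Category = "Database"

public record ToyCategorised(string Category);
public record ToyResolved;

public record ToyIncident(string Status, string? Category = null)
{
    public static ToyIncident Create() => new("Pending");

    public ToyIncident Apply(object e) => e switch
    {
        ToyCategorised c => this with { Category = c.Category },
        ToyResolved => this with { Status = "Resolved" },
        _ => this
    };
}
```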

This usage does require you to register the custom projection class upfront in the Marten configuration like this:

var connectionString = "Host=localhost;Port=5433;Database=postgres;Username=postgres;password=postgres";
await using var store = DocumentStore.For(opts =>
{
    opts.Connection(connectionString);
    
    // Telling Marten about the projection logic for the IncidentDetails
    // view of the events
    opts.Projections.Add<IncidentDetailsProjection>(ProjectionLifecycle.Live);
});

Don’t worry too much about that “Live” option, we’ll dive deeper into projection lifecycles as we progress in this series.

Summary and What’s Next

Projections are a Marten feature that enables you to create usable views out of the raw event data. We used the simplest projection recipes in this post to create an IncidentDetails view out of the raw incident events that we will use later on to build our web service.

In this sample, I was showing Marten’s ability to evaluate projected views on the fly by loading the events into memory and combining them into the final projection result on demand. Marten also has the ability to persist these projected data views ahead of time for faster querying (“Inline” or “Async” projections). If you’re familiar with the concept of materialized views in databases that support that, projections running inline or in a background process are a close analogue.
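
Registration is where you pick between those lifecycles. As a sketch against the configuration we used earlier (note that a projection is registered with exactly one lifecycle, and the Async option additionally requires Marten's background "async daemon" to be enabled in your application, which we haven't covered yet):

```csharp
using Marten;
using Marten.Events.Projections;

var store = DocumentStore.For(opts =>
{
    opts.Connection("Host=localhost;Port=5433;Database=postgres;Username=postgres;password=postgres");

    // Live: recomputed from the raw events on demand, nothing persisted
    opts.Projections.Add<IncidentDetailsProjection>(ProjectionLifecycle.Live);

    // The alternatives would look like:
    // opts.Projections.Add<IncidentDetailsProjection>(ProjectionLifecycle.Inline); // updated in the same transaction as the appended events
    // opts.Projections.Add<IncidentDetailsProjection>(ProjectionLifecycle.Async);  // updated by a background process
});
```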

In the next post, I think I just want to talk about how to integrate Marten into an ASP.Net Core application and utilize Marten in a simple MVC Core controller — but don’t worry, before we’re done, we’re going to replace the MVC Core code with much slimmer code using Wolverine, but one new concept or tool at a time!

Building a Critter Stack Application: Marten as Event Store

Event Sourcing

Event Sourcing is a style of persistence where the single source of truth for system state is a read-only, append-only sequence of all the events that resulted in a change to that state. Using our help desk incident tracking application we first started describing in the previous post on Event Storming, that results in a sequence like this:

Sequence | Incident Id | Event Type
1        | 1           | IncidentLogged
2        | 1           | IncidentCategorised
3        | 2           | IncidentLogged
4        | 3           | IncidentLogged
5        | 1           | IncidentResolved

An event log

As you could probably guess already from the table above, the events will be stored in one single log in the sequential order they were appended. You can also see that events will be categorized by their relationship to a single logical incident. This grouping is typically called a “stream” in event sourcing.

As a first quick foray into event sourcing, let’s look at using the Marten library to create an event store for our help desk application built on top of a PostgreSQL database.

In case you’re wondering, Marten is merely a fancy library that helps you access and treat the rock solid PostgreSQL database engine as both a document database and as an event store. Marten was purposely built on PostgreSQL specifically because of the unique JSON capabilities of PostgreSQL. It’s possible that the event store portion of Marten eventually gets ported to other databases (SQL Server, for example), but it’s highly unlikely that the document database feature set would ever follow.

Using Marten as an Event Store

This code is all taken from the CritterStackHelpDesk repository, and specifically the EventSourcingDemo console project. The repository’s README file has instructions on running that project.

First off, let’s build us some events that we can later store in our new event store:

public record IncidentLogged(
    Guid CustomerId,
    Contact Contact,
    string Description,
    Guid LoggedBy
);

public class IncidentCategorised
{
    public IncidentCategory Category { get; set; }
    public Guid UserId { get; set; }
}

public record IncidentPrioritised(IncidentPriority Priority, Guid UserId);

public record AgentAssignedToIncident(Guid AgentId);

public record AgentRespondedToIncident(        
    Guid AgentId,
    string Content,
    bool VisibleToCustomer);

public record CustomerRespondedToIncident(
    Guid UserId,
    string Content
);

public record IncidentResolved(
    ResolutionType Resolution,
    Guid ResolvedBy,
    DateTimeOffset ResolvedAt
);

You’ll notice there’s a (hopefully) consistent naming convention. The event types are named in the past tense and should refer clearly to a logical event in the system’s workflow. You might also notice that these events are all built with C# records. This isn’t a requirement, but it makes the code pretty terse and there’s no reason for these events to ever be mutable anyway.

Next, I’ve created a small console application and added a reference to the Marten library like so from the command line:

dotnet new console
dotnet add package Marten

Before we even think about using Marten itself, let’s get ourselves a new, blank PostgreSQL database spun up for our little application. Assuming that you have Docker Desktop or some functional alternative on your development machine, there’s a docker compose file in the root of the finished product that we can use to stand up a new database with:

docker compose up -d

Note, and this is an important point, there is absolutely nothing else you need to do to make this new database perfectly usable for the code we’re going to write next. No manual database setup, no SQL scripts for you to run, no other command line scripts. Just write code and go.
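
That zero-setup experience comes from Marten's schema auto-creation: in its default, development-friendly mode, Marten detects and builds any missing tables and functions the first time they're needed. When you want tighter control (say, in production), that behavior can be dialed down through the store options we're about to meet, along these lines:

```csharp
using Marten;
using Weasel.Core; // home of the AutoCreate enum in recent Marten versions

var store = DocumentStore.For(opts =>
{
    opts.Connection("Host=localhost;Port=5433;Database=postgres;Username=postgres;password=postgres");

    // CreateOrUpdate / All let Marten patch the schema on the fly;
    // None makes Marten assume the schema already exists
    opts.AutoCreateSchemaObjects = AutoCreate.CreateOrUpdate;
});
```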

Next, we’re going to configure Marten in code, then:

  1. Start a new “Incident” stream with a couple events
  2. Append additional events to our new stream

The code to do nothing but what I described is shown below:

// This matches the docker compose file configuration
var connectionString = "Host=localhost;Port=5433;Database=postgres;Username=postgres;password=postgres";

// This is spinning up Marten with its default settings
await using var store = DocumentStore.For(connectionString);

// Create a Marten unit of work
await using var session = store.LightweightSession();

var contact = new Contact(ContactChannel.Email, "Han", "Solo");
var userId = Guid.NewGuid();

// I'm telling the Marten session about the new stream, and then recording
// the newly assigned Guid for this stream
var customerId = Guid.NewGuid();
var incidentId = session.Events.StartStream(
    new IncidentLogged(customerId, contact, "Software is crashing", userId),
    new IncidentCategorised
    {
        Category = IncidentCategory.Database,
        UserId = userId
    }
    
).Id;

await session.SaveChangesAsync();

// And now let's append an additional event to the 
// new stream
session.Events.Append(incidentId, new IncidentPrioritised(IncidentPriority.High, userId));
await session.SaveChangesAsync();

Let’s talk about what I just did — and did not do — in the code above. The DocumentStore class in Marten establishes the storage configuration for a single, logical Marten-ized database. This is an expensive object to create, so there should only ever be one instance in your system.

The actual work is done with Marten’s IDocumentSession service that I created with the call to store.LightweightSession(). The IDocumentSession is Marten’s unit of work implementation and plays the same role as DbContext does inside of EF Core. When you use Marten, you queue up operations (start a new event stream, append events, etc.), then commit them in one single database transaction when you call that SaveChangesAsync() method.

For anybody old enough to have used NHibernate reading this, DocumentStore plays the same role as NHibernate’s ISessionFactory.
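
One practical payoff of that unit of work model: you can queue operations against multiple streams and commit them atomically with a single SaveChangesAsync() call. A sketch (the incident id, agent id, and user id variables here are assumed to already exist):

```csharp
await using var session = store.LightweightSession();

// Queue appends against two different incident streams...
session.Events.Append(firstIncidentId, new AgentAssignedToIncident(agentId));
session.Events.Append(secondIncidentId, new IncidentPrioritised(IncidentPriority.Low, userId));

// ...then commit both appends in one database transaction
await session.SaveChangesAsync();
```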

So now, let’s read back in the events we just persisted, and print out serialized JSON of the Marten data just to see what Marten is actually capturing:

var events = await session.Events.FetchStreamAsync(incidentId);
foreach (var e in events)
{
    // I elided a little bit of code that sets up prettier JSON
    // formatting
    Console.WriteLine(JsonConvert.SerializeObject(e, settings));
}

The raw JSON output is this:

{
  "Data": {
    "CustomerId": "314d8fa1-3cca-4984-89fc-04b24122cf84",
    "Contact": {
      "ContactChannel": "Email",
      "FirstName": "Han",
      "LastName": "Solo",
      "EmailAddress": null,
      "PhoneNumber": null
    },
    "Description": "Software is crashing",
    "LoggedBy": "8a842212-3511-4858-a3f3-dd572a4f608f"
  },
  "EventType": "Helpdesk.Api.IncidentLogged, Helpdesk.Api, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null",
  "EventTypeName": "incident_logged",
  "DotNetTypeName": "Helpdesk.Api.IncidentLogged, Helpdesk.Api",
  "IsArchived": false,
  "AggregateTypeName": null,
  "StreamId": "018c1c9b-5bd0-4273-947d-83d28c8e3210",
  "StreamKey": null,
  "Id": "018c1c9b-5f03-47f5-8c31-1d1ba70fd56a",
  "Version": 1,
  "Sequence": 1,
  "Timestamp": "2023-11-29T19:43:13.864064+00:00",
  "TenantId": "*DEFAULT*",
  "CausationId": null,
  "CorrelationId": null,
  "Headers": null
}
{
  "Data": {
    "Category": "Database",
    "UserId": "8a842212-3511-4858-a3f3-dd572a4f608f"
  },
  "EventType": "Helpdesk.Api.IncidentCategorised, Helpdesk.Api, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null",
  "EventTypeName": "incident_categorised",
  "DotNetTypeName": "Helpdesk.Api.IncidentCategorised, Helpdesk.Api",
  "IsArchived": false,
  "AggregateTypeName": null,
  "StreamId": "018c1c9b-5bd0-4273-947d-83d28c8e3210",
  "StreamKey": null,
  "Id": "018c1c9b-5f03-4a19-82ef-9c12a84a4384",
  "Version": 2,
  "Sequence": 2,
  "Timestamp": "2023-11-29T19:43:13.864064+00:00",
  "TenantId": "*DEFAULT*",
  "CausationId": null,
  "CorrelationId": null,
  "Headers": null
}
{
  "Data": {
    "Priority": "High",
    "UserId": "8a842212-3511-4858-a3f3-dd572a4f608f"
  },
  "EventType": "Helpdesk.Api.IncidentPrioritised, Helpdesk.Api, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null",
  "EventTypeName": "incident_prioritised",
  "DotNetTypeName": "Helpdesk.Api.IncidentPrioritised, Helpdesk.Api",
  "IsArchived": false,
  "AggregateTypeName": null,
  "StreamId": "018c1c9b-5bd0-4273-947d-83d28c8e3210",
  "StreamKey": null,
  "Id": "018c1c9b-5fef-4644-b213-56051088dc15",
  "Version": 3,
  "Sequence": 3,
  "Timestamp": "2023-11-29T19:43:13.909+00:00",
  "TenantId": "*DEFAULT*",
  "CausationId": null,
  "CorrelationId": null,
  "Headers": null
}

And that’s a lot of noise, so let me try to summarize the blob above:

  • Marten is storing each event as serialized JSON in one table, and that’s what you see as the Data leaf in each JSON document above
  • Marten is assigning a unique sequence number for each event
  • StreamId is the incident stream identity that groups the events
  • Each event is assigned a Version that reflects its position within its stream
  • Marten tracks the kind of metadata that you’d probably expect, like timestamps, optional header information, and optional causation/correlation information (we’ll use this much later in the series when I get around to discussing Open Telemetry)
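To make that concrete, here's a minimal sketch of appending events and reading that metadata back. This assumes a configured Marten `IDocumentStore` in scope as `store`, plus the event types from earlier in this series; `customerId`, `contact`, and `userId` are placeholder values:

```csharp
// Assumes an existing IDocumentStore (store) and the IncidentLogged /
// IncidentCategorised event types shown earlier in this series
await using var session = store.LightweightSession();

var incidentId = Guid.NewGuid();

// Starting a new stream appends events at Version 1, 2, ...
session.Events.StartStream(incidentId,
    new IncidentLogged(customerId, contact, "Database is down", userId));
session.Events.Append(incidentId,
    new IncidentCategorised { Category = "Database", UserId = userId });
await session.SaveChangesAsync();

// Reading the stream back exposes the metadata shown in the JSON above
var events = await session.Events.FetchStreamAsync(incidentId);
foreach (var e in events)
{
    Console.WriteLine($"{e.EventTypeName} v{e.Version} seq {e.Sequence} at {e.Timestamp}");
}
```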

Summary and What’s Next

In this post I introduced the core concepts of event sourcing, events, and event streams. I also introduced the bare bones usage of the Marten library as a way to create new event streams and append events to existing streams. Lastly, we took a look at the important metadata that Marten tracks for you in addition to your raw event data. Along the way, we also previewed how the Critter Stack can reduce development friction by happily building out the necessary database schema objects for us as needed.

What you are probably thinking at this point is something to the effect of “So what?” After all, jamming little bits of JSON data into the database doesn’t necessarily help us build a user interface page showing a help desk technician what the current state of each open incident is. Heck, we don’t yet have any way to understand the actual current state of any incident!

Fear not though, because in the next post I’ll introduce Marten’s “Projections” capability, which will help us create the “read side” view of the current system state out of the raw event data, in whatever format happens to be most convenient for that data’s client or user.

Building a Critter Stack Application: Event Storming


I did a series of presentations a couple of weeks ago showing off the usage of Wolverine and Marten to build a small service using CQRS and Event Sourcing, and you can see the video above from .NET Conf 2023. I thought that talk was way too dense though, so I’m going to rebuild it from scratch before CodeMash. I already have a relatively complete sample application, and we get a lot of feedback that there needs to be a single, more realistic sample application showing what Marten and Wolverine can actually do. Based on other feedback, I also know there’s some value in having a series of short, focused posts that build up a sample application one little concept at a time.

To that end, this post will be the start of a multi-part series showing how to use Marten and Wolverine for a CQRS architecture in an ASP.Net Core web service that also uses event sourcing as a persistence strategy.

The series so far:

I blatantly stole (with permission) this sample application idea from Oskar Dudycz. His version of the app is also on GitHub.

If you’re reading this post, it’s very likely you’re a software professional and you’re already familiar with online incident tracking applications — but hey, let’s build yet another one for a help desk company just because it’s a problem domain you’re likely (all too) familiar with!

Let’s say that you’re magically able to get your help desk business experts and stakeholders in a room (or virtual meeting) with the development team all at one time. Crazy, I know, but bear with me. Since you’re all together, this is a fantastic opportunity to get the new system started with a very collaborative approach called Event Storming that works very well for both event sourcing and CQRS approaches.

The format is pretty simple. Go to any office supply company and get the typical pack of sticky notes like these:

Start by asking the business experts to describe events within the desired workflow that would lead to a change in state or a milestone in the business process. Try to record their terminology on orange sticky notes with a short name that generally implies a past event. In the case of an incident service, those events might be:

  • IncidentLogged
  • IncidentCategorised
  • IncidentResolved

This isn’t waterfall, so you can happily jump back and forth between steps here, but the next general step is to try to identify the actions or “commands” in the system that would cause each of our previously identified events. Jot these commands down on blue sticky notes with a short name in an imperative form like “LogIncident” or “CategoriseIncident”. Create some record of cause and effect by putting the blue sticky command notes just to the left of the orange sticky notes for the related events.

It’s also helpful to organize the sticky notes roughly left to right to give some context to which commands or events happen in what order (which I did not do in my crude diagram below).

Even though my graphic below doesn’t do this, it’s perfectly possible for the relationship between commands and events to be one command to many events.

In the course of executing these newly discovered commands, we can start to call out possible “views” of the raw event data that we might need as necessary context. We’ll record these views with a short descriptive name on green sticky notes.

After some time, our wall should be covered in sticky notes in a manner something like this:

Right off the bat, we’re learning what the DDD folks call the ubiquitous language for our business domain, which can be shared between us technical folks and the business domain experts. Moreover, as we’ll see in later posts, these names from what is ostensibly a requirements gathering session can translate directly to actual code artifact names.

My experience with Event Storming has been very positive, but I’d guess that it depends on how cooperative and collaborative your business partners are with this format. I found it to be a great format to talk through a system’s requirements in a way that provides actual traceability to code implementation details. In other words, when you talk with the business folks and speak in terms of an IncidentLogged, there will actually be a type in your codebase like this:

public record IncidentLogged(
    Guid CustomerId,
    Contact Contact,
    string Description,
    Guid LoggedBy
);

or LogIncident:

public record LogIncident(
    Guid CustomerId,
    Contact Contact,
    string Description
);
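That traceability continues into behavior: a handler for `LogIncident` can do little more than emit the matching event. This is a hypothetical sketch of that cause-and-effect pairing, not the actual handler from the sample application, and the `SystemUserId` placeholder stands in for however the current user would really be resolved:

```csharp
public static class LogIncidentHandler
{
    // Placeholder for however the current user is resolved in the real application
    public static readonly Guid SystemUserId = Guid.NewGuid();

    // The LogIncident command from the blue sticky note results in
    // the IncidentLogged event from the orange sticky note
    public static IncidentLogged Handle(LogIncident command)
        => new IncidentLogged(
            command.CustomerId,
            command.Contact,
            command.Description,
            SystemUserId);
}
```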

Help Desk API

Just for some context, I’m going to step through the creation of a web service with a handful of web service endpoints to create, read, or alter a help desk incident. In much later posts, I’ll talk about publishing internal events to take action asynchronously within the web service, and also to publish other events externally to completely different systems through Rabbit MQ queues.

The “final” code is the CritterStackHelpDesk application under my GitHub profile.

I’m not going to go near a user interface for now, but someone is working up improvements to this service to put a user interface on top, with this service acting as a Backend for Frontend (BFF).

Summary

Event Storming can be a very effective technique for collaboratively discovering system requirements and understanding the system’s workflow with your domain experts. As developers and testers, it can also help create traceability between the requirements and the actual code artifacts without labor-intensive traceability matrix documentation.

Next time…

In the next post in this new series, I’ll introduce the event sourcing functionality with just Marten completely outside of any application just to get comfortable with Marten mechanics before we go on.

Tell Us What You Want in Marten and Wolverine!

I can’t prove this conclusively, but the cure for getting the “tell me what you want, what you really, really want” out of your head is probably to go fill out the linked survey on Marten and Wolverine!

As you may know, JasperFx Software is now up and able to offer formal support contracts to help users be successful with the open source Marten and Wolverine tools (the “Critter Stack”). As the next step in our nascent plan to create a sustainable business model around the Critter Stack tools, we’d really like to elicit some feedback from our users or potential users about which features your team would be most interested in next. And to be clear, we’re specifically thinking about complex features that would be part of a paid add-on model to the Critter Stack for advanced usages.

We’d love to get any feedback for us you might have in this Google Form.

Some existing ideas for paid features include:

  • A module for GDPR compliance
  • A dead letter queue browser application for Wolverine that would also help you selectively replay messages
  • The ability to dynamically add new tenant databases for Marten + Wolverine at runtime with no downtime
  • Improved asynchronous projection support in Marten, including better throughput overall and the ability to load balance the projections across running nodes
  • Zero downtime projection rebuilds with asynchronous Marten event store projections
  • The capability to do blue/green deployments with Marten event store projections
  • A virtual actor capability for Wolverine
  • A management and monitoring user interface for Wolverine + Marten that would give you insights about running nodes, active event store projections, messaging endpoint health, node assignments
  • DevOps recipes for the Critter Stack?

Publishing Events from Marten through Wolverine

Aren’t martens really cute?

By the way, JasperFx Software is up and running for formal support plans for both Marten and Wolverine!

Wolverine 1.11.0 was released this week (here’s the release notes) with a small improvement to its ability to subscribe to Marten events captured within Wolverine message handlers or HTTP endpoints. Since Wolverine 1.0, users have been able to opt into having Marten forward events captured within Wolverine handlers to any known Wolverine subscribers for that event with the EventForwardingToWolverine() option.

The latest Wolverine release adds the ability to automatically publish an event as a different message using the event data and its metadata as shown in the sample code below:

builder.Services.AddMarten(opts =>
{
    var connectionString = builder.Configuration.GetConnectionString("marten");
    opts.Connection(connectionString);
})
    // Adds Wolverine transactional middleware for Marten
    // and the Wolverine transactional outbox support as well
    .IntegrateWithWolverine()
    
    .EventForwardingToWolverine(opts =>
    {
        // Setting up a little transformation of an event with its event metadata to an internal command message
        opts.SubscribeToEvent<IncidentCategorised>().TransformedTo(e => new TryAssignPriority
        {
            IncidentId = e.StreamId,
            UserId = e.Data.UserId
        });
    });

This isn’t a general purpose outbox; rather, captured events are published according to normal Wolverine publishing rules immediately at the time the Marten transaction is committed.

So in this sample handler:

public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();
    
    // This Wolverine handler appends an IncidentCategorised event to an event stream
    // for the related IncidentDetails aggregate referred to by the CategoriseIncident.IncidentId
    // value from the command
    [AggregateHandler]
    public static IEnumerable<object> Handle(CategoriseIncident command, IncidentDetails existing)
    {
        if (existing.Category != command.Category)
        {
            // Wolverine will transform this event to a TryAssignPriority message
            // on the successful commit of the transaction wrapping this handler call
            yield return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }
    }
}

To try to close the loop, when Wolverine handles the CategoriseIncident message, it will:

  1. Potentially append an IncidentCategorised event to the referenced event stream
  2. Try to transform that event to a new TryAssignPriority message
  3. Commit the changes queued up to the underlying Marten IDocumentSession unit of work
  4. If the transaction is successful, publish the TryAssignPriority message — which in this sample case would be routed to a local queue within the Wolverine application and handled in a different thread later
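To round that out, the downstream handler for TryAssignPriority could look something like this hypothetical sketch. The message shape is taken from the transformation code above, but the prioritisation rule and the `Category`/`Priority` properties on `IncidentDetails` are invented for illustration:

```csharp
public static class TryAssignPriorityHandler
{
    [AggregateHandler]
    public static IEnumerable<object> Handle(TryAssignPriority command, IncidentDetails existing)
    {
        // Hypothetical rule: database incidents default to high priority
        // if no priority has been assigned yet
        if (existing.Category == "Database" && existing.Priority == null)
        {
            yield return new IncidentPrioritised
            {
                Priority = "High",
                UserId = command.UserId
            };
        }
    }
}
```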

That’s a lot of text and gibberish, but all I’m trying to say is that you can make Wolverine reliably react to events captured in the Marten event store.

Critter Stack at .NET Conf 2023

JasperFx Software will be shortly announcing the availability of official support plans for Marten, Wolverine, and other JasperFx open source tools. We’re working hard to build a sustainable ecosystem around these tools so that companies can feel confident in making a technical bet on these high productivity tools for .NET server side development.

I’ll be presenting a short talk at .NET Conf 2023 entitled “CQRS with Event Sourcing using the Critter Stack.” It’s going to be a quick dive into how to use Marten and Wolverine to build a very small system utilizing a CQRS Architecture with Event Sourcing as the persistence strategy.

Hopefully, I’ll be showing off:

  • How Wolverine’s runtime architecture is significantly different from that of other .NET tools and why its approach leads to much lower code ceremony and potentially higher performance
  • Marten and PostgreSQL providing a great local developer story both in development and in integration testing
  • How the Wolverine + Marten integration makes your domain logic easily unit testable without resorting to complicated Clean/Onion/Hexagonal Architectures
  • Wolverine’s built in integration testing support that you’ll wish you had today in other .NET messaging tools
  • The built in tooling for unraveling Wolverine or Marten’s “conventional magic”

Here’s the talk abstract:

CQRS with Event Sourcing using the “Critter Stack”

Do you have a system that you think would be a good fit for a CQRS architecture that also uses Event Sourcing for at least part of its persistence strategy? Are you intimidated by the potential complexity of that kind of approach? Fear not, using a combination of the PostgreSQL-backed Marten library for event sourcing and its newer friend Wolverine for command handling and asynchronous messaging, I’ll show you how you can quickly get started with both CQRS and Event Sourcing. Once we get past the quick start, I’ll show you how the Critter Stack’s unique approach to the “Decider” pattern will help you create robust command handlers with very little code ceremony while still enjoying easy testability. Moving beyond basic command handling, I’ll show you how to reliably subscribe to and publish the events or other messages created by your command handlers through Wolverine’s durable outbox and direct subscriptions to Marten’s event storage.