Building a Critter Stack Application: Marten as Event Store

Hey, did you know that JasperFx Software is ready to offer formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change.

Let’s build a small web service application using the whole “Critter Stack” and their friends, one small step at a time. For right now, the “finished” code is at CritterStackHelpDesk on GitHub.

The posts in this series are:

  1. Event Storming
  2. Marten as Event Store (this post)
  3. Marten Projections
  4. Integrating Marten into Our Application
  5. Wolverine as Mediator
  6. Web Service Query Endpoints with Marten
  7. Dealing with Concurrency
  8. Wolverine’s Aggregate Handler Workflow FTW!
  9. Command Line Diagnostics with Oakton
  10. Integration Testing Harness
  11. Marten as Document Database
  12. Asynchronous Processing with Wolverine
  13. Durable Outbox Messaging and Why You Care!
  14. Wolverine HTTP Endpoints
  15. Easy Unit Testing with Pure Functions
  16. Vertical Slice Architecture
  17. Messaging with Rabbit MQ
  18. The “Stateful Resource” Model
  19. Resiliency

Event Sourcing

Event Sourcing is a style of persistence where the single source of truth for system state is a read-only, append-only sequence of all the events that resulted in a change to that state. Using the help desk incident tracking application we first started describing in the previous post on Event Storming, that results in a sequence like this:

Sequence | Incident Id | Event Type
---------|-------------|---------------------
1        | 1           | IncidentLogged
2        | 1           | IncidentCategorised
3        | 2           | IncidentLogged
4        | 3           | IncidentLogged
5        | 1           | IncidentResolved

An event log

As you could probably guess from the table above, the events will be stored in one single log in the sequential order they were appended. You can also see that each event is grouped by its relationship to a single logical incident. This grouping is typically called a “stream” in event sourcing.

As a first quick foray into event sourcing, let’s look at using the Marten library to create an event store for our help desk application built on top of a PostgreSQL database.

In case you’re wondering, Marten is merely a fancy library that helps you access and treat the rock solid PostgreSQL database engine as both a document database and as an event store. Marten was purposely built on PostgreSQL specifically because of its unique JSON capabilities. It’s possible that the event store portion of Marten eventually gets ported to other databases (SQL Server, most likely), but it’s highly unlikely that the document database feature set would ever follow.

Using Marten as an Event Store

This code is all taken from the CritterStackHelpDesk repository, and specifically the EventSourcingDemo console project. The repository’s README file has instructions on running that project.

First off, let’s build us some events that we can later store in our new event store:

public record IncidentLogged(
    Guid CustomerId,
    Contact Contact,
    string Description,
    Guid LoggedBy
);

public class IncidentCategorised
{
    public IncidentCategory Category { get; set; }
    public Guid UserId { get; set; }
}

public record IncidentPrioritised(IncidentPriority Priority, Guid UserId);

public record AgentAssignedToIncident(Guid AgentId);

public record AgentRespondedToIncident(        
    Guid AgentId,
    string Content,
    bool VisibleToCustomer);

public record CustomerRespondedToIncident(
    Guid UserId,
    string Content
);

public record IncidentResolved(
    ResolutionType Resolution,
    Guid ResolvedBy,
    DateTimeOffset ResolvedAt
);
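
These events lean on a handful of supporting types that I haven’t shown yet. For reference, here’s a rough sketch of what they could look like, inferred from the usage in this post and the JSON output further down (the exact members in the real repository may well differ):

public enum ContactChannel
{
    Email,
    Phone
}

public enum IncidentCategory
{
    Software,
    Hardware,
    Database
}

public enum IncidentPriority
{
    Low,
    Medium,
    High
}

public enum ResolutionType
{
    Temporary,
    Permanent
}

// The Contact constructor is called with three arguments in the sample
// code below, so the email and phone number members are optional
public record Contact(
    ContactChannel ContactChannel,
    string FirstName,
    string LastName,
    string? EmailAddress = null,
    string? PhoneNumber = null
);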

You’ll notice there’s a (hopefully) consistent naming convention. The event types are named in the past tense and should refer clearly to a logical event in the system’s workflow. You might also notice that these events are all built with C# records. This isn’t a requirement, but it makes the code pretty terse and there’s no reason for these events to ever be mutable anyway.

Next, I’ve created a small console application and added a reference to the Marten library like so from the command line:

dotnet new console
dotnet add package Marten

Before we even think about using Marten itself, let’s get ourselves a new, blank PostgreSQL database spun up for our little application. Assuming that you have Docker Desktop or some functional alternative on your development machine, there’s a docker compose file in the root of the finished product that we can use to stand up a new database with:

docker compose up -d

Note, and this is an important point, there is absolutely nothing else you need to do to make this new database perfectly usable for the code we’re going to write next. No manual database setup, no SQL scripts for you to run, no other command line scripts. Just write code and go.

Next, we’re going to configure Marten in code, then:

  1. Start a new “Incident” stream with a couple events
  2. Append additional events to our new stream

The code to do nothing but what I described is shown below:

// This matches the docker compose file configuration
var connectionString = "Host=localhost;Port=5433;Database=postgres;Username=postgres;Password=postgres";

// This is spinning up Marten with its default settings
await using var store = DocumentStore.For(connectionString);

// Create a Marten unit of work
await using var session = store.LightweightSession();

var contact = new Contact(ContactChannel.Email, "Han", "Solo");
var userId = Guid.NewGuid();

// I'm telling the Marten session about the new stream, and then recording
// the newly assigned Guid for this stream
var customerId = Guid.NewGuid();
var incidentId = session.Events.StartStream(
    new IncidentLogged(customerId, contact, "Software is crashing", userId),
    new IncidentCategorised
    {
        Category = IncidentCategory.Database,
        UserId = userId
    }
).Id;

await session.SaveChangesAsync();

// And now let's append an additional event to the 
// new stream
session.Events.Append(incidentId, new IncidentPrioritised(IncidentPriority.High, userId));
await session.SaveChangesAsync();

Let’s talk about what I just did — and did not do — in the code above. The DocumentStore class in Marten establishes the storage configuration for a single, logical Marten-ized database. This is an expensive object to create, so there should only ever be one instance in your system.

The actual work is done with Marten’s IDocumentSession service that I created with the call to store.LightweightSession(). The IDocumentSession is Marten’s unit of work implementation and plays the same role as DbContext does inside of EF Core. When you use Marten, you queue up operations (start a new event stream, append events, etc.), then commit them in one single database transaction when you call that SaveChangesAsync() method.

For anybody old enough to have used NHibernate reading this, DocumentStore plays the same role as NHibernate’s ISessionFactory.
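
Since there should only ever be one DocumentStore, in a real application you’d typically hand that lifetime management over to your DI container through Marten’s AddMarten() integration instead of building the store inline like I did above. A minimal sketch of what that registration could look like:

var builder = WebApplication.CreateBuilder(args);

// AddMarten() registers a single DocumentStore as a singleton,
// plus IDocumentSession and IQuerySession as scoped services
builder.Services.AddMarten(opts =>
{
    // Same connection string as the console sample above
    opts.Connection("Host=localhost;Port=5433;Database=postgres;Username=postgres;Password=postgres");
});

We’ll do exactly that when we integrate Marten into the real application later in this series.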

So now, let’s read back in the events we just persisted, and print out serialized JSON of the Marten data just to see what Marten is actually capturing:

var events = await session.Events.FetchStreamAsync(incidentId);
foreach (var e in events)
{
    // I elided a little bit of code that sets up prettier JSON
    // formatting
    Console.WriteLine(JsonConvert.SerializeObject(e, settings));
}
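
For reference, the elided settings are along these lines. This is a minimal sketch that matches the indented output and string enum values shown below; the demo project’s exact options may differ:

// Using Newtonsoft.Json and Newtonsoft.Json.Converters
var settings = new JsonSerializerSettings
{
    // Indent the output for readability
    Formatting = Formatting.Indented,

    // Write enum values as their names instead of numbers
    Converters = { new StringEnumConverter() }
};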

The raw JSON output is this:

{
  "Data": {
    "CustomerId": "314d8fa1-3cca-4984-89fc-04b24122cf84",
    "Contact": {
      "ContactChannel": "Email",
      "FirstName": "Han",
      "LastName": "Solo",
      "EmailAddress": null,
      "PhoneNumber": null
    },
    "Description": "Software is crashing",
    "LoggedBy": "8a842212-3511-4858-a3f3-dd572a4f608f"
  },
  "EventType": "Helpdesk.Api.IncidentLogged, Helpdesk.Api, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null",
  "EventTypeName": "incident_logged",
  "DotNetTypeName": "Helpdesk.Api.IncidentLogged, Helpdesk.Api",
  "IsArchived": false,
  "AggregateTypeName": null,
  "StreamId": "018c1c9b-5bd0-4273-947d-83d28c8e3210",
  "StreamKey": null,
  "Id": "018c1c9b-5f03-47f5-8c31-1d1ba70fd56a",
  "Version": 1,
  "Sequence": 1,
  "Timestamp": "2023-11-29T19:43:13.864064+00:00",
  "TenantId": "*DEFAULT*",
  "CausationId": null,
  "CorrelationId": null,
  "Headers": null
}
{
  "Data": {
    "Category": "Database",
    "UserId": "8a842212-3511-4858-a3f3-dd572a4f608f"
  },
  "EventType": "Helpdesk.Api.IncidentCategorised, Helpdesk.Api, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null",
  "EventTypeName": "incident_categorised",
  "DotNetTypeName": "Helpdesk.Api.IncidentCategorised, Helpdesk.Api",
  "IsArchived": false,
  "AggregateTypeName": null,
  "StreamId": "018c1c9b-5bd0-4273-947d-83d28c8e3210",
  "StreamKey": null,
  "Id": "018c1c9b-5f03-4a19-82ef-9c12a84a4384",
  "Version": 2,
  "Sequence": 2,
  "Timestamp": "2023-11-29T19:43:13.864064+00:00",
  "TenantId": "*DEFAULT*",
  "CausationId": null,
  "CorrelationId": null,
  "Headers": null
}
{
  "Data": {
    "Priority": "High",
    "UserId": "8a842212-3511-4858-a3f3-dd572a4f608f"
  },
  "EventType": "Helpdesk.Api.IncidentPrioritised, Helpdesk.Api, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null",
  "EventTypeName": "incident_prioritised",
  "DotNetTypeName": "Helpdesk.Api.IncidentPrioritised, Helpdesk.Api",
  "IsArchived": false,
  "AggregateTypeName": null,
  "StreamId": "018c1c9b-5bd0-4273-947d-83d28c8e3210",
  "StreamKey": null,
  "Id": "018c1c9b-5fef-4644-b213-56051088dc15",
  "Version": 3,
  "Sequence": 3,
  "Timestamp": "2023-11-29T19:43:13.909+00:00",
  "TenantId": "*DEFAULT*",
  "CausationId": null,
  "CorrelationId": null,
  "Headers": null
}

And that’s a lot of noise, so let me try to summarize the blob above:

  • Marten is storing each event as serialized JSON in one table, and that’s what you see as the Data leaf in each JSON document above
  • Marten is assigning a unique sequence number for each event
  • StreamId is the incident stream identity that groups the events
  • Each event is assigned a Version that reflects its position within its stream
  • Marten tracks the kind of metadata that you’d probably expect, like timestamps, optional header information, and optional causation/correlation information (we’ll use this much later in the series when I get around to discussing Open Telemetry)
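
All of that metadata is available programmatically on the event wrappers returned by FetchStreamAsync(), so there’s no need to parse any JSON yourself. A quick sketch:

foreach (var e in events)
{
    // Positional metadata assigned by Marten itself
    Console.WriteLine($"Sequence {e.Sequence}, version {e.Version} at {e.Timestamp}");

    // The strongly typed event body is on the Data property
    if (e.Data is IncidentLogged logged)
    {
        Console.WriteLine($"  Incident logged by {logged.LoggedBy}");
    }
}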

Summary and What’s Next

In this post I introduced the core concepts of event sourcing, events, and event streams. I also introduced the bare bones usage of the Marten library as a way to create new event streams and append events to existing streams. Lastly, we took a look at the important metadata that Marten tracks for you in addition to your raw event data. Along the way, we also previewed how the Critter Stack can reduce development time friction by very happily building out the necessary database schema objects for us as needed.

What you are probably thinking at this point is something to the effect of “So what?” After all, jamming little bits of JSON data into the database doesn’t necessarily help us build a user interface page showing a help desk technician what the current state of each open incident is. Heck, we don’t yet have any way to understand the actual current state of any incident!

Fear not though, because in the next post I’ll introduce Marten’s “Projections” capability, which will help us create the “read side” view of the current system state out of the raw event data in whatever format happens to be most convenient for that data’s client or user.

Building a Critter Stack Application: Event Storming

Hey, did you know that JasperFx Software is ready to offer formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change.

I did a series of presentations a couple weeks ago showing off the usage of Wolverine and Marten to build a small service using CQRS and Event Sourcing, and you can see the video above from .NET Conf 2023. Great, but I thought the talk was way too dense, and I’m going to rebuild it from scratch before CodeMash. So I’ve got a relatively complete sample application, and we’ve gotten a lot of feedback that there needs to be a single sample application that’s more realistic to show off what Marten and Wolverine can actually do. Based on other feedback, I know there’s some value in having a series of short, focused posts that build up a sample application one little concept at a time.

To that end, this post will be the start of a multi-part series showing how to use Marten and Wolverine for a CQRS architecture in an ASP.Net Core web service that also uses event sourcing as a persistence strategy.

I blatantly stole (with permission) this sample application idea from Oskar Dudycz. His version of the app is also on GitHub.

If you’re reading this post, it’s very likely you’re a software professional and you’re already familiar with online incident tracking applications — but hey, let’s build yet another one for a help desk company just because it’s a problem domain you’re likely (all too) familiar with!

Let’s say that you’re magically able to get your help desk business experts and stakeholders in a room (or virtual meeting) with the development team all at one time. Crazy, I know, but bear with me. Since you’re all together, this is a fantastic opportunity to get the new system started with a very collaborative approach called Event Storming that works very well for both event sourcing and CQRS approaches.

The format is pretty simple. Go to any office supply company and get a typical pack of multi-colored sticky notes.

Start by asking the business experts to describe events within the desired workflow that would lead to a change in state or a milestone in the business process. Try to record their terminology on orange sticky notes with a short name that generally implies a past event. In the case of an incident service, those events might be:

  • IncidentLogged
  • IncidentCategorised
  • IncidentResolved

This isn’t waterfall, so you can happily jump back and forth between steps here, but the next general step is to try to identify the actions or “commands” in the system that would cause each of our previously identified events. Jot these commands down on blue sticky notes with a short name in an imperative form like “LogIncident” or “CategoriseIncident”. Create some record of cause and effect by putting the blue sticky command notes just to the left of the orange sticky notes for the related events.

It’s also helpful to organize the sticky notes roughly left to right to give some context to which commands or events happen in what order (which I did not do in my crude diagram below).

Even though my graphic below doesn’t do this, it’s perfectly possible for the relationship between commands and events to be one command to many events.

In the course of executing these newly discovered commands, we can start to call out possible “views” of the raw event data that we might need as necessary context. We’ll record these views with a short descriptive name on green sticky notes.

After some time, our wall should be covered in sticky notes in a manner something like this:

Right off the bat, we’re learning what the DDD folks call the ubiquitous language for our business domain that can be shared between us technical folks and the business domain experts. Moreover, as we’ll see in later posts, these names from what is ostensibly a requirements gathering session can translate directly to actual code artifact names.

My experience with Event Storming has been very positive, but I’d guess that it depends on how cooperative and collaborative your business partners are with this format. I found it to be a great format to talk through a system’s requirements in a way that provides actual traceability to code implementation details. In other words, when you talk with the business folks and speak in terms of an IncidentLogged, there will actually be a type in your codebase like this:

public record IncidentLogged(
    Guid CustomerId,
    Contact Contact,
    string Description,
    Guid LoggedBy
);

or LogIncident:

public record LogIncident(
    Guid CustomerId,
    Contact Contact,
    string Description
);

Help Desk API

Just for some context, I’m going to step through the creation of a web service with a handful of web service endpoints to create, read, or alter a help desk incident. In much later posts, I’ll talk about publishing internal events to take action asynchronously within the web service, and also about publishing other events externally to completely different systems through Rabbit MQ queues.

The “final” code is the CritterStackHelpDesk application under my GitHub profile.

I’m not going to go near a user interface for now, but someone is working up improvements to put a user interface on top, with this service acting as a Backend for Frontend (BFF).

Summary

Event Storming can be a very effective technique for collaboratively discovering system requirements and understanding the system’s workflow with your domain experts. As developers and testers, it can also help create traceability between the requirements and the actual code artifacts without manually intensive traceability matrix documentation.

Next time…

In the next post in this new series, I’ll introduce the event sourcing functionality with just Marten completely outside of any application just to get comfortable with Marten mechanics before we go on.

Tell Us What You Want in Marten and Wolverine!

I can’t prove this conclusively, but the cure for getting the “tell me what you want, what you really, really want” out of your head is probably to go fill out the linked survey on Marten and Wolverine!

As you may know, JasperFx Software is now up and able to offer formal support contracts to help users be successful with the open source Marten and Wolverine tools (the “Critter Stack”). As the next step in our nascent plan to create a sustainable business model around the Critter Stack tools, we’d really like to elicit some feedback from our users or potential users about what features your team would be most interested in next. And to be clear, we’re specifically thinking about complex features that would be part of a paid add on model to the Critter Stack for advanced usages.

We’d love to get any feedback for us you might have in this Google Form.

Some existing ideas for paid features include:

  • A module for GDPR compliance
  • A dead letter queue browser application for Wolverine that would also help you selectively replay messages
  • The ability to dynamically add new tenant databases for Marten + Wolverine at runtime with no downtime
  • Improved asynchronous projection support in Marten, including better overall throughput and the ability to load balance the projections across running nodes
  • Zero downtime projection rebuilds with asynchronous Marten event store projections
  • The capability to do blue/green deployments with Marten event store projections
  • A virtual actor capability for Wolverine
  • A management and monitoring user interface for Wolverine + Marten that would give you insights about running nodes, active event store projections, messaging endpoint health, and node assignments
  • DevOps recipes for the Critter Stack?

20+ Years of .NET and Me

I realized this weekend that I’ve worked with .NET now for over 20 years. During the Thanksgiving week of 2002 when it was unusually quiet in the office because most folks were out, I spent three enjoyable days learning about the new .NET runtime and C# to build out a proof of concept for replacing a very problematic VB6 and Oracle PL/SQL system at my then company.

That PoC didn’t go anywhere of course, but the new development model as a replacement for VB6 COM components was surely exciting at the time for reasons that feel pretty quaint now (real stack traces! actual inheritance! garbage collection instead of reference counting!).

Granted, I was pretty horrified the first time I had to use WebForms on a real project because that felt like such a big step backward from ASP Classic to me at the time, but it was still an exciting time. I remember reading about Project Indigo (what became WCF) and thinking it sounded like a fantastic solution to some problems we’d struggled to solve with our earlier tools — then hardly ever used WCF after all that excitement.

I’ve certainly done development in other platforms here and there, but never really made any kind of serious move to get off of .NET, partially or mostly because of my time investment in OSS tools on .NET.

Anyway, there’s no real point to this blog post other than that Thanksgiving week and .NET Conf last week made me think about my initial foray into .NET. For all the stumbles along the way and years of angst about feeling like I was in a 2nd class development community and platform, I think .NET is substantially better right now as a result of the .NET Core wave of initiatives than it ever has been, and hopefully it has a bright future since I’m still making a big career bet on it!

Imagining a Better Integration Testing Tool

I might be building out a new integration testing library named “Bobcat”, but mostly I just think that bobcats are cool. Scary looking though when you see them in the wild. They came back to the area where I grew up when the wild turkey population recovered in the 80’s/90’s.

Let’s say that you’re good at writing testable code, and you’re able to write isolated unit tests for the vast majority of your domain logic and much of your coordination workflow type of logic as well. Great, but that still leaves you with the need to write some type of integration test suite and maybe even some modicum of end to end tests through *shudder* Playwright, Selenium, or Cypress.io — not that I’m saying those tools are bad per se, but the technical challenge and overhead for succeeding with them can be through the roof.

Right now I’m in a spot where the vast majority of the testing for Marten and Wolverine is integration tests of one sort or another that involve databases and message brokers. Most of these tests behave just fine, as Marten especially is well suited for integration testing, and I’ve invested in some helpful integration test support helpers built directly into Wolverine itself. Cool, but, um, there’s embarrassingly enough a significant number of “blinking” tests (unreliable automated tests) in Marten and far more in Wolverine. Moreover, there are a number of other tests that behave perfectly on my high powered development box, but blink in continuous integration test runs.

And I know what you’re thinking: in this case it’s not at all because of shared, static state, which is the typical root cause of many blinking test issues. There might very well be some race conditions we haven’t perfectly ironed out yet, but basically all of these ill behaved tests are testing asynchronous processing and will run into timeout failures inside a larger test suite run, but generally work reliably when run one at a time.

At this point, I don’t feel like the tooling (mostly xUnit.Net) we’re currently using for automating integration testing is perfectly appropriate for what we’re trying to do, and I’m ready to consider doing something a little bit custom — especially because much of the development coming my way soon is going to involve the exact kind of asynchronous behavior that’s already giving us trouble.

I am also currently helping a JasperFx Software client formulate an integration testing approach across multiple, collaborating micro services, so integration testing is top of mind for me right now.

Integration Testing Challenges

  • Understanding what’s happening inside your system in the case of failures
  • Testing timeouts, especially if you’re needing to test asynchronous processing
  • “Knowing” when asynchronous work is complete and delaying the *assertions* until those asynchronous actions are really complete
  • Having a sense for whether a long running integration test is proceeding, or hung
  • Data setup, especially in problem domains that require quite a bit of data in tests
  • Making the expression of the test as declarative as possible to make the test clear in its intentions
  • Preventing the test from being too tightly coupled to the internals of the system so the test isn’t too brittle when the system internals change
  • Being able to make the test suite *fail fast* when the system is detected to be in an invalid state — don’t blow this off, this can be a huge problem if you’re not careful
  • Selectively and intelligently retrying “blinking” tests — and yeah, you should try really hard to not need this capability, but you might no matter how hard you try

Scattered Thoughts about Possible Approaches

In the case of Wolverine and Marten’s “async daemon” testing, I think a large part of our problems are due to thread pool exhaustion as we rapidly spin up and down different IHost applications. My thought is not just to do test retries when we detect timeout failures, but also to do some level of exponential backoff to pause before starting the next test to let things simmer down in the runtime and let the thread pool snap back. In the worst case, I’d also like to consider a test runner implementation where a separate test manager process could trash and restart the actual test running process in certain cases (Storyteller could actually do this somewhat).

As far as “knowing” when asynchronous work is complete across multiple running processes, I want to kick the tires on a distributed version of Wolverine’s in-process message tracking integration testing support that can tell you when all outstanding work is complete across threads. I definitely want this built into Wolverine itself some day, but I need to help a client do this in an architecture that doesn’t include Wolverine. With a tip from Martin Thwaites, maybe some kind of tracking using ActivitySource?
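
For context, the existing in-process version of that tracking looks something like this inside a test (a rough sketch of the current single-process API; the distributed flavor is what I’m imagining above):

// The Wolverine.Tracking extension methods on IHost send a message,
// then "wait" until all cascading work in the current process has
// finished before returning, so assertions run against a settled system
var session = await host.SendMessageAndWaitAsync(new LogIncident(customerId, contact, "It's broken"));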

As far as tooling is concerned, I’ve contemplated forking xUnit.Net for a hot second, or writing a more “robust” test runner that can work with xUnit.Net types, resurrecting Storyteller (very unlikely), or building something new (“bobcat”?) and dogfooding it the whole way on mostly Wolverine tests. I’m not ready to talk about that much yet — and I’m well aware of the challenges in trying to tackle something like that after the time I invested in Storyteller in the mid to late 2010’s.

For distributed testing, I am intrigued right now by the new Project Aspire from Microsoft as a way to bootstrap and monitor a local or testing environment for integration testing.

I am certainly considering using SpecFlow for a completely different client where they could really use business person readable BDD specifications, but I don’t see SpecFlow by itself doing much to help us out with Wolverine development.

So, stay tuned if you’re interested in the journey here, or have some solid suggestions for us!

Publishing Events from Marten through Wolverine

Aren’t martens really cute?

By the way, JasperFx Software is up and running for formal support plans for both Marten and Wolverine!

Wolverine 1.11.0 was released this week (here’s the release notes) with a small improvement to its ability to subscribe to Marten events captured within Wolverine message handlers or HTTP endpoints. Since Wolverine 1.0, users have been able to opt into having Marten forward events captured within Wolverine handlers to any known Wolverine subscribers for that event with the EventForwardingToWolverine() option.

The latest Wolverine release adds the ability to automatically publish an event as a different message using the event data and its metadata as shown in the sample code below:

builder.Services.AddMarten(opts =>
{
    var connectionString = builder.Configuration.GetConnectionString("marten");
    opts.Connection(connectionString);
})
    // Adds Wolverine transactional middleware for Marten
    // and the Wolverine transactional outbox support as well
    .IntegrateWithWolverine()
    
    .EventForwardingToWolverine(opts =>
    {
        // Setting up a little transformation of an event with its event metadata to an internal command message
        opts.SubscribeToEvent<IncidentCategorised>().TransformedTo(e => new TryAssignPriority
        {
            IncidentId = e.StreamId,
            UserId = e.Data.UserId
        });
    });
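
For reference, TryAssignPriority up above is nothing but a plain command message. Something like this sketch, matching the properties used in the transformation (the real type in the sample application may differ):

public class TryAssignPriority
{
    // Identity of the incident stream the event came from
    public Guid IncidentId { get; set; }

    public Guid UserId { get; set; }
}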

This isn’t a general purpose outbox, but rather immediately publishes the captured events based on normal Wolverine publishing rules at the time the Marten transaction is committed.

So in this sample handler:

public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();
    
    // This Wolverine handler appends an IncidentCategorised event to an event stream
    // for the related IncidentDetails aggregate referred to by the CategoriseIncident.IncidentId
    // value from the command
    [AggregateHandler]
    public static IEnumerable<object> Handle(CategoriseIncident command, IncidentDetails existing)
    {
        if (existing.Category != command.Category)
        {
            // Wolverine will transform this event to a TryAssignPriority message
            // on the successful commit of the transaction wrapping this handler call
            yield return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }
    }
}

To try to close the loop, when Wolverine handles the CategoriseIncident message, it will:

  1. Potentially append an IncidentCategorised event to the referenced event stream
  2. Try to transform that event to a new TryAssignPriority message
  3. Commit the changes queued up to the underlying Marten IDocumentSession unit of work
  4. If the transaction is successful, publish the TryAssignPriority message — which in this sample case would be routed to a local queue within the Wolverine application and handled in a different thread later

That’s a lot of text and gibberish, but all I’m trying to say is that you can make Wolverine reliably react to events captured in the Marten event store.

JasperFx Software Announces Support Plans for Marten and Wolverine

JasperFx Software is officially ready to provide formal paid support for Marten, Wolverine, or any other JasperFx dependency of the previous two tools. Starting now, potential users can feel safe in adopting Marten or Wolverine for mission critical applications knowing that there’s a backing company ready to support them in their usage of these tools.

Check out our support plans today and please feel free to contact us at sales@jasperfx.net with any questions, or hop onto the JasperFx Software Discord channel and we’ll be happy to chat about what may or may not make sense for your shop.

This is an important step for us to make the Critter Stack a viable platform over the long term for our users. Our long term goal is to make the “Critter Stack” the single best set of tooling for creating server side applications in .NET in general and more specifically the best tooling for building solutions with Event Sourcing and CQRS on any platform.

In the future, we’ll also be announcing add on products for the Critter Stack tools that will provide advanced usages and deployment strategies including:

  • Zero downtime deployments
  • A blue/green deployment capability
  • Enhanced scalability for the Marten event store functionality
  • Management consoles for Wolverine messaging, dead letter queue management, and Marten event storage and projection oversight

We would like to thank our vibrant community for all of your support, encouragement, and enthusiasm over these past years since Marten arrived on the scene in 2016. We look forward to engaging with you all as we embark on building a sustainable business model around these tools.

Critter Stack at .NET Conf 2023

JasperFx Software will be shortly announcing the availability of official support plans for Marten, Wolverine, and other JasperFx open source tools. We’re working hard to build a sustainable ecosystem around these tools so that companies can feel confident in making a technical bet on these high productivity tools for .NET server side development.

I’ll be presenting a short talk at .NET Conf 2023 entitled “CQRS with Event Sourcing using the Critter Stack.” It’s going to be a quick dive into how to use Marten and Wolverine to build a very small system utilizing a CQRS Architecture with Event Sourcing as the persistence strategy.

Hopefully, I’ll be showing off:

  • How Wolverine’s runtime architecture is significantly different than other .NET tools and why its approach leads to much lower code ceremony and potentially higher performance
  • Marten and PostgreSQL providing a great local developer story both in development and in integration testing
  • How the Wolverine + Marten integration makes your domain logic easily unit testable without resorting to complicated Clean/Onion/Hexagonal Architectures
  • Wolverine’s built in integration testing support that you’ll wish you had today in other .NET messaging tools
  • The built in tooling for unraveling Wolverine or Marten’s “conventional magic”

Here’s the talk abstract:

CQRS with Event Sourcing using the “Critter Stack”

Do you have a system that you think would be a good fit for a CQRS architecture that also uses Event Sourcing for at least part of its persistence strategy? Are you intimidated by the potential complexity of that kind of approach? Fear not, using a combination of the PostgreSQL-backed Marten library for event sourcing and its newer friend Wolverine for command handling and asynchronous messaging, I’ll show you how you can quickly get started with both CQRS and Event Sourcing. Once we get past the quick start, I’ll show you how the Critter Stack’s unique approach to the “Decider” pattern will help you create robust command handlers with very little code ceremony while still enjoying easy testability. Moving beyond basic command handling, I’ll show you how to reliably subscribe to and publish the events or other messages created by your command handlers through Wolverine’s durable outbox and direct subscriptions to Marten’s event storage.

Low Ceremony Web Service Development with the Critter Stack

You can’t really get Midjourney to create an image of a wolverine without veering into trademark violations, so look at the weasel and marten up there working on a website application together!

Wolverine 1.10 was released earlier this week (here’s the release notes), and one of the big additions this time around was some new recipes for combining Marten and Wolverine for very low ceremony web service development.

Before I show the new functionality, let’s imagine that you have a simple web service for invoicing where you’re using Marten as a document database for persistence. You might have a very simplistic web service for exposing a single Invoice like this (and yes, I know you’d probably want to do some kind of transformation to a view model but put that aside for a moment):

    [WolverineGet("/invoices/longhand/id")]
    [ProducesResponseType(404)] 
    [ProducesResponseType(200, Type = typeof(Invoice))]
    public static async Task<IResult> GetInvoice(
        Guid id, 
        IQuerySession session, 
        CancellationToken cancellationToken)
    {
        var invoice = await session.LoadAsync<Invoice>(id, cancellationToken);
        if (invoice == null) return Results.NotFound();

        return Results.Ok(invoice);
    }

It’s not that much code, but there’s still some repetitive boilerplate, especially if you’re going to be a completist about your OpenAPI metadata. The design and usability aesthetic of Wolverine is to reduce code ceremony as much as possible without sacrificing performance or observability, so let’s look at a newer alternative.

Next, I’m going to install the new WolverineFx.Http.Marten NuGet package into our web service project and write this new endpoint using the [Document] attribute:

    [WolverineGet("/invoices/{id}")]
    public static Invoice Get([Document] Invoice invoice)
    {
        return invoice;
    }

The code up above is an exact functional equivalent of the first code sample, and it even produces the exact same OpenAPI metadata (or at least tries to; OpenAPI has been a huge bugaboo for Wolverine because so much of the support inside of ASP.NET Core is hard wired for MVC Core). Notice, though, how much less you have to do. The method is synchronous, so that’s a little less ceremony. It’s a pure function, so even if there were code to transform the invoice data to an API specific shape, you could unit test this method without any infrastructure involved or without using something like Alba. Heck, Wolverine itself is now handling the “return 404 if the Invoice is not found” behavior, as shown in this test from Wolverine itself (using Alba):

    [Fact]
    public async Task returns_404_on_id_miss()
    {
        // Using Alba to run a request for a non-existent
        // Invoice document
        await Scenario(x =>
        {
            x.Get.Url("/invoices/" + Guid.NewGuid());
            x.StatusCodeShouldBe(404);
        });
    }
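
And because Get() is a pure function, the happy path needs no test harness at all. A minimal sketch (the endpoint class name and the Shouldly assertion are my assumptions here):

    [Fact]
    public void get_returns_the_loaded_invoice()
    {
        var invoice = new Invoice();

        // Just a method call, no database or HTTP pipeline in sight
        InvoicesEndpoint.Get(invoice).ShouldBeSameAs(invoice);
    }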

Simple enough, but now let’s look at a new HTTP-centric mechanism for the Wolverine + Marten “Aggregate Handler” workflow for writing CQRS “Write” handlers using Marten’s event sourcing. You might want to glance at the previous link for more context before proceeding, or refer back to it later at least.

The main change here is that folks asked to be able to provide the aggregate identity through a route parameter, and then to enforce a 404 response code if the aggregate does not exist.

Using an “Order Management” problem domain, here’s what an endpoint method to ship an existing order could look like:

    [WolverinePost("/orders/{orderId}/ship2"), EmptyResponse]
    // The OrderShipped return value is treated as an event being posted
    // to a Marten event stream
    // instead of as the HTTP response body because of the presence of 
    // the [EmptyResponse] attribute
    public static OrderShipped Ship(ShipOrder2 command, [Aggregate] Order order)
    {
        if (order.HasShipped) 
            throw new InvalidOperationException("This has already shipped!");
        
        return new OrderShipped();
    }

Notice the new [Aggregate] attribute on the Order argument. At runtime, this code is going to:

  1. Take the “orderId” route argument, parse that to a Guid (because that’s the identity type for an Order)
  2. Use that identity — and any version information on the request body or a “version” route argument — with Marten’s FetchForWriting() mechanism to both load the latest version of the Order aggregate and opt into optimistic concurrency protection for that event stream.
  3. Return a 404 response if the aggregate does not already exist
  4. Pass the Order aggregate into the actual endpoint method
  5. Take the OrderShipped event returned from the method, and apply that to the Marten event stream for the order
  6. Commit the Marten unit of work

As always, the goal of this workflow is to turn Wolverine endpoint methods into low ceremony, synchronous pure functions that are easily testable with unit tests.
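
As a quick illustration of that testability, a unit test for the Ship() method above could be as simple as this sketch (the Order members, the ShipOrder2 construction, and the endpoint class name are my assumptions):

    [Fact]
    public void shipping_an_unshipped_order_returns_the_event()
    {
        var order = new Order { HasShipped = false };

        // Pure function: no Marten, no HTTP, just inputs and outputs
        OrderEndpoint.Ship(new ShipOrder2(), order)
            .ShouldBeOfType<OrderShipped>();
    }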

Wolverine and Serverless

I’ve recently fielded some user problems with Wolverine’s transactional inbox/outbox subsystem going absolutely haywire. After asking a plethora of questions, I finally realized that the underlying issue was using Wolverine within AWS Lambda or Azure Function functions where the process is short lived.

Wolverine has heretofore been optimized for running in multiple, long lived process nodes because that’s typical for asynchronous messaging architectures. Because a short lived process never gets a chance to cleanly shut down Wolverine’s background processing, users were getting a ton of junk data in Wolverine’s durable message tables, and that was causing all kinds of aggravation.

To nip that problem in the bud, Wolverine 1.10 introduced a new concept of durability modes to allow you to optimize Wolverine for different types of basic usage:

public enum DurabilityMode
{
    /// <summary>
    /// The durability agent will be optimized to run in a single node. This is very useful
    /// for local development where you may be frequently stopping and restarting the service
    ///
    /// All known agents will automatically start on the local node. The recovered inbox/outbox
    /// messages will start functioning immediately
    /// </summary>
    Solo,
    
    /// <summary>
    /// Normal mode that assumes that Wolverine is running on multiple load balanced nodes
    /// with messaging active
    /// </summary>
    Balanced,
    
    /// <summary>
    /// Disables all message persistence to optimize Wolverine for usage within serverless functions
    /// like AWS Lambda or Azure Functions. Requires that all endpoints be inline
    /// </summary>
    Serverless,
    
    /// <summary>
    /// Optimizes Wolverine for usage as strictly a mediator tool. This completely disables all node
    /// persistence including the inbox and outbox 
    /// </summary>
    MediatorOnly
}

Focusing on just the serverless scenario, you want to turn off all of Wolverine’s durable node tracking, leader election, agent assignment, and long running background processes of all types — and now you can do that just fine like so:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Services.AddMarten("some connection string")

            // This adds quite a bit of middleware for 
            // Marten
            .IntegrateWithWolverine();
        
        // You want this maybe!
        opts.Policies.AutoApplyTransactions();

        // But wait! Optimize Wolverine for usage within Serverless
        // and turn off the heavy duty, background processes
        // for the transactional inbox/outbox
        opts.Durability.Mode = DurabilityMode.Serverless;
    }).StartAsync();
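
Relatedly, while it has nothing to do with serverless, the Solo mode from that enum is a nice quality of life option for local development where you’re frequently stopping and restarting the service. A sketch mirroring the sample above:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Solo mode optimizes Wolverine for running on a single node,
        // so all agents and any recovered inbox/outbox messages start
        // processing immediately on this one node
        opts.Durability.Mode = DurabilityMode.Solo;
    }).StartAsync();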

There’s also some further documentation online about optimizing Wolverine within serverless functions.