Building a Critter Stack Application: Event Storming

Hey, did you know that JasperFx Software is ready for formal support plans for Marten and Wolverine? Not only are we trying to make the “Critter Stack” tools be viable long term options for your shop, we’re also interested in hearing your opinions about the tools and how they should change.

I did a series of presentations a couple of weeks ago showing off the usage of Wolverine and Marten to build a small service using CQRS and Event Sourcing, and you can see the video above from .NET Conf 2023. That’s great, but I thought the talk was way too dense, and I’m going to rebuild it from scratch before CodeMash. I’ve also got a relatively complete sample application, and we get a lot of feedback that there needs to be a single, more realistic sample application to show off what Marten and Wolverine can actually do. Based on other feedback, I know there’s some value in having a series of short, focused posts that build up a sample application one little concept at a time.

To that end, this post will be the start of a multi-part series showing how to use Marten and Wolverine for a CQRS architecture in an ASP.Net Core web service that also uses event sourcing as a persistence strategy.

The series so far:

I blatantly stole (with permission) this sample application idea from Oskar Dudycz. His version of the app is also on GitHub.

If you’re reading this post, it’s very likely you’re a software professional and you’re already familiar with online incident tracking applications — but hey, let’s build yet another one for a help desk company just because it’s a problem domain you’re likely (all too) familiar with!

Let’s say that you’re magically able to get your help desk business experts and stakeholders in a room (or virtual meeting) with the development team all at one time. Crazy, I know, but bear with me. Since you’re all together, this is a fantastic opportunity to get the new system started with a very collaborative approach called Event Storming that works very well for both event sourcing and CQRS.

The format is pretty simple. Go to any office supply company and get the typical pack of sticky notes like these:

Start by asking the business experts to describe events within the desired workflow that would lead to a change in state or a milestone in the business process. Try to record their terminology on orange sticky notes with a short name that generally implies a past event. In the case of an incident service, those events might be:

  • IncidentLogged
  • IncidentCategorised
  • IncidentResolved

This isn’t waterfall, so you can happily jump back and forth between steps here, but the next general step is to try to identify the actions or “commands” in the system that would cause each of our previously identified events. Jot these commands down on blue sticky notes with a short name in an imperative form like “LogIncident” or “CategoriseIncident”. Create some record of cause and effect by putting the blue sticky command notes just to the left of the orange sticky notes for the related events.

It’s also helpful to organize the sticky notes roughly left to right to give some context to which commands or events happen in what order (which I did not do in my crude diagram below).

Even though my graphic below doesn’t do this, it’s perfectly possible for the relationship between commands and events to be one command to many events.

In the course of executing these newly discovered commands, we can start to call out possible “views” of the raw event data that we might need as necessary context. We’ll record these views with a short descriptive name on green sticky notes.
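To foreshadow where those green sticky notes lead, a “view” name discovered in the session can translate directly into a read model type in code. As a purely hypothetical sketch (this shape and these property names are mine, not from the real sample domain):

```csharp
using System;

// Hypothetical read model corresponding to a green sticky note like
// "Incident Summary" -- the shape here is illustrative only
public record IncidentSummary(
    Guid Id,
    string Category,
    string Status,
    int EventCount
);
```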

After some time, our wall should be covered in sticky notes in a manner something like this:

Right off the bat, we’re learning what the DDD folks call the ubiquitous language for our business domain that can be shared between us technical folks and the business domain experts. Moreover, as we’ll see in later posts, these names from what is ostensibly a requirements gathering session can translate directly to actual code artifact names.

My experience with Event Storming has been very positive, but I’d guess that it depends on how cooperative and collaborative your business partners are with this format. I found it to be a great format to talk through a system’s requirements in a way that provides actual traceability to code implementation details. In other words, when you talk with the business folks and speak in terms of an IncidentLogged, there will actually be a type in your codebase like this:

public record IncidentLogged(
    Guid CustomerId,
    Contact Contact,
    string Description,
    Guid LoggedBy
);

or LogIncident:

public record LogIncident(
    Guid CustomerId,
    Contact Contact,
    string Description
);

Help Desk API

Just for some context, I’m going to step through the creation of a web service with a handful of endpoints to create, read, or alter a help desk incident. In much later posts, I’ll talk about publishing internal events to take action asynchronously within the web service, and also about publishing other events externally to completely different systems through RabbitMQ queues.

The “final” code is the CritterStackHelpDesk application under my GitHub profile.

I’m not going to go near a user interface for now, but someone is working up improvements that would put a user interface on top, with this service acting as a Backend for Frontend (BFF).

Summary

Event Storming can be a very effective technique for collaboratively discovering system requirements and understanding the system’s workflow with your domain experts. For developers and testers, it can also help create traceability between the requirements and the actual code artifacts without labor-intensive traceability matrix documentation.

Next time…

In the next post in this new series, I’ll introduce the event sourcing functionality with just Marten completely outside of any application just to get comfortable with Marten mechanics before we go on.

Tell Us What You Want in Marten and Wolverine!

I can’t prove this conclusively, but the cure for getting the “tell me what you want, what you really, really want” song out of your head is probably to go fill out the linked survey on Marten and Wolverine!

As you may know, JasperFx Software is now up and able to offer formal support contracts to help users be successful with the open source Marten and Wolverine tools (the “Critter Stack”). As the next step in our nascent plan to create a sustainable business model around the Critter Stack tools, we’d really like to elicit some feedback from our users or potential users about what features your team would be most interested in next. And to be clear, we’re specifically thinking about complex features that would be part of a paid add on model to the Critter Stack for advanced usages.

We’d love to get any feedback for us you might have in this Google Form.

Some existing ideas for paid features include:

  • A module for GDPR compliance
  • A dead letter queue browser application for Wolverine that would also help you selectively replay messages
  • The ability to dynamically add new tenant databases for Marten + Wolverine at runtime with no downtime
  • Improved asynchronous projection support in Marten, including better overall throughput and the ability to load balance projections across running nodes
  • Zero downtime projection rebuilds with asynchronous Marten event store projections
  • The capability to do blue/green deployments with Marten event store projections
  • A virtual actor capability for Wolverine
  • A management and monitoring user interface for Wolverine + Marten that would give you insights about running nodes, active event store projections, messaging endpoint health, node assignments
  • DevOps recipes for the Critter Stack?

20+ Years of .NET and Me

I realized this weekend that I’ve worked with .NET now for over 20 years. During the Thanksgiving week of 2002 when it was unusually quiet in the office because most folks were out, I spent three enjoyable days learning about the new .NET runtime and C# to build out a proof of concept for replacing a very problematic VB6 and Oracle PL/SQL system at my then company.

That PoC didn’t go anywhere of course, but the new development model as a replacement for VB6 COM components was surely exciting at the time for reasons that feel pretty quaint now (real stack traces! actual inheritance! garbage collection instead of reference counting!).

Granted, I was pretty horrified the first time I had to use WebForms on a real project because that felt like such a big step backward from ASP Classic to me at the time, but it was still an exciting time. I remember reading about Project Indigo (what became WCF) and thinking it sounded like a fantastic solution to some problems we’d struggled to solve with our earlier tools — then hardly ever used WCF after all that excitement.

I’ve certainly done development in other platforms here and there, but never really made any kind of serious move to get off of .NET, partially or mostly because of my time investment in OSS tools on .NET.

Anyway, there’s no real point to this blog post other than that Thanksgiving week and .NET Conf last week made me think about my initial foray into .NET. For all the stumbles along the way and years of angst about feeling like I was in a second class development community and platform, I think .NET is substantially better right now as a result of the .NET Core wave of initiatives than it has ever been, and it hopefully has a bright future, since I’m still making a big career bet on it!

Imagining a Better Integration Testing Tool

I might be building out a new integration testing library named “Bobcat”, but mostly I just think that bobcats are cool. Scary looking though when you see them in the wild. They came back to the area where I grew up when the wild turkey population recovered in the 80’s/90’s.

Let’s say that you’re good at writing testable code, and you’re able to write isolated unit tests for the vast majority of your domain logic and much of your coordination workflow type of logic as well. Great, but that still leaves you with the need to write some type of integration test suite and maybe even some modicum of end-to-end tests through *shudder* Playwright, Selenium, or Cypress.io. Not that I’m saying those tools are bad per se, but the technical challenge and overhead of succeeding with them can be through the roof.

Right now I’m in a spot where the vast majority of the testing for Marten and Wolverine consists of integration tests of one sort or another that involve databases and message brokers. Most of these tests behave just fine, as Marten especially is well suited for integration testing, and I’ve invested in some helpful integration test support helpers built directly into Wolverine itself. Cool, but, um, there’s embarrassingly enough a significant number of “blinking” tests (unreliable automated tests) in Marten and far more in Wolverine. Moreover, there are a number of other tests that behave perfectly on my high powered development box, but blink in continuous integration test runs.
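For reference, here’s roughly what those built-in Wolverine test helpers look like in use. This is a hedged sketch from memory (the Ping message type is hypothetical); check the Wolverine testing documentation for the exact API surface:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Wolverine.Tracking;

public static class TrackedSessionSample
{
    // Hypothetical message type, purely for illustration
    public record Ping(Guid Id);

    public static async Task ExerciseHandler(IHost theHost)
    {
        // Invoke the message inline, then block until all cascading
        // messages spawned by the handler have also been processed
        var tracked = await theHost.InvokeMessageAndWaitAsync(new Ping(Guid.NewGuid()));

        // The tracked session records every message sent, received,
        // and handled during the activity, which you can assert against
    }
}
```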

And I know what you’re thinking, but in this case it’s not at all because of shared, static state, which is the typical root cause of many blinking test issues. There might very well be some race conditions we haven’t perfectly ironed out yet, but basically all of these ill-behaved tests are testing asynchronous processing and will run into timeout failures inside a larger test suite run, yet generally work reliably when run one at a time.

At this point, I don’t feel like the tooling (mostly xUnit.Net) we’re currently using for automating integration testing is perfectly appropriate for what we’re trying to do, and I’m ready to consider doing something a little bit custom — especially because much of the development coming my way soon is going to involve the exact kind of asynchronous behavior that’s already giving us trouble.

I am also currently helping a JasperFx Software client formulate an integration testing approach across multiple, collaborating microservices, so integration testing is top of mind for me right now.

Integration Testing Challenges

  • Understanding what’s happening inside your system in the case of failures
  • Testing timeouts, especially if you need to test asynchronous processing
  • “Knowing” when asynchronous work is complete and delaying the *assertions* until those asynchronous actions are really complete
  • Having a sense for whether a long running integration test is proceeding, or hung
  • Data setup, especially in problem domains that require quite a bit of data in tests
  • Making the expression of the test as declarative as possible to make the test clear in its intentions
  • Preventing the test from being too tightly coupled to the internals of the system so the test isn’t too brittle when the system internals change
  • Being able to make the test suite *fail fast* when the system is detected to be in an invalid state — don’t blow this off, this can be a huge problem if you’re not careful
  • Selectively and intelligently retrying “blinking” tests — and yeah, you should try really hard to not need this capability, but you might no matter how hard you try

Scattered Thoughts about Possible Approaches

In the case of Wolverine and Marten’s “async daemon” testing, I think a large part of our problems are due to thread pool exhaustion as we rapidly spin up and down different IHost applications. My thought is not just to do test retries when we detect timeout failures, but also to do some level of exponential backoff, pausing before starting the next test to let things simmer down in the runtime and let the thread pool snap back. In the worst case, I’d also like to consider a test runner implementation where a separate test manager process could trash and restart the actual test running process in certain cases (Storyteller could actually do this somewhat).

As far as “knowing” when asynchronous work is complete across multiple running processes, I want to kick the tires on a distributed version of Wolverine’s in-process message tracking support for integration testing that can tell you when all outstanding work is complete across processes. I definitely want this built into Wolverine itself some day, but I need to help a client do this in an architecture that doesn’t include Wolverine. With a tip from Martin Thwaites, maybe some kind of tracking using ActivitySource?

As far as tooling is concerned, I’ve contemplated forking xUnit.Net for a hot second, or writing a more “robust” test runner that can work with xUnit.Net types, resurrecting Storyteller (very unlikely), or building something new (“bobcat”?) and dogfooding it the whole way on mostly Wolverine tests. I’m not ready to talk about that much yet — and I’m well aware of the challenges in trying to tackle something like that after the time I invested in Storyteller in the mid to late 2010’s.

For distributed testing, I am intrigued right now by the new Project Aspire from Microsoft as a way to bootstrap and monitor a local or testing environment for integration testing.

I am certainly considering using SpecFlow for a completely different client where they could really use business person readable BDD specifications, but I don’t see SpecFlow by itself doing much to help us out with Wolverine development.

So, stay tuned if you’re interested in the journey here, or have some solid suggestions for us!

Publishing Events from Marten through Wolverine

Aren’t martens really cute?

By the way, JasperFx Software is up and running for formal support plans for both Marten and Wolverine!

Wolverine 1.11.0 was released this week (here’s the release notes) with a small improvement to its ability to subscribe to Marten events captured within Wolverine message handlers or HTTP endpoints. Since Wolverine 1.0, users have been able to opt into having Marten forward events captured within Wolverine handlers to any known Wolverine subscribers for that event with the EventForwardingToWolverine() option.

The latest Wolverine release adds the ability to automatically publish an event as a different message using the event data and its metadata as shown in the sample code below:

builder.Services.AddMarten(opts =>
{
    var connectionString = builder.Configuration.GetConnectionString("marten");
    opts.Connection(connectionString);
})
    // Adds Wolverine transactional middleware for Marten
    // and the Wolverine transactional outbox support as well
    .IntegrateWithWolverine()
    
    .EventForwardingToWolverine(opts =>
    {
        // Setting up a little transformation of an event with its event metadata to an internal command message
        opts.SubscribeToEvent<IncidentCategorised>().TransformedTo(e => new TryAssignPriority
        {
            IncidentId = e.StreamId,
            UserId = e.Data.UserId
        });
    });

This isn’t a general purpose outbox, but rather immediately publishes captured events based on normal Wolverine publishing rules at the time the Marten transaction is committed.

So in this sample handler:

public static class CategoriseIncidentHandler
{
    public static readonly Guid SystemId = Guid.NewGuid();
    
    // This Wolverine handler appends an IncidentCategorised event to an event stream
    // for the related IncidentDetails aggregate referred to by the CategoriseIncident.IncidentId
    // value from the command
    [AggregateHandler]
    public static IEnumerable<object> Handle(CategoriseIncident command, IncidentDetails existing)
    {
        if (existing.Category != command.Category)
        {
            // Wolverine will transform this event to a TryAssignPriority message
            // on the successful commit of the transaction wrapping this handler call
            yield return new IncidentCategorised
            {
                Category = command.Category,
                UserId = SystemId
            };
        }
    }
}

To try to close the loop, when Wolverine handles the CategoriseIncident message, it will:

  1. Potentially append an IncidentCategorised event to the referenced event stream
  2. Try to transform that event to a new TryAssignPriority message
  3. Commit the changes queued up to the underlying Marten IDocumentSession unit of work
  4. If the transaction is successful, publish the TryAssignPriority message — which in this sample case would be routed to a local queue within the Wolverine application and handled in a different thread later

That’s a lot of text and gibberish, but all I’m trying to say is that you can make Wolverine reliably react to events captured in the Marten event store.

JasperFx Software Announces Support Plans for Marten and Wolverine

JasperFx Software is officially ready to provide formal paid support for Marten, Wolverine, or any other JasperFx dependency of the previous two tools. Starting now, potential users can feel safe in adopting Marten or Wolverine for mission critical applications knowing that there’s a backing company ready to support them in their usage of these tools.

Check out our support plans today and please feel free to contact us at sales@jasperfx.net with any questions, or hop onto the JasperFx Software Discord channel and we’ll be happy to chat about what may or may not make sense for your shop.

This is an important step for us to make the Critter Stack a viable platform over the long term for our users. Our long term goal is to make the “Critter Stack” the single best set of tooling for creating server side applications in .NET in general and more specifically the best tooling for building solutions with Event Sourcing and CQRS on any platform.

In the future, we’ll also be announcing add on products for the Critter Stack tools that will provide advanced usages and deployment strategies including:

  • Zero downtime deployments
  • A blue/green deployment capability
  • Enhanced scalability for the Marten event store functionality
  • Management consoles for Wolverine messaging, dead letter queue management, and Marten event storage and projection oversight

We would like to thank our vibrant community for all of your support, encouragement, and enthusiasm over these past years since Marten arrived on the scene in 2016. We look forward to engaging with you all as we embark on building a sustainable business model around these tools.

Critter Stack at .NET Conf 2023

JasperFx Software will be shortly announcing the availability of official support plans for Marten, Wolverine, and other JasperFx open source tools. We’re working hard to build a sustainable ecosystem around these tools so that companies can feel confident in making a technical bet on these high productivity tools for .NET server side development.

I’ll be presenting a short talk at .NET Conf 2023 entitled “CQRS with Event Sourcing using the Critter Stack.” It’s going to be a quick dive into how to use Marten and Wolverine to build a very small system utilizing a CQRS Architecture with Event Sourcing as the persistence strategy.

Hopefully, I’ll be showing off:

  • How Wolverine’s runtime architecture is significantly different than other .NET tools and why its approach leads to much lower code ceremony and potentially higher performance
  • Marten and PostgreSQL providing a great local developer story both in development and in integration testing
  • How the Wolverine + Marten integration makes your domain logic easily unit testable without resorting to complicated Clean/Onion/Hexagonal Architectures
  • Wolverine’s built in integration testing support that you’ll wish you had today in other .NET messaging tools
  • The built in tooling for unraveling Wolverine or Marten’s “conventional magic”

Here’s the talk abstract:

CQRS with Event Sourcing using the “Critter Stack”

Do you have a system that you think would be a good fit for a CQRS architecture that also uses Event Sourcing for at least part of its persistence strategy? Are you intimidated by the potential complexity of that kind of approach? Fear not: using a combination of the PostgreSQL-backed Marten library for event sourcing and its newer friend Wolverine for command handling and asynchronous messaging, I’ll show you how you can quickly get started with both CQRS and Event Sourcing. Once we get past the quick start, I’ll show you how the Critter Stack’s unique approach to the “Decider” pattern will help you create robust command handlers with very little code ceremony while still enjoying easy testability. Moving beyond basic command handling, I’ll show you how to reliably subscribe to and publish the events or other messages created by your command handlers through Wolverine’s durable outbox and direct subscriptions to Marten’s event storage.

Low Ceremony Web Service Development with the Critter Stack

You can’t really get Midjourney to create an image of a wolverine without veering into trademark violations, so look at the weasel and marten up there working on a website application together!

Wolverine 1.10 was released earlier this week (here’s the release notes), and one of the big additions this time around was some new recipes for combining Marten and Wolverine for very low ceremony web service development.

Before I show the new functionality, let’s imagine that you have a simple web service for invoicing where you’re using Marten as a document database for persistence. You might have a very simplistic web service for exposing a single Invoice like this (and yes, I know you’d probably want to do some kind of transformation to a view model but put that aside for a moment):

    [WolverineGet("/invoices/longhand/id")]
    [ProducesResponseType(404)] 
    [ProducesResponseType(200, Type = typeof(Invoice))]
    public static async Task<IResult> GetInvoice(
        Guid id, 
        IQuerySession session, 
        CancellationToken cancellationToken)
    {
        var invoice = await session.LoadAsync<Invoice>(id, cancellationToken);
        if (invoice == null) return Results.NotFound();

        return Results.Ok(invoice);
    }

It’s not that much code, but there’s still some repetitive boilerplate, especially if you’re going to be completist about your OpenAPI metadata. The design and usability aesthetic of Wolverine is to reduce code ceremony as much as possible without sacrificing performance or observability, so let’s look at a newer alternative.

Next, I’m going to install the new WolverineFx.Http.Marten NuGet to our web service project, and write this new endpoint using the [Document] attribute:

    [WolverineGet("/invoices/{id}")]
    public static Invoice Get([Document] Invoice invoice)
    {
        return invoice;
    }

The code above is an exact functional equivalent of the first code sample, and it even produces the exact same OpenAPI metadata (or at least tries to; OpenAPI has been a huge bugaboo for Wolverine because so much of the support inside of ASP.Net Core is hard wired for MVC Core). Notice, though, how much less you have to do. The method is synchronous, so that’s a little less ceremony. It’s a pure function, so even if there were code to transform the invoice data to an API-specific shape, you could unit test this method without any infrastructure involved or using something like Alba. Better yet, this sets you up so that Wolverine itself is handling the “return 404 if the Invoice is not found” behavior, as shown in this unit test from Wolverine itself (using Alba):

    [Fact]
    public async Task returns_404_on_id_miss()
    {
        // Using Alba to run a request for a non-existent
        // Invoice document
        await Scenario(x =>
        {
            x.Get.Url("/invoices/" + Guid.NewGuid());
            x.StatusCodeShouldBe(404);
        });
    }

Simple enough, but now let’s look at a new HTTP-centric mechanism for the Wolverine + Marten “Aggregate Handler” workflow for writing CQRS “write” handlers using Marten’s event sourcing. You might want to glance at the previous link for more context before proceeding, or at least refer back to it later.

The main change here is that folks asked to provide the aggregate identity through a route parameter, and then to enforce a 404 response code if the aggregate does not exist.

Using an “Order Management” problem domain, here’s what an endpoint method to ship an existing order could look like:

    [WolverinePost("/orders/{orderId}/ship2"), EmptyResponse]
    // The OrderShipped return value is treated as an event being appended
    // to a Marten event stream instead of as the HTTP response body
    // because of the presence of the [EmptyResponse] attribute
    public static OrderShipped Ship(ShipOrder2 command, [Aggregate] Order order)
    {
        if (order.HasShipped) 
            throw new InvalidOperationException("This has already shipped!");
        
        return new OrderShipped();
    }

Notice the new [Aggregate] attribute on the Order argument. At runtime, this code is going to:

  1. Take the “orderId” route argument, parse that to a Guid (because that’s the identity type for an Order)
  2. Use that identity (and any version information on the request body or a “version” route argument) with Marten’s FetchForWriting() mechanism to both load the latest version of the Order aggregate and opt into optimistic concurrency protection on that event stream.
  3. Return a 404 response if the aggregate does not already exist
  4. Pass the Order aggregate into the actual endpoint method
  5. Take the OrderShipped event returned from the method, and apply that to the Marten event stream for the order
  6. Commit the Marten unit of work
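Since step 3 above gives us the 404 behavior for free, it can be verified with the same Alba style shown earlier in this post. A sketch, assuming the same test harness as the invoice example and that ShipOrder2 has a default constructor:

```csharp
[Fact]
public async Task ship_returns_404_for_unknown_order()
{
    // The [Aggregate] binding should short circuit with a 404 when
    // no Order event stream exists for the route's identity
    await Scenario(x =>
    {
        x.Post.Json(new ShipOrder2()).ToUrl($"/orders/{Guid.NewGuid()}/ship2");
        x.StatusCodeShouldBe(404);
    });
}
```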

As always, the goal of this workflow is to turn Wolverine endpoint methods into low ceremony, synchronous pure functions that are easily testable with unit tests.

Wolverine and Serverless

I’ve recently fielded some user problems with Wolverine’s transactional inbox/outbox subsystem going absolutely haywire. After asking a plethora of questions, I finally realized that the underlying issue was using Wolverine within AWS Lambda or Azure Functions, where the process is short lived.

Wolverine has heretofore been optimized for running in multiple, long lived process nodes, because that’s typical for asynchronous messaging architectures. By not getting a chance to cleanly shut down its background processing, users were getting a ton of junk data in Wolverine’s durable message tables that was causing all kinds of aggravation.

To nip that problem in the bud, Wolverine 1.10 introduced a new concept of durability modes to allow you to optimize Wolverine for different types of basic usage:

public enum DurabilityMode
{
    /// <summary>
    /// The durability agent will be optimized to run in a single node. This is very useful
    /// for local development where you may be frequently stopping and restarting the service
    ///
    /// All known agents will automatically start on the local node. The recovered inbox/outbox
    /// messages will start functioning immediately
    /// </summary>
    Solo,
    
    /// <summary>
    /// Normal mode that assumes that Wolverine is running on multiple load balanced nodes
    /// with messaging active
    /// </summary>
    Balanced,
    
    /// <summary>
    /// Disables all message persistence to optimize Wolverine for usage within serverless functions
    /// like AWS Lambda or Azure Functions. Requires that all endpoints be inline
    /// </summary>
    Serverless,
    
    /// <summary>
    /// Optimizes Wolverine for usage as strictly a mediator tool. This completely disables all node
    /// persistence including the inbox and outbox 
    /// </summary>
    MediatorOnly
}
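As a side note, the Solo mode above is handy for local development. Here’s a sketch of switching it on conditionally, assuming the UseWolverine overload that exposes the hosting context:

```csharp
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine((context, opts) =>
    {
        // Run all agents on this one node while developing locally,
        // where the service is frequently stopped and restarted
        if (context.HostingEnvironment.IsDevelopment())
        {
            opts.Durability.Mode = DurabilityMode.Solo;
        }
    }).StartAsync();
```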

Focusing on just the serverless scenario, you want to turn off all of Wolverine’s durable node tracking, leader election, agent assignment, and long running background processes of all types — and now you can do that just fine like so:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Services.AddMarten("some connection string")

            // This adds quite a bit of middleware for 
            // Marten
            .IntegrateWithWolverine();
        
        // You want this maybe!
        opts.Policies.AutoApplyTransactions();
        
        
        // But wait! Optimize Wolverine for usage within Serverless
        // and turn off the heavy duty, background processes
        // for the transactional inbox/outbox
        opts.Durability.Mode = DurabilityMode.Serverless;
    }).StartAsync();

There’s also some further documentation online about optimizing Wolverine within serverless functions.

Integration Testing GraphQL Endpoints with Alba

I’m helping a JasperFx Software client get a new system off the ground that’s using both Hot Chocolate for GraphQL and Marten for event sourcing and general persistence. That’s led to a couple blog posts so far:

Today though, I want to talk about some early ideas for automating integration testing of GraphQL endpoints. Before I show my intended approach, here’s a video from ChiliCream (the company behind Hot Chocolate) showing their recommendations for testing:

Now, to be honest, I don’t agree with their recommended approach. I played a lot of sports growing up in a small town, and one of my coaches’ favorite sayings actually applies here:

If you want to be good, practice like you play

every basketball coach I ever played for

That saying really just meant to try to do things well in practice so that it would carry right through into the real games. In the case of integration testing, I want to be testing against the “real” application configuration including the full ASP.Net Core middleware stack and the exact Marten and Hot Chocolate configuration for the application instead of against a separately constructed IoC and Hot Chocolate configuration. In this particular case, the application is using multi-tenancy through a separate database per tenant strategy with the tenant selection at runtime being ultimately dependent upon expected claims on the ClaimsPrincipal for the request.

All that being said, I’m unsurprisingly opting to use the Alba library within xUnit specifications to test through the entire application stack with just a few overrides of the application. My usual approach with xUnit.Net and Alba is to create a shared context that manages the lifecycle of the bootstrapped application in memory like so:

public class AppFixture : IAsyncLifetime
{
    public IAlbaHost Host { get; private set; }

    public async Task InitializeAsync()
    {
        // This is bootstrapping the actual application using
        // its implied Program.Main() set up
        Host = await AlbaHost.For<Program>(x => { });
    }

    public async Task DisposeAsync()
    {
        // Shut down the in memory application host cleanly
        // at the end of the test run
        await Host.DisposeAsync();
    }
}

Right off the bat, we’re bootstrapping our application with its own Program.Main() entry point, but Alba is using WebApplicationFactory behind the scenes and swapping in the in-memory TestServer in place of Kestrel. It’s also possible to make some service or configuration overrides of the application at this time.
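Just to illustrate that last point, a service override at bootstrapping time might look something like this sketch. The `IExternalPricingService` and `StubPricingService` types here are purely hypothetical stand-ins, not part of the real application:

```csharp
public async Task InitializeAsync()
{
    Host = await AlbaHost.For<Program>(x =>
    {
        // The lambda argument is the IWebHostBuilder for the
        // application, so the normal ASP.Net Core hooks apply here
        x.ConfigureServices(services =>
        {
            // Swap in a test double for a service you don't want
            // running in integration tests. These two types are
            // hypothetical examples
            services.AddSingleton<IExternalPricingService, StubPricingService>();
        });
    });
}
```

Everything else about the application configuration is left exactly as it runs in production, which is the whole point of the “practice like you play” approach.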

The xUnit.Net and Marten mechanics I’m proposing for this client are very similar to what I wrote in Automating Integration Tests using the “Critter Stack” earlier this year.
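As a rough reminder of the shape of those mechanics, the shared context wiring looks something like this sketch, assuming xUnit.Net collection fixtures (the real base type in that earlier post also resets Marten state between test runs, which I’m omitting here):

```csharp
// Binds every test class marked [Collection("integration")] to a
// single, shared AppFixture so the application is only bootstrapped
// once for the whole test collection
[CollectionDefinition("integration")]
public class IntegrationCollection : ICollectionFixture<AppFixture>
{
}

[Collection("integration")]
public abstract class IntegrationContext
{
    protected IntegrationContext(ITestOutputHelper output, AppFixture fixture)
    {
        this.output = output;
        Host = fixture.Host;
    }

    protected readonly ITestOutputHelper output;

    // The shared, in memory application host from AppFixture
    public IAlbaHost Host { get; }
}
```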

Moving on to the GraphQL mechanics, what I’ve come up with so far is to put a GraphQL query and/or mutation in a flat file within the test project. I hate not having the test inputs in the same code file as the test, but I’m trying to offset that by spitting out the GraphQL query text into the test output to make it a little easier to troubleshoot failing tests. The Alba mechanics — so far — look like this (simplified a bit from the real code):

    public Task<IScenarioResult> PostGraphqlQueryFile(string filename)
    {
        // This ugly code is just loading up the GraphQL query from
        // a named file
        var path = AppContext
            .BaseDirectory
            .ParentDirectory()
            .ParentDirectory()
            .ParentDirectory()
            .AppendPath("GraphQL")
            .AppendPath(filename);

        var queryText = File.ReadAllText(path);

        // Building up the right JSON to POST to the /graphql
        // endpoint
        var dictionary = new Dictionary<string, string>();
        dictionary["query"] = queryText;

        var json = JsonConvert.SerializeObject(dictionary);

        // Write the GraphQL query being used to the test output
        // just as information for troubleshooting
        this.output.WriteLine(queryText);

        // Using Alba to run a GraphQL request end to end
        // in memory. This would throw an exception if the 
        // HTTP status code is not 200
        return Host.Scenario(x =>
        {
            // I'm omitting some code here that we're using to mimic
            // the tenant detection in the real code

            x.Post.Url("/graphql").ContentType("application/json");

            // Dirty hackery.
            x.ConfigureHttpContext(c =>
            {
                var stream = c.Request.Body;
                
                // This encoding turned out to be necessary
                // Thank you Stackoverflow!
                var bytes = Encoding.UTF8.GetBytes(json);
                stream.Write(bytes, 0, bytes.Length);
                stream.Position = 0;
            });
        });
    }

That’s the basics of running the GraphQL request through, but part of the value of Alba in testing more traditional “JSON over HTTP” endpoints is being able to easily read the HTTP outputs with Alba’s built in helpers that use the application’s JSON serialization setup. I was missing that initially with the GraphQL usage, so I added this extra helper for testing a single GraphQL query or mutation at a time where there is a return body from the mutation:

    public async Task<T> PostGraphqlQueryFile<T>(string filename)
    {
        // Delegating to the previous method
        var result = await PostGraphqlQueryFile(filename);

        // Get the raw HTTP response
        var text = await result.ReadAsTextAsync();

        // I'm using Newtonsoft.Json to get into the raw JSON
        // a little bit
        var json = (JObject)JsonConvert.DeserializeObject(text);

        // Make the test fail if the GraphQL response had any errors
        json.ContainsKey("errors").ShouldBeFalse($"GraphQL response had errors:\n{text}");

        // Find the *actual* response within the larger GraphQL response
        // wrapper structure
        var data = json["data"].First().First().First().First();

        // This would vary a bit in your application
        var serializer = JsonSerializer.Create(new JsonSerializerSettings
        {
            ContractResolver = new CamelCasePropertyNamesContractResolver()
        });

        // Deserialize the raw JSON into the response type for
        // easier access in tests because "strong typing for the win!"
        return serializer.Deserialize<T>(new JTokenReader(data));
    }

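For comparison, that helper is just recreating what Alba already gives you out of the box for plain “JSON over HTTP” endpoints, where the response body can be deserialized directly in one call. The endpoint URL and `IncidentResponse` type below are hypothetical:

```csharp
[Fact]
public async Task traditional_json_endpoint()
{
    // With a plain JSON endpoint, Alba deserializes the response
    // body for you using the application's own JSON setup.
    // "/api/incidents/1" and IncidentResponse are made-up examples
    var incident = await Host.GetAsJson<IncidentResponse>("/api/incidents/1");

    // Assert against the strong typed response from here...
}
```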
And after all that, we end up with integration tests in test fixture classes subclassing our IntegrationContext base type like this:

public class SomeTestFixture : IntegrationContext
{
    public SomeTestFixture(ITestOutputHelper output, AppFixture fixture) : base(output, fixture)
    {
    }

    [Fact]
    public async Task perform_mutation()
    {
        var response = await this.PostGraphqlQueryFile<SomeResponseType>("someGraphQLMutation.txt");

        // Use the strong typed response object in the
        // "assert" part of your test
    }
}

Summary

We’ll see how it goes, but this harness has already given me some repeatable steps for tweaking transaction management and multi-tenancy without breaking the actual code. With the custom harness around it, I think we’ve made the GraphQL endpoint testing somewhat declarative.