Efficient Web Services with Marten V4

We’re genuinely close to finally pulling the damn trigger on Marten V4. One of the last things I’m coding for the release is a new recipe for users to write very efficient web services backed by Marten database storage.

A Traditional .Net Approach

Before I get into the exact mechanics of that, let’s set the stage a little bit and draw some contrasts with a more traditional .Net web service stack. Let’s start by saying that in that traditional stack, you’re not using any kind of CQRS and the system state is persisted in the “one true model” approach using Entity Framework Core and an RDBMS like Sql Server. Now, in some web services you need to query data from the database and serve up a subset of that data in the outgoing HTTP response from a web service endpoint. The typical flow — with a focus on what’s happening under the covers — would be:

  1. ASP.Net Core receives an HTTP GET, finds the proper endpoint, and calls the right handler for that route. Not that it actually matters, but let’s assume the endpoint handler is an MVC Core Controller method
  2. ASP.Net Core invokes a call to your DI container to build up the MVC controller object, and calls the right method for the route
  3. You’ve been to .Net conferences and internalized the idea that an MVC controller shouldn’t get too big or do things besides HTTP mediation, so you’re delegating to a tool like MediatR. MediatR itself is going to go through another DI container service resolution to find the right handler for the input model, then invoke that handler
  4. EF Core issues a query against your Sql Server database. If you’re needing to fetch data on more than just the root aggregate model, the query is going to be an outer join against all the child tables too
  5. EF Core loops around in the database rows and creates objects for your .Net domain model classes based on its ORM mappings
  6. You certainly don’t want to send the raw domain model on the wire because of coupling concerns, or don’t want every bit of data exposed to the client in a particular web service, so you use some kind of tool like AutoMapper to transform the internal domain model objects built up by EF Core into Data Transfer Objects (DTO) purposely designed to go over the wire.
  7. Lastly, you return the outgoing DTO model, which is serialized to JSON and sent down the HTTP response by MVC Core

Sound pretty common? That’s also a lot of overhead. A lot of memory allocations, data mappings between structures, JSON serialization, and a lot of dictionary lookups just to get data out of the database and spit it out into the HTTP response. It’s also a non-trivial amount of code, and I’d argue that some of the tools I mentioned are high ceremony.
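To make that ceremony a little more concrete, here’s a minimal sketch of what that traditional stack often looks like in code. To be clear, this is a hypothetical illustration and not from any real project: GetIssueQuery, IssueDto, and IssueDbContext are all invented names.

    public class GetIssueQuery : IRequest<IssueDto>
    {
        public Guid IssueId { get; set; }
    }

    public class GetIssueHandler : IRequestHandler<GetIssueQuery, IssueDto>
    {
        private readonly IssueDbContext _db;
        private readonly IMapper _mapper;

        public GetIssueHandler(IssueDbContext db, IMapper mapper)
        {
            _db = db;
            _mapper = mapper;
        }

        public async Task<IssueDto> Handle(GetIssueQuery query, CancellationToken token)
        {
            // EF Core issues the query, outer joining against the
            // child Notes table
            var issue = await _db.Issues
                .Include(x => x.Notes)
                .FirstOrDefaultAsync(x => x.Id == query.IssueId, token);

            // AutoMapper transforms the internal domain model
            // into a DTO meant for the wire
            return _mapper.Map<IssueDto>(issue);
        }
    }

    public class IssueController : ControllerBase
    {
        private readonly IMediator _mediator;

        public IssueController(IMediator mediator)
        {
            _mediator = mediator;
        }

        [HttpGet("/issue/{issueId}")]
        public Task<IssueDto> Get(Guid issueId)
        {
            // MediatR resolves the matching handler from the DI
            // container and invokes it
            return _mediator.Send(new GetIssueQuery { IssueId = issueId });
        }
    }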

Now do CQRS!

I initially thought of CQRS as looking like a whole lot more work to code, and that’s not an uncommon impression when folks are first introduced to it. I’ve come to realize over time that it’s not really more work so much as it’s really just doing about the same amount of work in different places and different times in the application’s workflow.

Now let’s at least introduce CQRS into our application architecture. I’m not saying that CQRS automatically implies using event sourcing, but let’s say that you are writing a pre-built “read side” model of the state of your system directly to a database of some sort. Now from that same web service I was describing before, you just need to fetch that persisted “read side” model from the database and spit that state right out to the HTTP response.

Now then, I’ve just yada, yada’d all the complexity of the CQRS architecture that continuously updates the read side view for you, but hey, Marten does that for you too and that can be a shortly forthcoming follow up blog post.

Finally bringing Marten V4 into play, let’s say our read side model for an issue tracking system looks like this:

    public class Note
    {
        public string UserName { get; set; }
        public DateTime Timestamp { get; set; }
        public string Text { get; set; }
    }

    public class Issue
    {
        public Guid Id { get; set; }
        public string Description { get; set; }
        public bool Open { get; set; }

        public IList<Note> Notes { get; set; }
    }

Before anyone gets bent out of shape by this, it’s perfectly possible to tell Marten to serialize the persisted documents to JSON with camel or even snake casing for more idiomatic, Javascript friendly JSON.
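For instance, a minimal sketch of opting into camel casing through Marten’s default serializer would look something like this at bootstrapping time (check the Marten documentation for the exact overloads in your version):

    services.AddMarten(opts =>
    {
        opts.Connection("your postgresql connection string");

        // Persist and serialize documents with camel cased
        // member names
        opts.UseDefaultSerialization(casing: Casing.CamelCase);
    });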

Now, let’s build out two controller endpoints, one that gives you an Issue payload by searching by its id, and a second endpoint that gives you all the open Issue models in a single JSON array payload. That controller — using some forthcoming functionality in a new Marten.AspNetCore Nuget — looks like this:

    public class GetIssueController: ControllerBase
    {
        private readonly IQuerySession _session;

        public GetIssueController(IQuerySession session)
        {
            _session = session;
        }

        [HttpGet("/issue/{issueId}")]
        public Task Get(Guid issueId)
        {
            // This "streams" the raw JSON to the HttpResponse
            // w/o ever having to read the full JSON string or
            // deserialize/serialize within the HTTP request
            return _session.Json
                .WriteById<Issue>(issueId, HttpContext);
        }

        [HttpGet("/issue/open")]
        public Task OpenIssues()
        {
            // This "streams" the raw JSON to the HttpResponse
            // w/o ever having to read the full JSON string or
            // deserialize/serialize within the HTTP request
            return _session.Query<Issue>()
                .Where(x => x.Open)
                .WriteArray(HttpContext);
        }
    }

In the GET: /issue/{issueId} endpoint, you’ll notice the call to the new IQuerySession.Json.WriteById() extension method, and how I’m passing to it the current HttpContext. That method is:

  1. Executing a database query against the underlying Postgresql database. And in this case, all of the data is stored in a single column in a single row, so there are no JOINs or sparse datasets like there would be with an ORM querying an object that has child collections.
  2. Writing the raw bytes of the persisted JSON data directly to the HttpResponse.Body without ever bothering to write the whole thing to a .Net string and definitely without having to incur the overhead of JSON deserialization/serialization. That extension method also sets the HTTP content-length and content-type response headers, as well as setting the HTTP status code to 200 if the document is found, or 404 if it is not.

In the second HTTP endpoint for GET: /issue/open, the call to WriteArray(HttpContext) is doing something very similar, but writing the results as a JSON array.
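Conceptually, the write side of both extension methods boils down to something like this little sketch (to be clear, this is not Marten’s actual implementation, just an illustration of the mechanics):

    // A conceptual sketch only, not Marten's actual implementation
    public static async Task WriteJson(HttpContext context, byte[] json, int statusCode)
    {
        var response = context.Response;
        response.StatusCode = statusCode;
        response.ContentType = "application/json";
        response.ContentLength = json.Length;

        // No intermediate string and no deserialization/serialization,
        // just the raw bytes straight to the response stream
        await response.Body.WriteAsync(json);
    }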

By no means is this technique going to be applicable to every HTTP GET endpoint, but when it is, it’s far, far more efficient and simpler to code than the more traditional approach that involves all the extra pieces and so many more memory allocations and CPU operations just to shoot some JSON down the wire.

For a little more context, here’s a test against the /issue/{issueId} endpoint, with a cameo from Alba to help me test the HTTP behavior:

        [Fact]
        public async Task stream_a_single_document_hit()
        {
            var issue = new Issue {Description = "It's bad", Open = true};

            var store = theHost.Services.GetRequiredService<IDocumentStore>();
            using (var session = store.LightweightSession())
            {
                session.Store(issue);
                await session.SaveChangesAsync();
            }

            var result = await theHost.Scenario(s =>
            {
                s.Get.Url($"/issue/{issue.Id}");
                s.StatusCodeShouldBeOk();
                s.ContentTypeShouldBe("application/json");
            });

            var read = result.ReadAsJson<Issue>();

            read.Description.ShouldBe(issue.Description);
        }

and one more test showing the “miss” behavior:

        [Fact]
        public async Task stream_a_single_document_miss()
        {
            await theHost.Scenario(s =>
            {
                s.Get.Url($"/issue/{Guid.NewGuid()}");
                s.StatusCodeShouldBe(404);
            });
        }

Feedback anyone?

This code isn’t released yet, even in an RC Nuget, so feel very free to make any kind of suggestion or send us feedback.

Integration Testing: IHost Lifecycle with NUnit

Starting yesterday, all of my content about automated testing is curated under the new Automated Testing page on this site.

I kicked off a new blog series yesterday with Integration Testing: IHost Lifecycle with xUnit.Net. I started by just discussing how to manage the lifecycle of a .Net IHost inside of an xUnit.Net testing library. I used xUnit.Net because I’m much more familiar with that library, but we mostly use NUnit for our testing at MedeAnalytics, so I’m going to see how the IHost lifecycle I discussed and demonstrated last time in xUnit.Net could work in NUnit.

To catch you up from the previous post, I have two projects:

  1. An ASP.Net Core web service creatively named WebApplication. This web service has a small endpoint that allows you to post an array of numbers and returns a response telling you the sum and product of those numbers. The code for that controller action is shown in my previous post.
  2. A second testing project using NUnit that references WebApplication. The testing project is going to use Alba for integration testing at the HTTP layer.

With NUnit, I chose to use the SetUpFixture construct to manage and share the IHost for the test suite like this:

    [SetUpFixture]
    public class Application
    {
        // Make this lazy so you don't build it out
        // when you don't need it.
        private static readonly Lazy<IAlbaHost> _host;

        static Application()
        {
            _host = new Lazy<IAlbaHost>(() => Program
                .CreateHostBuilder(Array.Empty<string>())
                .StartAlba());
        }

        public static IAlbaHost AlbaHost => _host.Value;

        // I want to expose the underlying Lamar container for some later
        // usage
        public static IContainer Container => (IContainer)_host.Value.Services;

        // Make sure that NUnit will shut down the AlbaHost when
        // all the tests are finished
        [OneTimeTearDown]
        public void Teardown()
        {
            if (_host.IsValueCreated)
            {
                _host.Value.Dispose();
            }
        }
    }

With the IHost instance managed by the Application static class above, I can consume the Alba host in an NUnit test like this:

    public class sample_integration_fixture
    {
        [Test]
        public async Task happy_path_arithmetic()
        {
            // Building the input body
            var input = new Numbers
            {
                Values = new[] {2, 3, 4}
            };

            var response = await Application.AlbaHost.Scenario(x =>
            {
                // Alba deals with Json serialization for us
                x.Post.Json(input).ToUrl("/math");
                
                // Enforce that the HTTP Status Code is 200 Ok
                x.StatusCodeShouldBeOk();
            });

            var output = response.ReadAsJson<Result>();
            output.Sum.ShouldBe(9);
            output.Product.ShouldBe(24);
        }
    }

And now a couple notes about what I did in Application:

  1. I think it’s important to create the IHost lazily, so that you don’t incur the cost of spinning up the IHost when you might be running other tests in your suite that don’t need the IHost. Rapid developer feedback is important, and that’s an awfully easy optimization that could pay off.
  2. The Teardown() method is decorated with the `[OneTimeTearDown]` attribute to direct NUnit to call that method after all the tests are executed. I cannot stress enough how important it is to clean up resources in your test harness to ensure your ability to quickly iterate through subsequent test runs.
  3. NUnit has a very different model for parallelization than xUnit.Net, and it’s completely “opt in”, so I think there’s less to worry about on that front with NUnit.
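For reference on that last point, NUnit only parallelizes test runs when you explicitly opt in with assembly-level attributes like the ones below (shown purely for illustration; leave them out and everything runs serially):

    // NUnit parallelization is strictly opt in at the assembly level
    [assembly: Parallelizable(ParallelScope.Fixtures)]
    [assembly: LevelOfParallelism(4)]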

At this point I don’t think I have a hard opinion about xUnit.Net vs. NUnit, and I certainly wouldn’t bother switching an existing project from one to the other (even though I’ve certainly done that plenty of times in the past). I haven’t thought this one through enough, but I still think that xUnit.Net is a little bit cleaner for unit testing, while NUnit might be better for integration testing because it gives you finer grained control over fixture lifecycle and has some built in support for test timeouts and retries. At one point I had high hopes for Fixie as another alternative, and that project has become active again, but it would have a long road to challenge either of the two now mainstream tools.

What’s Next?

This series is meant to support my colleagues at MedeAnalytics, so it’s driven by what we just happen to be talking about at any given point. Tomorrow I plan to put out a little post on some Lamar-specific tricks that are helpful in integration testing. Beyond that, I think dealing with database state is the most important thing we’re missing at work, so that needs to be a priority.

Integration Testing: IHost Lifecycle with xUnit.Net

I’m part of an initiative at work to analyze and ultimately improve our test automation practices. As part of that work, I’ll be blogging quite a bit about test automation starting with my brain dump on test automation last week and my most recent post on mocks and stubs last month. From here on out, I’m curating all of my posts and selected writings from other folks on my new Automated Testing page.

I’m already on record as saying that the generic host (IHost) in recent versions of .Net is one of the best things that’s ever happened to the .Net ecosystem. In my previous post I stated that I strongly prefer having the system under test running in process with the test harness for faster feedback cycles and easier debugging. The generic host builder introduced in .Net Core turns out to be a very effective way to bootstrap your system within automated test harnesses.

Before I dive into how to use the IHost in automated testing, here’s a couple issues I think you have to address in your integration testing strategy before we go willy nilly spinning up an IHost:

  • You ideally want to test against your code running in a realistic way, so the way code is bootstrapped and configured should be relatively close to how that code is started up in the real application.
  • There will inevitably need to be at least some configuration that needs to be different in testing or some services — usually accessing resources external to your system — that need to be replaced with stubs or some other kind of fake implementation.
  • It’s important to cleanly dispose or shutdown any IHost object you create in memory to avoid potential locks of resources like database connections, files, or ports. Failing to clean up resources in tests can easily make it harder to iterate through test fixes if you find yourself needing to manually kill processes or restart your IDE to release locked resources (been there, done that).
  • The IHost can be expensive to build up, and sometimes there’s going to be some serious benefit in reusing the IHost between tests to make the test suite run faster.
  • But the IHost is stateful, and there could easily be resources (singleton scoped services, databases, and whatnot) that could impact later test runs in the suite.

Before I jump into solutions, let’s assume that I have two projects:

  1. WebApplication is an ASP.Net Core web service project. WebApplication uses Lamar as its underlying DI container.
  2. A test project that references WebApplication

xUnit.Net Mechanics

I’m more comfortable with xUnit.Net, so I’m going to use that first. My typical usage is to share the IHost through xUnit.Net’s CollectionFixture mechanism (and if you think the usage of this thing is confusing, welcome to the club). First up, I’ll build out a new class I usually call AppFixture to manage the lifecycle of the IHost. The example project I’ve built here is an ASP.Net Core web service project, so I’m going to use Alba to wrap the host inside of AppFixture as shown below:

    public class AppFixture : IDisposable, IAsyncLifetime
    {
        public IAlbaHost Host { get; private set; }
        public async Task InitializeAsync()
        {
            // Program.CreateHostBuilder() is the code from the WebApplication
            // that configures the HostBuilder for the system
            Host = await Program
                .CreateHostBuilder(Array.Empty<string>())
                
                // This extension method starts up the underlying IHost,
                // but Alba replaces Kestrel with a TestServer and
                // wraps the IHost
                .StartAlbaAsync();
        }

        public Task DisposeAsync()
        {
            return Host.StopAsync();
        }

        public void Dispose()
        {
            Host?.Dispose();
        }
    }

A couple things to note in that code above:

  • As we’ll set up next, that class above will be constructed once in memory by xUnit and shared between test fixture classes
  • The Dispose() and DisposeAsync() methods both dispose the IHost. By normal .Net mechanics, that will also dispose the underlying Lamar IoC container, which will in turn dispose any services created by Lamar at runtime that implement IDisposable. Disposing the IHost also stops any registered IHostedService services that your application may be using for long running tasks (for my colleagues who may be reading this, both NServiceBus and MassTransit start and stop their message listeners in an IHostedService, so that might be in use even if you don’t explicitly use that technique).

Next, we’ll set up AppFixture to be shared between our integration test classes by using the [CollectionDefinition] attribute on a marker class:

    [CollectionDefinition("Integration")]
    public class AppFixtureCollection : ICollectionFixture<AppFixture>
    {
        
    }

Lastly, I like to build out a base class for integration tests like this one:

    [Collection("Integration")]
    public abstract class IntegrationContext
    {
        protected IntegrationContext(AppFixture fixture)
        {
            theHost = fixture.Host;
            
            // I am using Lamar as the underlying DI container
            // and want some Lamar specific things later on
            // in the tests
            Container = (IContainer)fixture.Host.Services;
        }

        public IAlbaHost theHost { get; }
        
        public IContainer Container { get; }
    }

The [Collection] attribute is meaningful here because it makes xUnit.Net run all the tests that are contained in test fixture classes inheriting from IntegrationContext serially, so we don’t have to worry about concurrent test runs.*
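As a related aside, if you also want to keep this collection from running in parallel with any other test collections, I believe xUnit.Net lets you opt out at the collection definition level like so:

    // Opts this collection out of running in parallel with
    // any other test collections
    [CollectionDefinition("Integration", DisableParallelization = true)]
    public class AppFixtureCollection : ICollectionFixture<AppFixture>
    {
    }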

And finally to bring this all together, let’s say that WebApplication has this simplistic web service code to do some arithmetic:

    public class Result
    {
        public int Sum { get; set; }
        public int Product { get; set; }
    }

    public class Numbers
    {
        public int[] Values { get; set; }
    }
    
    public class ArithmeticController : ControllerBase
    {
        [HttpPost("/math")]
        public Result DoMath([FromBody] Numbers input)
        {
            var product = 1;
            foreach (var value in input.Values)
            {
                product *= value;
            }

            return new Result
            {
                Sum = input.Values.Sum(),
                Product = product
            };
        }
    }

In the next code block, let’s finally see a test fixture class that uses the new IntegrationContext as a base class and tests the HTTP endpoint shown in the block above.

    public class ArithmeticApiTests : IntegrationContext
    {
        public ArithmeticApiTests(AppFixture fixture) : base(fixture)
        {
        }

        [Fact]
        public async Task happy_path_arithmetic()
        {
            // Building the input body
            var input = new Numbers
            {
                Values = new[] {2, 3, 4}
            };

            var response = await theHost.Scenario(x =>
            {
                // Alba deals with Json serialization for us
                x.Post.Json(input).ToUrl("/math");
                
                // Enforce that the HTTP Status Code is 200 Ok
                x.StatusCodeShouldBeOk();
            });

            var output = response.ReadAsJson<Result>();
            output.Sum.ShouldBe(9);
            output.Product.ShouldBe(24);
        }
    }

Alright, at this point we’ve got a way to share the system’s IHost in tests for better efficiency, and we’re making sure that all the resources in the IHost are cleaned up when the test suite is done. We’re using the WebApplication’s exact configuration for the IHost, but we still might need to alter that in testing. And there’s also the issue of needing to roll back state in our system between tests. I’ll pick up those subjects in my next couple posts, as well as using NUnit instead of xUnit.Net because that’s what the majority of code at my work uses for testing.

* It would be nice to be able to run parallel tests using our shared IHost, but that can often be problematic because of shared state, so I generally bypass test parallelization in integrated tests. The subject of parallelizing integration tests is worthy of a later blog post of some thoughts I haven’t quite elucidated yet.

A brain dump on automated integration testing

I’m strictly talking about automated testing in this post. I’m more or less leading an effort at work to improve our test automation and Test Driven Development practices, so I’ll be trying to blog quite a bit about related topics in the next couple months. After reviewing quite a bit of in flight code, I think I’ll try to revisit some of my old blog posts on testability design from the CodeBetter days and update those old lessons from the early days of TDD to what we’re building now.

My company builds and maintains several long running software systems with a healthy back log of feature requests, performance improvements, and stories to retire technical debt. All that is to say that we’re constantly adding to or improving existing code — which implies we’re always running some non-zero risk of creating regression defects. To keep everybody’s stress levels down, we’re taking incremental steps toward a true continuous delivery model where we can smoothly and consistently build and deploy fully tested features while being confident that we aren’t introducing regression defects.

As you’d likely guess, we’re very interested in improving our automated testing practices as a safety net to enable continuous delivery while also improving our quality in general. That leads to the next question, what kind of automated testing should we be doing? Followed by, is there automated testing we’re doing today that isn’t delivering enough bang for the buck?

To that point, let’s take a look at the classic idea of the testing pyramid, as shown below:

[Image: the testing pyramid, from Unit test: sociable or solitary]

The thinking behind the testing pyramid is that there’s a certain, healthy mix of different sorts of automated tests that efficiently lead to better results. I say a “mix” here because unit tests, though relatively cheap compared to other tests, cannot detect many defects that only come out during integration between components, code modules, or systems. From a quick search, I found worlds of memes along the lines of this one:

No integration tests, but all the unit tests pass!

To address exactly what kind of automated tests we should be writing, here’s a stream of consciousness brain dump that I later gave a patina of organization:

On End to End User Interface Tests

Any kind of test that uses a tool like Selenium to do end to end, black-box testing is going to run slowly. These tests are also frequently unstable because of asynchronous timing issues in modern browser applications. There’s an unhealthy tendency in many shops that adopt Selenium as a test automation solution to use it as their golden hammer to the exclusion of other testing techniques that can be much more efficient in certain circumstances. To put it bluntly, it’s very difficult to successfully author and maintain test automation suites based on Selenium against complicated applications. In my experience in shops that have attempted large scale Selenium usage, I do not believe that the benefits of those tests have ever outweighed the costs.

I would still recommend using some small number of end to end tests with a tool like Selenium, but those tests should be focused on proving out integration mechanisms between a user interface and backing server side code. For example, I’ve been working on a new integration of Open Id Connect (OIDC) authentication into our web services and web applications. I’ve used Playwright to automate browser testing to prove out the interactions and redirects between the OIDC service and our applications or services.

I also find Cypress.io interesting, but more for doing integration testing of our Angular applications by themselves with a dummy backend. For true end to end testing of .Net-backed web applications, I think I’m interested in replacing Selenium from here on out with Playwright, as I think and hope it does much more to help you write performant and reliable automated tests than Selenium does.

Driving a browser should not be used to automate functional testing of business logic or data services or any kind of data analysis that could possibly be tested without using the full browser.

If you insist on trying to do a lot of browser automation testing, you better invest in collecting diagnostic information in test runs that can be used by developers to debug test failures. Ideally, I like to have the application’s log output correlated to the test run somehow. I’ve worked with teams that were able to pipe the console.log() tracing from the JavaScript code running in the browser to the test results and that was extremely helpful. Taking screenshots as part of the test can certainly help. As another ideal, I very strongly recommend that any kind of browser automation tests be executable by developers on their local machine on demand for easier debugging. More on this later.
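For example, with Playwright’s .Net bindings you could capture a screenshot any time a test fails with something along these lines (just a rough sketch; the page object and url are assumed here, and you should consult the Playwright documentation for the current API):

    try
    {
        await page.GotoAsync("https://localhost:5001/login");
        // ... drive the page and make assertions ...
    }
    catch
    {
        // Attach a screenshot of the browser state to the failing
        // test run to make debugging easier
        await page.ScreenshotAsync(new PageScreenshotOptions { Path = "login-failure.png" });
        throw;
    }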

Again, if you absolutely have to write Selenium/Playwright/Cypress tests despite all of my warnings, I strongly recommend you write those tests in the same programming language as the real application. That statement is going to be controversial if any real test automation engineers stumble into this post, but I think it’s important to make it as easy as possible for developers to collaborate with test automation engineers. Moreover, I despise the kind of shadow data access layers you can get from test automation code doing its own thing to write to and read from the underlying data store of the system under test. I think it’s less likely to get that kind of insidious, hidden code duplication if the test code is written in the same programming language and even uses the system’s own data access code to set up or verify database state as part of the automated tests.

Choosing Solitary/Unit Tests or Sociable/Integration Tests

I missed out on this when it was first published, but I think I like the nomenclature of solitary vs sociable tests better than thinking about unit vs integration tests. I also encourage folks to think of that as a continuum rather than a hard categorization. Moreover, I recommend that you switch between solitary and sociable tests even within the same test library where one or the other is more effective.

I would recommend organizing tests by functional area first, and only consider separating out integration tests into a separate testing project when it’s advantageous to use a “fast test, slow test” division for more efficient development.
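As one possible sketch of that “fast test, slow test” division without physically separate projects, you could tag the slower tests with an xUnit.Net trait and filter at the command line with dotnet test --filter Category=Integration (the class and test names here are hypothetical):

    public class IssueEndpointTests
    {
        // The trait below lets a command line filter include or
        // exclude this test from a given run
        [Fact]
        [Trait("Category", "Integration")]
        public void hits_the_real_database()
        {
            // ... spin up the IHost, run a scenario, assert ...
        }
    }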

We have formal requirements for test coverage metrics in our continuous integration builds, so I’d definitely make sure that any integration tests count toward that coverage number.

I think an emphasis on always writing classical unit tests can easily create a strong coupling between the production code and your testing code. That can and will reduce your ability to evolve your code, add new functionality, or do performance optimizations without rewriting your tests.

In many cases, integration tests that start at a natural sub-system facade or a logical controller/conductor entry point will do much better for you as a regression safety net to allow you to refactor your code to allow for new behavior or do important performance optimizations.

Case in point, I relied strictly on fine grained unit tests in my early work in StructureMap, and I definitely felt the negative consequences of that approach (I gave a talk about it in 2008 that’s still relevant). With later releases of StructureMap and now with Lamar, I lean much more heavily on integrated acceptance tests (let’s go ahead and call it Behavior Driven Development) that test from the entry point of the library down and focus on user-centric scenarios. I feel like that testing approach has led to much better results in the ease of adding new features, detecting regression defects in automated builds, and allowing me to evolve the functionality of the library.

On the other hand, integration tests can be harder to troubleshoot when they fail because you have more ground to cover. They also run slower of course. If you find that your feedback cycles feel too slow to efficiently run the tests continuously or especially if you find yourself doing long, marathon sessions in your debugger, stop and consider introducing more fine-grained tests first.

Running the Tests Locally vs Remotely

To the previous point, I think it’s critical that developers should be able to easily spin up and run automated integration tests on demand on their local development boxes. Tests will fail, and being able to easily troubleshoot a failing test is a prerequisite for successful test automation. If you can run an integration test locally, you’re much more likely to be able to iterate and try potential fixes quickly. There’s also the very real possibility of attaching a debugger to the testing process.

I think that our current technology set makes it much easier to do integration testing than it was when I was first getting started and the strict Michael Feathers definition of a unit test was in vogue. Just speaking from my own experience, the current .Net 5 generation is very easy to spin up and down in process for automated testing. Docker has been a great way to stand up development environments using Sql Server, Postgresql, Rabbit MQ, and other infrastructural tools.

As a follow up to the previous section, it’s also advantageous for automated tests to be able to run in process with the test harness code. For instance, I’d much rather do Alba testing of HTTP endpoints in .Net 5, where I’m able to quickly spin up an actual web service in memory and shut it down from my testing project, as opposed to the old, full .Net framework where you’d have to run the web service project in IIS or IISExpress first, then use HttpClient to address the service from your unit tests. The first, .Net 5/Alba approach is a much faster iteration and feedback cycle to support a Test Driven Development workflow than the second.

Likewise, when given a choice between tests that can be run locally versus tests that can only be executed in a remote server location, give me the local tests every time. When and if you hit a scenario where you really need to run tests remotely *cough* serverless *cough*, that’s the one and only exception I can think of to my “never deploy from your local development box” rule. If depending on remote execution of tests, I’d at least want the ability to send my local development branch to the remote server at will. Just having a CI server build out pull request branches might get you there of course, but then you might be dependent upon being able to run that test suite in parallel with other CI builds. That’s not a show stopper, but it might make you have to invest more in your build automation to spin up isolated environments on demand.

Apollo Testing!

There’s plenty of debate over what the actual ratio of UI tests to integration tests to unit tests should be, and plenty of folks have different metaphors than a “pyramid” to describe what they think the ratio should be. I happen to like the integration test heavy ratio described in The Testing Trophy and Testing Classifications with the graphic below:

[Image: the testing trophy, from The Testing Trophy and Testing Classifications]

I think his image of the “testing trophy” looks a lot like the command module from the Apollo missions to the moon in the 70’s:

[Image: the Apollo command module]

So from now on, I’m calling our intended testing approach the Apollo Testing Method!

What is the purpose of testing?

I’m just barely old enough that I started my official software development career in old fashioned waterfall models. In those days we did some unit testing with ad hoc testing tools to troubleshoot new code, but it wasn’t anywhere close to what developers do today with Test Driven Development and xUnit tools. As developers, we mostly ran the complete application on our development boxes and stepped through things manually to check out new code locally before throwing things over the wall to QA at the end of the project.

Regardless of whatever ad hoc, local testing developers did of their code locally, the only official testing that actually counted was the purely manual testing done by our testers in the testing environment that was supposed to exactly mimic the production environment (it never quite did, but that’s a story for another day). The QA team strictly used black-box testing with some direct access to the underlying database.

That old black-box testing approach at the end of the project was a much slower feedback cycle than we’re accustomed to today after the advent of Agile Software Development. The killer problem was that the testing feedback cycles were too slow to consider evolutionary design approaches because of the real fear of regression failures. It was also harder as a developer to address defects found in the testing cycle because you were frequently needing to work with code you hadn’t touched in many months that certainly wasn’t fresh in your mind. That frequently led to marathon debugging sessions while a helpful project manager came by your cubicle to cheerfully ask for any updates several times a day and crank up the pressure. As a developer you were also completely at the mercy of QA for information about what was really happening in the system when they found bugs.

In my mind, the most important element of Agile software development overall was the emphasis on improving feedback cycles. Faster feedback allowed us to find and fix problems earlier, while the code involved was still fresh in our minds.

Testing is not about proving that our code works perfectly so much as a way to find and remove enough problems from the code that it can be deployed to production. I think this is an important approach because it allows us to use faster, finer grained testing approaches like isolated unit testing or intermediate level white-box integration tests that are generally faster running and cheaper to build than classic black-box, end to end tests.

What’s Next?

My organization has started an effort to introduce much more integration testing into our development processes as a way of improving quality and throughput. To help out on that, I’m going to attempt to write a series of blog posts going into specific areas about tools and techniques, but for right now I’m just jotting down this stream of consciousness brain dump to get started.

Based on what I think we need to establish at work, I’m thinking to cover:

  • .Net IHost bootstrapping and lifecycle within xUnit.Net or NUnit. I’m much more familiar with xUnit.Net from recent development, but we mostly use NUnit at work so I’ll be trying to cover that base as well.
  • HTTP API testing, which will inevitably feature Alba
  • Dealing with databases in tests, and that’s gonna have to cover both RDBMS databases and probably Mongo Db for now
  • Message handler testing with MassTransit and NServiceBus (we use both in different products)
  • Another brain dump on doing end to end testing after a meeting today about the CI of one of our big systems.
  • How should automated testing be integrated into the development cycle, and who should be responsible for these tests, and why is the obvious answer a very close collaboration between testers, developers, and even business experts
  • This will be a bigger stretch for me, but maybe get into how to do some semi-integration testing of an Angular front end with NgRX.
  • After talking through some of our issues with test automation at work, I think I’d like to blog about some of the positive things we did with Storyteller. I’ve been increasingly frustrated with xUnit.Net (and don’t think NUnit would be much better) for integration testing, so I’ve got quite a few notes about what an alternative tool optimized for integration tests could look like that I wouldn’t mind publishing.

Alba v5.0 is out! Easy recipes for integration testing ASP.Net web services

Alba is a small library to facilitate easier integration testing of web services built on the ASP.Net Core platform. It’s more or less a wrapper around the ASP.Net TestServer, but does a lot to make testing much easier than writing low level code against TestServer. You would use Alba from within xUnit.Net or NUnit testing projects.

I outlined a proposal for Alba V5 a couple weeks back, and today I was able to publish the new Nuget. The big changes are:

  • The Alba documentation website was rebuilt with Vitepress and refreshed to reflect V5 changes
  • The old SystemUnderTest type was renamed AlbaHost to make the terminology more in line with ASP.Net Core
  • The Json serialization is done through the configured input and output formatters within your ASP.Net Core application — meaning that Alba works just fine regardless of whether your application uses Newtonsoft.Json or System.Text.Json for its Json serialization.
  • There is a new extension model that was added specifically to support testing web services that are authenticated with JWT bearer tokens.

For security, you can opt into:

  1. Stubbing out all of the authentication and using hard-coded claims
  2. Adding a valid JWT to every request with user-defined claims
  3. Letting Alba deal with live OIDC integration by handling the JWT integration with your OIDC identity server

Testing web services secured by JWT tokens with Alba v5

We’re working toward standing up a new OIDC infrastructure built around Identity Server 5, with a couple gigantic legacy monolith applications and potentially dozens of newer microservices needing to use this new identity server for authentication. We’ll have to have a good story for running our big web applications that will have this new identity server dependency at development time, but for right now I just want to focus on an automated testing strategy for our newer ASP.Net Core web services using the Alba library.

First off, Alba is a helper library for integration testing HTTP API endpoints in .Net Core systems. Alba wraps the ASP.Net Core TestServer while providing quite a bit of convenient helpers for setting up and verifying HTTP calls against your ASP.Net Core services. We will be shortly introducing Alba into my organization at MedeAnalytics as a way of doing much more integration testing at the API layer (think the middle layer of any kind of testing pyramid concept).

In my previous post I laid out some plans and proposals for a quickly forthcoming Alba v5 release, with the biggest improvement being a new model for being able to stub out OIDC authentication for APIs that are secured by JWT bearer tokens (I think I win programming bingo for that sentence!).

Before I show code, I should say that all of this code is in the v5 branch of Alba on GitHub, but not yet released as it’s very heavily in flight.

To start, I’m assuming that you have a web service project, then a testing library for that web service project. In your web application, bearer token authentication is set up something like this inside your Startup.ConfigureServices() method:

services.AddAuthentication("Bearer")
    .AddJwtBearer("Bearer", options =>
    {
        // A real application would pull all this information from configuration
        // of course, but I'm hardcoding it in testing
        options.Audience = "jwtsample";
        options.ClaimsIssuer = "myapp";
        
        // don't worry about this, our JwtSecurityStub is gonna switch it off in
        // tests
        options.Authority = "https://localhost:5001";
            

        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateAudience = false,
            IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes("some really big key that should work"))
        };
    });

And of course, you also have these lines of code in your Startup.Configure() method to add in ASP.Net Core middleware for authentication and authorization:

app.UseAuthentication();
app.UseAuthorization();

With these lines of setup code, you will not be able to hit any secured HTTP endpoint in your web service unless there is a valid JWT token in the Authorization header of the incoming HTTP request. Moreover, with this configuration your service would need to make calls to the configured bearer token authority (https://localhost:5001 above). It’s going to be awkward and probably very brittle to depend on having the identity server spun up and running locally when our developers try to run API tests. It would obviously be helpful if there was a quick way to stub out the bearer token authentication in testing to automatically supply known claims so our developers can focus on developing their individual service’s functionality.

That’s where Alba v5 comes in with its new JwtSecurityStub extension that will:

  1. Disable any validation interactions with an external OIDC authority
  2. Automatically add a valid JWT token to any request being sent through Alba
  3. Give developers fine-grained control over the claims attached to any specific request if there is logic that will vary by claim values

To demonstrate this new Alba functionality, let’s assume that you have a testing project that has a direct reference to the web service project. The direct project reference is important because you’ll want to spin up the “system under test” in a test fixture like this:

    public class web_api_authentication : IDisposable
    {
        private readonly IAlbaHost theHost;

        public web_api_authentication()
        {
            // This is calling your real web service's configuration
            var hostBuilder = Program.CreateHostBuilder(new string[0]);

            // This is a new Alba v5 extension that can "stub" out
            // JWT token authentication
            var jwtSecurityStub = new JwtSecurityStub()
                .With("foo", "bar")
                .With(JwtRegisteredClaimNames.Email, "guy@company.com");

            // AlbaHost was "SystemUnderTest" in previous versions of
            // Alba
            theHost = new AlbaHost(hostBuilder, jwtSecurityStub);
        }

        // Clean up the AlbaHost when the test class is done
        public void Dispose()
        {
            theHost?.Dispose();
        }
    }

I was using xUnit.Net in this sample, but Alba is agnostic about the actual testing library and we’ll use both NUnit and xUnit.Net at work.

In the code above I’ve bootstrapped the web service with Alba and attached the JwtSecurityStub. I’ve also established some baseline claims that will be added to every JWT token on all Alba scenario requests. The AlbaHost type extends the IHost interface you’re already used to in .Net Core, but adds the important Scenario() method that you can use to run HTTP requests all the way through your entire application stack, like this test:

        [Fact]
        public async Task post_to_a_secured_endpoint_with_jwt_from_extension()
        {
            // Building the input body
            var input = new Numbers
            {
                Values = new[] {2, 3, 4}
            };

            var response = await theHost.Scenario(x =>
            {
                // Alba deals with Json serialization for us
                x.Post.Json(input).ToUrl("/math");
                
                // Enforce that the HTTP Status Code is 200 Ok
                x.StatusCodeShouldBeOk();
            });

            var output = response.ResponseBody.ReadAsJson<Result>();
            output.Sum.ShouldBe(9);
            output.Product.ShouldBe(24);
        }

You’ll notice that I did absolutely nothing in regards to JWT set up or claims or anything. That’s because the JwtSecurityStub is taking care of everything for you. It’s:

  1. Reaching into your application’s bootstrapping to pluck out the right signing key so that it builds JWT token strings that can be validated with the right signature
  2. Turning off any external token validation against an external OIDC authority
  3. Placing a unique, unexpired JWT token on each request that matches the issuer and authority configuration of your application

Now, to further control the claims used on any individual scenario request, you can use this new method in Scenario tests:

        [Fact]
        public async Task can_modify_claims_per_scenario()
        {
            var input = new Numbers
            {
                Values = new[] {2, 3, 4}
            };

            var response = await theHost.Scenario(x =>
            {
                // This is a custom claim that would only be used for the 
                // JWT token in this individual test
                x.WithClaim(new Claim("color", "green"));
                x.Post.Json(input).ToUrl("/math");
                x.StatusCodeShouldBeOk();
            });

            var principal = response.Context.User;
            principal.ShouldNotBeNull();
            
            principal.Claims.Single(x => x.Type == "color")
                .Value.ShouldBe("green");
        }

I’ve got plenty of more ground to cover in how we’ll develop locally with our new identity server strategy, but I’m feeling pretty good about having a decent API testing strategy. All of this code is just barely written, so any feedback you might have would be very timely. Thanks for reading about some brand new code!

Proposal for Alba v5: JWT secured APIs, more JSON options, and other stuff

Just typed up a master GitHub issue for Alba v5, and I’ll take any possible feedback anywhere, but I’d prefer you make requests on GitHub. Here’s the GitHub issue I just banged out copied and pasted for anybody interested:

I’m dipping into Alba over the next couple work days to add some improvements that we want at MedeAnalytics to test HTTP APIs that are going to be secured with JWT Bearer Token authentication that’s ultimately backed up by IdentityServer5. I’ve done some proof of concepts off to the side on how to deal with that in Alba tests with or without the actual identity server up and running, so I know it’s possible.

However, when I started looking into how to build a more formal “JWT Token” add on to Alba and catching up with GitHub issues on Alba, it’s led me to the conclusion that it’s time for a little overhaul of Alba as part of the JWT work that’s turning into a proposed Alba v5 release.

Here’s what I’m thinking:

  • Ditch all support < .Net 5.0 just because that makes things easier on me maintaining Alba. Selfishness FTW.
  • New Alba extension model, more on this below
  • Revamp the Alba bootstrapping so that it’s just an extension method hanging off the end of IHostBuilder that will return a new ITestingHost interface wrapping the existing SystemUnderTest class. More on this below.
  • Decouple Alba’s JSON facilities from Newtonsoft.Json. A couple folks have asked if you can use System.Text.Json with Alba so far, so this time around we’ll try to dig into the application itself and find the JSON formatter from the system and make all the JSON serialization “just” use that instead of recreating anything
  • Enable easy testing of gRPC endpoints like we do for classic JSON services

DevOps changes

  • Replace the Rake build script with Bullseye
  • Move the documentation website from stdocs to VitePress with a similar set up as the new Marten website
  • Retire the existing AppVeyor CI (that works just fine), and move to GitHub actions for CI, but add a new action for publishing Nugets

Bootstrapping Changes

Today you use code like this to bootstrap Alba against your ASP.Net Core application:

var system = new SystemUnderTest(WebApi.Program.CreateHostBuilder(new string[0]));

where Alba’s SystemUnderTest intercepts an IHostBuilder, adds some additional configuration, swaps out Kestrel for the ASP.Net Core TestServer, and starts the underlying IHost.

In Alba v5, I’d like to change this a little bit to being an extension method hanging off of IHostBuilder so you’d have something more like this:

var host = WebApi.Program.CreateHostBuilder(new string[0]).StartAlbaHost();

// or

var host = await WebApi.Program.CreateHostBuilder(new string[0]).StartAlbaHostAsync();

The host above would be this new interface (still the old SystemUnderTest under the covers):

public interface IAlbaHost : IHost
{
    Task<ScenarioResult> Scenario(Action<Scenario> configure);

    // various methods to register actions to run before or after any scenario is executed
}

Extension Model

Right now the only extension I have in mind is one for JWT tokens, but it probably goes into a separate Nuget, so here you go.

public interface ITestingHostExtension : IDisposable, IAsyncDisposable
{
    // Make any necessary registrations to the actual application
    IHostBuilder Configure(IHostBuilder builder);

    // Spin it up if necessary before any scenarios are run, also gives
    // an extension a chance to add any BeforeEach() / AfterEach() kind
    // of hooks to the Scenario or HttpContext
    ValueTask Start(ITestingHost host);
   
}

My thought is that ITestingHostExtension objects would be passed into the StartAlbaHost(params ITestingHostExtension[] extensions) method from above.

JWT Bearer Token Extension

This will probably need to be in a separate Alba.JwtTokens Nuget library.

  • Extension method on Scenario to add a bearer token to the Authorization header. Small potatoes.
  • Helper of some sort to fetch JWT tokens from an external identity server and use that token on further scenario runs. I’ve got this prototyped already
  • a new Alba extension that acts as a stubbed identity server. More below:

For the stubbed in identity server, what I’m thinking is that a registered IdentityServerStub extension would:

  • Spin up a small fake OIDC web application hosted in memory w/ Kestrel that responds to the basic endpoints to validate JWT tokens.
  • With this extension, Alba will quietly stick a JWT token into the Authorization header of each request so you can test API endpoints that are secured by bearer tokens
  • Reaches into the system under test’s IHostBuilder and sets the authority url to the local Url of the stubbed OIDC server (I’ve already successfully prototyped this one)
  • Also reaches into the system under test to find the JwtBearerOptions as configured and uses the signing keys that are already configured for the application for the JWT generation. That should make the setup of this quite a bit simpler
  • The new JWT stub extension can be configured with baseline claims that would be part of any generated JWT
  • A new extension method on Scenario to add or override claims in the JWT tokens on a per-scenario basis to support fine-grained authorization testing in your API tests
  • An extension method on Scenario to disable the JWT header altogether so you can test out authentication failures
  • Probably need a facility of some sort to override the JWT expiration time as well

Using Alba to Test ASP.Net Services

TL;DR: Alba is a handy library for integration tests against ASP.Net web services, and the rapid feedback that enables is very useful

One of our projects at Calavista right now is helping a client modernize and optimize a large .Net application, with the end goal being everything running on .Net 5 and an order of magnitude improvement in system throughput. As part of the effort to upgrade the web services, I took on a task to replace this system’s usage of IdentityServer3 with IdentityServer4, but still use the existing Marten-backed data storage for user membership information.

Great, but there’s just one problem. I’ve never used IdentityServer4 before and it changed somewhat between the IdentityServer3 code I was trying to reverse engineer and its current model. I ended up getting through that work just fine. A key element of doing that was using the Alba library to create a test harness so I could iterate through configuration changes quickly by rerunning tests on the new IdentityServer4 project. It didn’t start out this way, but Alba is essentially a wrapper around the ASP.Net TestServer and just acts as a utility to make it easier to write automated tests around the HTTP services in your web service projects.

I ended up starting two new .Net projects:

  1. A new web service that hosts IdentityServer4 and is configured to use user membership information from our client’s existing Marten/Postgresql database
  2. A new xUnit.Net project to hold integration tests against the new IdentityServer4 web service.

Let’s dive right into how I set up Alba and xUnit.Net as an automated test harness for our new IdentityServer4 service. If you start a new ASP.Net project with one of the built in project templates, you’ll get a Program file that’s the main entry point for the application and a Startup class that has most of the system’s bootstrapping configuration. The templates will generate this method that’s used to configure the IHostBuilder for the application:

public static IHostBuilder CreateHostBuilder(string[] args)
{
    return Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
}

For more information on what role of the IHostBuilder is within your application, see .NET Generic Host in ASP.NET Core.

That’s important, because that gives us the ability to stand up the application exactly as it’s configured in an automated test harness. Switching to the new xUnit.Net test project, I referenced my new web service project that will host IdentityServer4. Because spinning up your ASP.Net system can be relatively expensive, I only want to do that once and share the IHost between tests. That’s a perfect usage for xUnit.Net’s shared context support.

First I make what will be the shared test fixture context class for the integration tests shown below:

public class AppFixture : IDisposable
{
    public AppFixture()
    {
        // This is calling into the actual application's
        // configuration to stand up the application in 
        // memory
        var builder = Program.CreateHostBuilder(new string[0]);

        // This is the Alba "SystemUnderTest" wrapper
        System = new SystemUnderTest(builder);
    }
    
    public SystemUnderTest System { get; }

    public void Dispose()
    {
        System?.Dispose();
    }
}

The Alba SystemUnderTest wrapper is responsible for building the actual IHost object for your system, and does so using the in memory TestServer in place of Kestrel.
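If you were to do the same thing without Alba, the raw mechanics would look roughly like this sketch using the Microsoft.AspNetCore.TestHost package:

    // Roughly what Alba does under the covers: swap Kestrel out
    // for the in memory TestServer, then start the IHost
    var host = await Program.CreateHostBuilder(new string[0])
        .ConfigureWebHost(x => x.UseTestServer())
        .StartAsync();

    // An HttpClient wired directly to the in memory TestServer
    var client = host.GetTestClient();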

Just as a convenience, I like to create a base class for integration tests I tend to call IntegrationContext:

public abstract class IntegrationContext 
    // This is some xUnit.Net mechanics
    : IClassFixture<AppFixture>
{
    public IntegrationContext(AppFixture fixture)
    {
        System = fixture.System;
        
        // Just casting the IServiceProvider of the underlying
        // IHost to a Lamar IContainer as a convenience
        Container = (IContainer)fixture.System.Services;
    }
    
    public IContainer Container { get; }
    
    // This gives the tests access to run
    // Alba "Scenarios"
    public ISystemUnderTest System { get; }
}

We’re using Lamar as the underlying IoC container in this application, and I wanted to use Lamar-specific IoC diagnostics in the tests, so I expose the main Lamar container off of the base class as just a convenience.
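As a quick illustration of the kind of diagnostics I mean, a test like this sketch can verify the container configuration up front (WhatDoIHave() and AssertConfigurationIsValid() are both built into Lamar, but this particular test isn't from the actual project):

[Fact]
public void lamar_container_diagnostics()
{
    // Writes out a textual report of every service registration
    // in the application's container
    var report = Container.WhatDoIHave();

    // Throws with a descriptive message if any registration
    // cannot actually be built
    Container.AssertConfigurationIsValid();
}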

To finally turn to the tests, the very first thing to try with IdentityServer4 was hitting its descriptive discovery endpoint just to see if the application was bootstrapping correctly and IdentityServer4 was functional *at all*. I started a new test class with this declaration:

public class EndToEndTests : IntegrationContext
{
    private readonly ITestOutputHelper _output;
    private IDocumentStore theStore;

    public EndToEndTests(AppFixture fixture, ITestOutputHelper output) : base(fixture)
    {
        _output = output;

        // I'm grabbing the Marten document store for the app
        // to set up user information later
        theStore = Container.GetInstance<IDocumentStore>();
    }

And then a new test just to exercise the discovery endpoint:

[Fact]
public async Task can_get_the_discovery_document()
{
    var result = await System.Scenario(x =>
    {
        x.Get.Url("/.well-known/openid-configuration");
        x.StatusCodeShouldBeOk();
    });
    
    // Just checking out what's returned
    _output.WriteLine(result.ResponseBody.ReadAsText());
}

The test above is pretty crude. All it does is hit the `/.well-known/openid-configuration` URL in the application and verify that it returns a 200 OK HTTP status code.

I tend to run tests while I’m coding by using keyboard shortcuts. Most IDEs support some kind of “re-run the last test” keyboard shortcut. Using that, my preferred workflow is to run the test once, then assuming that the test is failing the first time, work in a tight cycle of making changes and constantly re-running the test(s). This turned out to be invaluable as it took me a couple iterations of code changes to correctly re-create the old IdentityServer3 configuration into the new IdentityServer4 configuration.

Moving on to a simple authentication, I wrote a test like this one to exercise the system with known credentials:

[Fact]
public async Task get_a_token()
{
    // All I'm doing here is wiping out the existing database state
    // with Marten and using a little helper method to add a new
    // user to the database as part of the test setup
    // This would obviously be different in your system
    theStore.Advanced.Clean.DeleteAllDocuments();
    await theStore
        .CreateUser("aquaman@justiceleague.com", "dolphin", "System Administrator");

    var body =
        "client_id=something&client_secret=something&grant_type=password&scope=ourscope%20profile%20openid&username=aquaman@justiceleague.com&password=dolphin";

    // The test would fail here if the status
    // code was NOT 200
    var result = await System.Scenario(x =>
    {
        x.Post
            .Text(body)
            .ContentType("application/x-www-form-urlencoded")
            .ToUrl("/connect/token");
    });

    // As a convenience, I made a new class called ConnectData
    // just to deserialize the IdentityServer4 response into 
    // a .Net object
    var token = result.ResponseBody.ReadAsJson<ConnectData>();
    token.access_token.Should().NotBeNull();
    token.token_type.Should().Be("Bearer");
}

public class ConnectData
{
    public string access_token { get; set; }
    public int expires_in { get; set; }
    public string token_type { get; set; }
    public string scope { get; set; }
}
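The CreateUser() helper above isn’t part of Marten itself, just a little test data builder in our test project. A minimal sketch of that kind of helper might look like the following, where the User document type, the plain-text password, and the role property are all simplified placeholders for the client’s actual membership documents:

public class User
{
    public Guid Id { get; set; }
    public string UserName { get; set; }
    public string Password { get; set; }
    public string Role { get; set; }
}

public static class UserTestData
{
    public static async Task CreateUser(
        this IDocumentStore store,
        string userName,
        string password,
        string role)
    {
        // Open a lightweight Marten session just long enough
        // to persist the new user document
        using (var session = store.LightweightSession())
        {
            session.Store(new User
            {
                UserName = userName,
                // The real system stores a proper password hash
                Password = password,
                Role = role
            });

            await session.SaveChangesAsync();
        }
    }
}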

Now, this test took me several iterations to work through. I iterated on exactly the right way to configure IdentityServer4 and adjusted our custom Marten-backed identity store (IResourceOwnerPasswordValidator and IProfileService in IdentityServer4 terms) until the tests passed. I found it extremely valuable to be able to debug right into the failing tests as I worked, and I even needed to take advantage of JetBrains Rider’s capability to debug through external code to understand how IdentityServer4 itself worked. I’m very sure I got through this work much faster by iterating through tests than I would have by running the application and driving it through something like Postman or the connected user interface.
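Just to give you a rough idea of that integration, here’s a heavily simplified sketch of a Marten-backed IResourceOwnerPasswordValidator using the hypothetical User document from the earlier sketch. The real implementation hashes passwords and maps far more claims:

public class MartenPasswordValidator : IResourceOwnerPasswordValidator
{
    private readonly IDocumentStore _store;

    public MartenPasswordValidator(IDocumentStore store)
    {
        _store = store;
    }

    public async Task ValidateAsync(ResourceOwnerPasswordValidationContext context)
    {
        // Query Marten for a user matching the supplied user name
        using (var session = _store.QuerySession())
        {
            var user = await session.Query<User>()
                .Where(x => x.UserName == context.UserName)
                .FirstOrDefaultAsync();

            if (user != null && user.Password == context.Password)
            {
                // Tell IdentityServer4 the credentials are valid
                context.Result = new GrantValidationResult(
                    subject: user.Id.ToString(),
                    authenticationMethod: "password");
            }
            else
            {
                context.Result = new GrantValidationResult(
                    TokenRequestErrors.InvalidGrant);
            }
        }
    }
}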

Using Alba for Integration Testing ASP.Net Core Web Services

There’s a video on YouTube of David Giard and me talking about Alba at CodeMash a couple years ago, if you’re interested.

Alba is a helper library for doing automated testing against ASP.Net Core web services. Alba started out as a subsystem of the defunct FubuMVC project for HTTP contract testing, but was later extracted into its own project distributed via Nuget, ported to ASP.Net Core, and finally re-wired to incorporate TestServer internally.

There are certainly other advantages, but I think the biggest selling point of adopting ASP.Net Core is the ability to run HTTP requests through your system completely in memory without any other kind of external web server setup. In my experience, this has made automated integration testing of ASP.Net Core applications far simpler compared to older versions of ASP.Net.

Years ago we could have used FubuMVC’s OWIN support (with or without Katana) to pull off the same kind of automated testing I’m demonstrating here, but hardly anyone ever used that, and what I’m showing here is significantly easier to use.

Why not just use TestServer, you ask? You certainly can, but Alba provides a lot of helper functionality for exercising HTTP web services that will make your tests much more readable and remove a lot of the repetitive coding you’d otherwise have to do with TestServer alone.
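To make that concrete, here’s roughly the TestServer-plus-HttpClient equivalent of the one-line System.GetAsJson<Answers>("/math/3/4") call you’ll see later in this post (a sketch that assumes Newtonsoft.Json for the deserialization):

// "server" is the ASP.Net Core TestServer hosting the application
var client = server.CreateClient();

var response = await client.GetAsync("/math/3/4");
response.EnsureSuccessStatusCode();

var json = await response.Content.ReadAsStringAsync();
var answers = JsonConvert.DeserializeObject<Answers>(json);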

To demonstrate Alba, I published a small solution called AlbaIntegrationTesting to GitHub this morning.

Starting with a small web services application generated with the dotnet new webapi template, I added a small endpoint that we’ll later write specifications against:

public class Answers
{
    public int Product { get; set; }
    public int Sum { get; set; }
}

public class AdditionController : ControllerBase
{
    [HttpGet("math/{one}/{two}")]
    public Answers DoMath(int one, int two)
    {
        return new Answers
        {
            Product = one * two,
            Sum = one + two
        };
    }
}
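So a GET to the url /math/3/4 should return a JSON body shaped like this (assuming the webapi template’s default camel-cased serialization):

{"product":12,"sum":7}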

Next, I added a new test project using xUnit.Net with the dotnet new xunit template. With the new skeleton testing project I:

  • Added a reference to the latest Alba Nuget
  • My preference for testing assertions in .Net is Shouldly, so I added a Nuget reference for that as well (the command line equivalents are shown right after this list)
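From the command line, adding those package references looks something like this (grab whatever versions are current):

dotnet add package Alba
dotnet add package Shouldly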

Since bootstrapping an ASP.Net Core system is non-trivial in terms of performance, I utilized xUnit.Net’s support for sharing context between tests so that the application only has to be spun up once in the test suite. The first step of that is to create a new “fixture” class that holds the ASP.Net Core system in memory like so:

public class AppFixture : IDisposable
{
    public AppFixture()
    {
        // Use the application configuration the way that it is in the real application
        // project
        var builder = Program.CreateHostBuilder(new string[0])
            
            // You may need to do this for any static 
            // content or files in the main application including
            // appsettings.json files
            
            // DirectoryFinder is an Alba helper
            .UseContentRoot(DirectoryFinder.FindParallelFolder("WebApplication")) 
            
            // Override the hosting environment to "Testing"
            .UseEnvironment("Testing"); 

        // This is the Alba scenario wrapper around
        // TestServer and an active .Net Core IHost
        System = new SystemUnderTest(builder);

        // There's also a BeforeEachAsync() signature
        System.BeforeEach(httpContext =>
        {
            // Take any kind of setup action before
            // each simulated HTTP request
            
            // In this case, I'm setting a fake JWT token on each request
            // as a demonstration
            httpContext.Request.Headers["Authorization"] = $"Bearer {generateToken()}";
        });

        System.AfterEach(httpContext =>
        {
            // Take any kind of teardown action after
            // each simulated HTTP request
        });

    }

    private string generateToken()
    {
        // In a current project, we implement this method
        // to create a valid JWT token with the claims that
        // the web services require
        return "Fake";
    }

    public SystemUnderTest System { get; }

    public void Dispose()
    {
        System?.Dispose();
    }
}

This new AppFixture class will only be built once by xUnit.Net and shared between test classes through the IClassFixture<AppFixture> mechanism.

Do note that you can express some actions in Alba to take immediately before or after executing an HTTP request through your system for typical setup or teardown operations. In one of my current projects, we exploit this capability to add a pre-canned JWT to the request headers that’s required by our system. In our case, that’s allowing us to test the security integration and the multi-tenancy support that depends on the JWT claims at the same time we’re exercising controller actions and the associated database access underneath it. If you’re familiar with the test pyramid idea, these tests are the middle layers of our testing pyramid.
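For reference, a real generateToken() implementation might build a signed JWT with the System.IdentityModel.Tokens.Jwt library along these lines. The signing key, issuer, audience, and claims below are placeholders rather than what our project actually uses:

private string generateToken()
{
    // Must match the signing key the application's JWT
    // authentication is configured to validate against
    var key = new SymmetricSecurityKey(
        Encoding.UTF8.GetBytes("a-test-signing-key-of-sufficient-length"));

    var credentials = new SigningCredentials(key, SecurityAlgorithms.HmacSha256);

    var token = new JwtSecurityToken(
        issuer: "testing",
        audience: "testing",
        claims: new[] { new Claim("tenant_id", "tenant1") },
        expires: DateTime.UtcNow.AddHours(1),
        signingCredentials: credentials);

    return new JwtSecurityTokenHandler().WriteToken(token);
}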

To simplify the xUnit.Net usage throughout the testing suite, I like to introduce a base class that inevitably accumulates utility methods for running Alba scenarios or common database setup and teardown functions. I tend to call this class IntegrationContext, and here’s a sample:

public abstract class IntegrationContext : IClassFixture<AppFixture>
{
    protected IntegrationContext(AppFixture fixture)
    {
        Fixture = fixture;
    }

    public AppFixture Fixture { get; }

    /// <summary>
    /// Runs Alba HTTP scenarios through your ASP.Net Core system
    /// </summary>
    /// <param name="configure"></param>
    /// <returns></returns>
    protected Task<IScenarioResult> Scenario(Action<Scenario> configure)
    {
        return Fixture.System.Scenario(configure);
    }

    // The Alba system
    protected SystemUnderTest System => Fixture.System;

    // Just a convenience because you use it pretty often
    // in tests to get at application services
    protected IServiceProvider Services => Fixture.System.Services;

}

Now with the AppFixture and IntegrationContext in place, let’s write some specifications against the AdditionController endpoint shown earlier in this post:

public class DoMathSpecs : IntegrationContext
{
    public DoMathSpecs(AppFixture fixture) : base(fixture)
    {
    }

    // This specification uses the shorthand helpers in Alba
    // that are useful when you really only care about the data
    // going in or out of the HTTP endpoint
    [Fact]
    public async Task do_some_math_adds_and_multiplies_shorthand()
    {
        var answers = await System.GetAsJson<Answers>("/math/3/4");
        
        answers.Sum.ShouldBe(7);
        answers.Product.ShouldBe(12);
    }
    
    // This specification shows the longhand way of executing an
    // Alba scenario and using some of its declarative assertions
    // about the expected HTTP response
    [Fact]
    public async Task do_some_math_adds_and_multiplies_longhand()
    {
        var result = await Scenario(x =>
        {
            x.Get.Url("/math/3/4");
            x.ContentTypeShouldBe("application/json; charset=utf-8");
        });

        var answers = result.ResponseBody.ReadAsJson<Answers>();

        answers.Sum.ShouldBe(7);
        answers.Product.ShouldBe(12);
    }
}
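Alba scenarios can also make declarative assertions beyond the response body. As one more sketch, here’s a test expecting a 404 from a route that doesn’t exist, where StatusCodeShouldBe() overrides Alba’s default expectation of a 200:

[Fact]
public async Task get_a_404_on_a_nonexistent_route()
{
    await Scenario(x =>
    {
        x.Get.Url("/this/route/does/not/exist");
        x.StatusCodeShouldBe(404);
    });
}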

Alba can do a lot more to work with the HTTP requests and responses, but I hope this gives you a quick introduction to using Alba for integration testing.

It’s an OSS Nuget Release Party! (Jasper v1.0, Lamar, Alba, Oakton)

My bandwidth for OSS work has been about zero for the past couple months. With the COVID-19 measures drastically reducing my commute and the time spent driving kids to school, getting to actually use some of my projects at work, and a threat from an early adopter to give up on Jasper if I didn’t get something out soon, I suddenly went active again and cleared out quite a few backlogged issues, pull requests, and bug fixes.

My main focus in terms of OSS development for the past 3-4 years has been a big project called “Jasper” that was originally going to be a modernized .Net Core successor to FubuMVC. Just by way of explanation, Jasper, MO is my ancestral hometown and all the other projects I’m mentioning here are named after either other small towns or landmarks around the titular “Jasper.”

Alba

Alba is a library for HTTP contract testing with ASP.Net Core endpoints. It does quite a bit to reduce the repetitive code required to use TestServer by itself, and it makes your tests much more declarative and intention revealing. Alba v3.1.1 was released a couple weeks ago to address a problem with exposing the application’s IServiceProvider from the Alba SystemUnderTest. Fortunately, for once, I caught this one myself while dogfooding Alba on a project at work.

Alba originated with the code we used to test FubuMVC HTTP behavior back in the day, but was modernized to work with ASP.Net Core instead of OWIN, and later retrofitted to just use TestServer under the covers.

Baseline

Baseline is a grab bag of utility code and extension methods on common .Net types that you can’t believe are missing from the BCL, most of it originally descended from FubuCore.

As an ancillary project, I ripped the type scanning and assembly finding code out of Lamar into a separate BaselineTypeDiscovery Nuget that’s used by most of the other libraries in this post. There was a pretty significant pull request in the latest BaselineTypeDiscovery v1.1.0 release that should improve application startup time for folks who use Lamar to discover assemblies in their application.

Oakton

Oakton is a command line parsing library for .Net that was originally lifted from FubuCore and is now used by Jasper. Oakton v2.0.4 and Oakton.AspNetCore v2.1.3 just upgrade the assembly discovery features to use the newer BaselineTypeDiscovery release above.

Lamar

Lamar is a modern, fast, ASP.Net Core compliant IoC container and the successor to StructureMap. I had let a pretty good backlog of issues and pull requests amass, so I took some time yesterday to burn that down, and the result is Lamar v4.2. This release upgrades the type scanning, fixes some bugs, and adds quite a few fixes to the documentation website.

Jasper

Jasper at this point is a command executor a la MediatR (but much more ambitious) and a lightweight messaging framework, though the external messaging will mature much more in subsequent releases.
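For a little flavor of that command execution model, Jasper discovers handlers by naming convention rather than interfaces or base classes. A sketch with a made-up message type:

// The command message is just a POCO
public class CreateInvoice
{
    public string CustomerId { get; set; }
}

// No interface required; Jasper finds this class through its
// "Handler" suffix and Handle() method naming conventions
public class CreateInvoiceHandler
{
    public void Handle(CreateInvoice command)
    {
        Console.WriteLine($"Creating an invoice for customer {command.CustomerId}");
    }
}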

This feels remarkably anti-climactic seeing as how it’s been my main focus for years, but I pushed Jasper v1.0 today, specifically for some early adopters. The documentation is updated here. There’s also an up to date repository of samples that should grow. I’ll make a much bigger deal out of Jasper when I make the v1.1 or v2.0 release sometime after the Coronavirus has receded and my bandwidth slash ambition level is higher. For right now I’m just wanting to get some feedback from early users and let them beat it up.

Marten

There’s nothing new to say from me about Marten here except that my focus on Jasper has kept me from contributing too much to Marten. With Jasper v1.0 out, I’ll shortly (and finally) turn my attention to helping with the long planned Marten v4 release. For a little bit of synergy, part of my plans there is to use Jasper for some of the advanced Marten event store functionality we’re planning.