
Using Alba to Test ASP.Net Services

TL;DR: Alba is a handy library for integration tests against ASP.Net web services, and the rapid feedback it enables is very useful

One of our projects at Calavista right now is helping a client modernize and optimize a large .Net application, with the end goal being everything running on .Net 5 and an order of magnitude improvement in system throughput. As part of the effort to upgrade the web services, I took on a task to replace this system’s usage of IdentityServer3 with IdentityServer4, but still use the existing Marten-backed data storage for user membership information.

Great, but there’s just one problem. I’d never used IdentityServer4 before, and IdentityServer changed somewhat between the IdentityServer3 code I was trying to reverse engineer and its current IdentityServer4 model. I ended up getting through that work just fine. A key element of doing that was using the Alba library to create a test harness so I could iterate through configuration changes quickly by rerunning tests on the new IdentityServer4 project. It didn’t start out this way, but Alba is essentially a wrapper around the ASP.Net Core TestServer and just acts as a utility to make it easier to write automated tests against the HTTP services in your web service projects.

I ended up starting two new .Net projects:

  1. A new web service that hosts IdentityServer4 and is configured to use user membership information from our client’s existing Marten/Postgresql database
  2. A new xUnit.Net project to hold integration tests against the new IdentityServer4 web service.

Let’s dive right into how I set up Alba and xUnit.Net as an automated test harness for our new IdentityServer4 service. If you start a new ASP.Net project with one of the built-in project templates, you’ll get a Program file that’s the main entry point for the application and a Startup class that has most of the system’s bootstrapping configuration. The templates will generate this method that’s used to configure the IHostBuilder for the application:

public static IHostBuilder CreateHostBuilder(string[] args)
{
    return Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
}

For more information on the role of the IHostBuilder within your application, see .NET Generic Host in ASP.NET Core.

That’s important, because that gives us the ability to stand up the application exactly as it’s configured in an automated test harness. Switching to the new xUnit.Net test project, I referenced my new web service project that will host IdentityServer4. Because spinning up your ASP.Net system can be relatively expensive, I only want to do that once and share the IHost between tests. That’s a perfect usage for xUnit.Net’s shared context support.

First I make what will be the shared test fixture context class for the integration tests shown below:

public class AppFixture : IDisposable
{
    public AppFixture()
    {
        // This is calling into the actual application's
        // configuration to stand up the application in 
        // memory
        var builder = Program.CreateHostBuilder(new string[0]);

        // This is the Alba "SystemUnderTest" wrapper
        System = new SystemUnderTest(builder);
    }

    public SystemUnderTest System { get; }

    public void Dispose()
    {
        System?.Dispose();
    }
}

The Alba SystemUnderTest wrapper is responsible for building the actual IHost object for your system, and does so using the in memory TestServer in place of Kestrel.

Just as a convenience, I like to create a base class for integration tests I tend to call IntegrationContext:

    public abstract class IntegrationContext 
        // This is some xUnit.Net mechanics
        : IClassFixture<AppFixture>
    {
        public IntegrationContext(AppFixture fixture)
        {
            System = fixture.System;

            // Just casting the IServiceProvider of the underlying
            // IHost to a Lamar IContainer as a convenience
            Container = (IContainer)fixture.System.Services;
        }

        public IContainer Container { get; }

        // This gives the tests access to run
        // Alba "Scenarios"
        public ISystemUnderTest System { get; }
    }

We’re using Lamar as the underlying IoC container in this application, and I wanted to use Lamar-specific IoC diagnostics in the tests, so I expose the main Lamar container off of the base class as just a convenience.
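
Just as an illustration of why that’s handy, here’s a minimal sketch (the test class and its body are mine for illustration, not from the client project) of a diagnostic test that uses Lamar’s WhatDoIHave() report and configuration assertions through the exposed container:

public class ContainerDiagnostics : IntegrationContext
{
    private readonly ITestOutputHelper _output;

    public ContainerDiagnostics(AppFixture fixture, ITestOutputHelper output) : base(fixture)
    {
        _output = output;
    }

    [Fact]
    public void container_registrations_are_valid()
    {
        // Writes a report of every known service registration, which is handy
        // when you're chasing down a missing or misconfigured registration
        _output.WriteLine(Container.WhatDoIHave());

        // Asks Lamar to try to build out every registered service
        // and fail fast if anything can't be resolved
        Container.AssertConfigurationIsValid();
    }
}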

To finally turn to the tests, the very first thing to try with IdentityServer4 was to hit its descriptive discovery endpoint just to see if the application was bootstrapping correctly and IdentityServer4 was functional *at all*. I started a new test class with this declaration:

    public class EndToEndTests : IntegrationContext
    {
        private readonly ITestOutputHelper _output;
        private IDocumentStore theStore;

        public EndToEndTests(AppFixture fixture, ITestOutputHelper output) : base(fixture)
        {
            _output = output;

            // I'm grabbing the Marten document store for the app
            // to set up user information later
            theStore = Container.GetInstance<IDocumentStore>();
        }
    }

And then a new test just to exercise the discovery endpoint:

[Fact]
public async Task can_get_the_discovery_document()
{
    var result = await System.Scenario(x =>
    {
        x.Get.Url("/.well-known/openid-configuration");
        x.StatusCodeShouldBeOk();
    });

    // Just checking out what's returned
    _output.WriteLine(result.ResponseBody.ReadAsText());
}

The test above is pretty crude. All it does is try to hit the `/.well-known/openid-configuration` url in the application and see that it returns a 200 OK HTTP status code.

I tend to run tests while I’m coding by using keyboard shortcuts. Most IDEs support some kind of “re-run the last test” keyboard shortcut. Using that, my preferred workflow is to run the test once, then assuming that the test is failing the first time, work in a tight cycle of making changes and constantly re-running the test(s). This turned out to be invaluable as it took me a couple iterations of code changes to correctly re-create the old IdentityServer3 configuration into the new IdentityServer4 configuration.

Moving on to doing a simple authentication, I wrote a test like this one to exercise the system with known credentials:

[Fact]
public async Task get_a_token()
{
    // All I'm doing here is wiping out the existing database state
    // with Marten and using a little helper method to add a new
    // user to the database as part of the test setup
    // This would obviously be different in your system
    await theStore
        .CreateUser("aquaman@justiceleague.com", "dolphin", "System Administrator");

    // The client id, secret, and scopes here are placeholders and would
    // come from your own IdentityServer configuration
    var body =
        "client_id=your_client_id&client_secret=your_client_secret&grant_type=password" +
        "&username=aquaman@justiceleague.com&password=dolphin";

    // The test would fail here if the status
    // code was NOT 200
    var result = await System.Scenario(x =>
    {
        x.Post
            .Text(body)
            .ContentType("application/x-www-form-urlencoded")
            .ToUrl("/connect/token");
    });

    // As a convenience, I made a new class called ConnectData
    // just to deserialize the IdentityServer4 response into 
    // a .Net object
    var token = result.ResponseBody.ReadAsJson<ConnectData>();
    _output.WriteLine(token.access_token);
}

public class ConnectData
{
    public string access_token { get; set; }
    public int expires_in { get; set; }
    public string token_type { get; set; }
    public string scope { get; set; }
}

Now, this test took me several iterations to work through as I found exactly the right way to configure IdentityServer4 and adjusted our custom Marten-backed identity store (IResourceOwnerPasswordValidator and IProfileService in IdentityServer4 world) until the tests passed. I found it extremely valuable to be able to debug right into the failing tests as I worked, and I even needed to take advantage of JetBrains Rider’s capability to debug through external code to understand how IdentityServer4 itself worked. I’m very sure that I was able to get through this work much faster by iterating through tests as opposed to just trying to run the application and driving it through something like Postman or through the connected user interface.

Using Alba for Integration Testing ASP.Net Core Web Services

There’s a video of David Giard and me talking about Alba at CodeMash a couple of years ago on YouTube if you’re interested.

Alba is a helper library for doing automated testing against ASP.Net Core web services. Alba started out as a subsystem of the defunct FubuMVC project for HTTP contract testing, but was later extracted into its own project distributed via Nuget, ported to ASP.Net Core, and finally re-wired to incorporate TestServer internally.

There are certainly other advantages, but I think the biggest selling point of adopting ASP.Net Core is the ability to run HTTP requests through your system completely in memory without any other kind of external web server setup. In my experience, this has made automated integration testing of ASP.Net Core applications far simpler compared to older versions of ASP.Net.

We could do the same kind of automated testing that I’m demonstrating here years ago with FubuMVC’s OWIN support, with or without Katana, but hardly anyone ever used that, and what I’m showing here is significantly easier to use.

Why not just use TestServer, you ask? You certainly can, but Alba provides a lot of helper functionality around exercising HTTP web services that will make your tests much more readable and remove a lot of the repetitive coding you’d have to do with TestServer alone.
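
To make that concrete, here’s a rough sketch of the same happy path check written with TestServer and HttpClient by hand versus an Alba scenario. The method is purely illustrative, and the “/math/3/4” url refers to the sample endpoint introduced just below:

public async Task the_same_check_both_ways(SystemUnderTest system)
{
    // TestServer alone: build the server, create an HttpClient, then
    // handle the status code checks and JSON deserialization yourself
    using (var server = new TestServer(new WebHostBuilder().UseStartup<Startup>()))
    using (var client = server.CreateClient())
    {
        var response = await client.GetAsync("/math/3/4");
        response.EnsureSuccessStatusCode();
        var answers = JsonConvert.DeserializeObject<Answers>(
            await response.Content.ReadAsStringAsync());
    }

    // The equivalent Alba scenario folds the status code assertion and
    // the JSON deserialization into a couple of declarative lines
    var result = await system.Scenario(x =>
    {
        x.Get.Url("/math/3/4");
        x.StatusCodeShouldBeOk();
    });
    var answers2 = result.ResponseBody.ReadAsJson<Answers>();
}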

To demonstrate Alba, I published a small solution called AlbaIntegrationTesting to GitHub this morning.

Starting with a small web services application generated with the dotnet new webapi template, I added a small endpoint that we’ll later write specifications against:

public class Answers
{
    public int Product { get; set; }
    public int Sum { get; set; }
}

// Route attributes assumed here so the endpoint matches the "/math/3/4" url used later
[ApiController, Route("math")]
public class AdditionController : ControllerBase
{
    [HttpGet("{one}/{two}")]
    public Answers DoMath(int one, int two)
    {
        return new Answers
        {
            Product = one * two,
            Sum = one + two
        };
    }
}

Next, I added a new test project using xUnit.Net with the dotnet new xunit template. With the new skeleton testing project I:

  • Added a reference to the latest Alba Nuget
  • My preference for testing assertions in .Net is Shouldly, so I added a Nuget reference for that as well

Since bootstrapping an ASP.Net Core system is non-trivial in terms of performance, I utilized xUnit.Net’s support for sharing context between tests so that the application only has to be spun up once in the test suite. The first step of that is to create a new “fixture” class that holds the ASP.Net Core system in memory like so:

public class AppFixture : IDisposable
{
    public AppFixture()
    {
        // Use the application configuration the way that it is in the real application
        // project
        var builder = Program.CreateHostBuilder(new string[0])

            // You may need to do this for any static 
            // content or files in the main application including
            // appsettings.json files
            // DirectoryFinder is an Alba helper, and the folder name here
            // would be your main application project's folder
            .UseContentRoot(DirectoryFinder.FindParallelFolder("WebApplication"))

            // Override the hosting environment to "Testing"
            .UseEnvironment("Testing");

        // This is the Alba scenario wrapper around
        // TestServer and an active .Net Core IHost
        System = new SystemUnderTest(builder);

        // There's also a BeforeEachAsync() signature
        System.BeforeEach(httpContext =>
        {
            // Take any kind of setup action before
            // each simulated HTTP request

            // In this case, I'm setting a fake JWT token on each request
            // as a demonstration
            httpContext.Request.Headers["Authorization"] = $"Bearer {generateToken()}";
        });

        System.AfterEach(httpContext =>
        {
            // Take any kind of teardown action after
            // each simulated HTTP request
        });
    }

    private string generateToken()
    {
        // In a current project, we implement this method
        // to create a valid JWT token with the claims that
        // the web services require
        return "Fake";
    }

    public SystemUnderTest System { get; }

    public void Dispose()
    {
        System?.Dispose();
    }
}

This new AppFixture class will only be built once by xUnit.Net and shared between test classes using the IClassFixture<AppFixture> interface.

Do note that you can express some actions in Alba to take immediately before or after executing an HTTP request through your system for typical setup or teardown operations. In one of my current projects, we exploit this capability to add a pre-canned JWT to the request headers that’s required by our system. In our case, that’s allowing us to test the security integration and the multi-tenancy support that depends on the JWT claims at the same time we’re exercising controller actions and the associated database access underneath it. If you’re familiar with the test pyramid idea, these tests are the middle layers of our testing pyramid.
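
If you’re wondering what a “real” generateToken() might look like, here’s a minimal sketch using the System.IdentityModel.Tokens.Jwt library. The signing key, issuer, audience, and claim names below are made up for illustration and would need to line up with whatever your application’s JWT authentication and multi-tenancy logic actually expect:

private string generateToken()
{
    // This signing key is a placeholder; it has to match the key your
    // application is configured to validate tokens with
    var key = new SymmetricSecurityKey(
        Encoding.UTF8.GetBytes("a-test-only-signing-key-of-sufficient-length"));

    var token = new JwtSecurityToken(
        issuer: "integration-tests",
        audience: "our-web-services",
        claims: new[]
        {
            // Hypothetical claims for the user identity and the tenant
            new Claim("sub", "test-user"),
            new Claim("tenant", "tenant-1")
        },
        expires: DateTime.UtcNow.AddHours(1),
        signingCredentials: new SigningCredentials(key, SecurityAlgorithms.HmacSha256));

    return new JwtSecurityTokenHandler().WriteToken(token);
}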

To simplify the xUnit.Net usage throughout the testing suite, I like to introduce a base class that inevitably accumulates utility methods for running Alba or common database setup and teardown functions. I tend to call this class IntegrationContext and here’s a sample:

public abstract class IntegrationContext : IClassFixture<AppFixture>
{
    protected IntegrationContext(AppFixture fixture)
    {
        Fixture = fixture;
    }

    public AppFixture Fixture { get; }

    /// <summary>
    /// Runs Alba HTTP scenarios through your ASP.Net Core system
    /// </summary>
    /// <param name="configure"></param>
    /// <returns></returns>
    protected Task<IScenarioResult> Scenario(Action<Scenario> configure)
    {
        return Fixture.System.Scenario(configure);
    }

    // The Alba system
    protected SystemUnderTest System => Fixture.System;

    // Just a convenience because you use it pretty often
    // in tests to get at application services
    protected IServiceProvider Services => Fixture.System.Services;
}


Now with the AppFixture and IntegrationContext in place, let’s write some specifications against the AdditionController endpoint shown earlier in this post:

public class DoMathSpecs : IntegrationContext
{
    public DoMathSpecs(AppFixture fixture) : base(fixture)
    {
    }

    // This specification uses the shorthand helpers in Alba
    // that's useful when you really only care about the data
    // going in or out of the HTTP endpoint
    [Fact]
    public async Task do_some_math_adds_and_multiples_shorthand()
    {
        var answers = await System.GetAsJson<Answers>("/math/3/4");

        answers.Sum.ShouldBe(7);
        answers.Product.ShouldBe(12);
    }

    // This specification shows the longhand way of executing an
    // Alba scenario and using some of its declarative assertions
    // about the expected HTTP response
    [Fact]
    public async Task do_some_math_adds_and_multiples_longhand()
    {
        var result = await Scenario(x =>
        {
            x.Get.Url("/math/3/4");
            x.ContentTypeShouldBe("application/json; charset=utf-8");
        });

        var answers = result.ResponseBody.ReadAsJson<Answers>();
        answers.Sum.ShouldBe(7);
        answers.Product.ShouldBe(12);
    }
}
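
As one more small taste (a sketch dropped into the same DoMathSpecs class, not an exhaustive tour), the scenario syntax handles unhappy paths too. Alba asserts a 200 status code by default, so you opt in explicitly when you expect something else:

[Fact]
public async Task missing_route_returns_404()
{
    await Scenario(x =>
    {
        x.Get.Url("/math/this/does/not/exist");

        // Alba would fail the scenario on a non-200 response
        // unless you declare the status code you expect
        x.StatusCodeShouldBe(404);
    });
}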

Alba can do a lot more to work with the HTTP requests and responses, but I hope this gives you a quick introduction to using Alba for integration testing.

Fast Build, Slow Build, and the Testing Pyramid

At Calavista we’ve been helping a couple of our clients use Selenium for automated testing of web applications. For one client we’re slowly introducing a slightly different, but still .Net-focused technical stack that allows for much more effective test automation without having to resort to quite so many Selenium tests. For another client we’re trying to help them optimize the execution time of their large Selenium test suite.

At this point, they’re only running the Selenium test suite in a scheduled run overnight, with their testers and developers needing to deal with any test failures the next day. Ideally, they want to get to the point where developers could optionally execute either the whole suite or a targeted subset of the Selenium tests on their own development branches whenever they want.

I think it’s unlikely that we’ll get the full Selenium test suite to where it executes fast enough that a developer would be willing to run those tests as part of their normal “check in dance” routine. To thread the needle between letting a developer get quick feedback from their own local builds or the main continuous integration builds and the desire to run the Selenium suite much more often for faster feedback, we’re suggesting they split the build activity up with what I’ve frequently seen called the “fast build, slow build” pattern (I couldn’t find anyone to attribute this to as I wrote this tonight, but I can’t take credit for it).

First off, let’s assume your project is following the idea of the “testing pyramid” one way or another such that your automated tests probably fall into one of three broad categories:

  1. Unit tests that don’t touch the database or other external services so they generally run pretty quickly. This would probably include things like business logic rules or validation rules.
  2. Integration tests that test a subset of the system and frequently use databases or other external services. HTTP contract tests are another example.
  3. End to end tests that almost inevitably run slowly compared to other types of tests. Selenium tests are notoriously slow and are the obvious example here.

The general idea is to segment the automated build something like this:

  1. Local developer’s build — You might only choose to compile the code and run fast unit tests as a check before you try to push commits to a branch on GitHub/BitBucket/Azure DevOps/whatever you happen to be using. If the integration tests in item #2 are fast enough, you might include them in this step. At times, I’ve divided a local build script into “full” and “fast” modes so I can easily choose how much to run at one time for local commits versus any kind of push (I’m obviously assuming that everybody uses Git by this point, so I apologize if the Git-centric terminology isn’t helpful here).
  2. The CI “fast build” — You’d run a superset of the local developer’s build, but add the integration tests that run reasonably quickly and maybe a small smattering of the end to end tests. This is the “fast build” to give the developer reasonable assurance that their push built successfully and didn’t break anything.
  3. The CI “slow build” of the rest of the end to end tests. This build would be triggered as a cascading build by the success of the “fast build” on the build server. The “slow build” wouldn’t necessarily be executed for every single push to source control, but there would at least be much more granularity in the tracking from build results to the commits picked up by the “slow build” execution. The feedback from these tests would also be much more timely than running overnight. The segregation into the “fast build / slow build” split allows developers not to be stuck waiting for long test runs before they can check in or continue working, but still get some reasonable feedback cycle from those bigger, slower, end to end tests.
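
One cheap way to carve the suites up along those lines (a sketch only, not necessarily what either client does) is to tag the slow end to end tests with an xUnit.Net trait and let the CI scripts filter on it. The class and category names below are illustrative:

// Mark the expensive end to end tests so the build scripts can filter them out
[Trait("Category", "Slow")]
public class CheckoutEndToEndTests
{
    [Fact]
    public void can_complete_an_order_through_the_browser()
    {
        // imagine a slow Selenium test here
    }
}

// Then the two CI builds just run different slices of the suite:
//   "fast build":  dotnet test --filter "Category!=Slow"
//   "slow build":  dotnet test --filter "Category=Slow"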



Storyteller 5.0 – Streamlined CLI, Netstandard 2.0, and easier debugging

I published the Storyteller 5.0 release last night. I punted on doing any kind of big user interface overhaul for now, and just released the back end improvements on their own with some vague idea that there’d be an improved or at least restyled user interface later this year.

The key improvements are:

  • Netstandard 2.0 support
  • An easier getting started story
  • Streamlined command line usage
  • Easier “F5 debugging” for specifications in your IDE
  • No changes whatsoever to your Fixture code from 4.0

Getting Started with Storyteller 5

Previous versions of Storyteller have been problematic for new users getting started and setting up projects with the right Nuget dependencies. I felt like things got a little better with the dotnet cli, but the enduring problem with that is how few .Net developers seem to use it or be familiar with it. When you use Storyteller 5, you need two dependencies in your Storyteller specification project:

  1. A reference to the Storyteller 5.0 assembly via Nuget
  2. The dotnet-storyteller command line tool referenced as a dotnet cli tool in your project, and that’s where most of the trouble comes in.

To start up a new Storyteller 5.0 specification project, first make the directory where you want the project to live. Next, use the dotnet new console command to create a new project with a csproj file and a Program.cs file.

In your csproj file, replace the contents with this, or just add the package reference for Storyteller and the cli tool reference for dotnet-storyteller as shown below:
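
A minimal csproj for that looks something like this. The version numbers below are illustrative; use the latest Storyteller 5.x package and dotnet-storyteller tool releases:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Storyteller" Version="5.0.0" />
    <DotNetCliToolReference Include="dotnet-storyteller" Version="1.0.0" />
  </ItemGroup>

</Project>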



Next, we need to get into the entry point to this new console application and change the Program.Main() method to activate the Storyteller engine within this new project:

    public class Program
    {
        public static int Main(string[] args)
        {
            return StorytellerAgent.Run(args);
        }
    }

Internally, the StorytellerAgent is using Oakton to parse the arguments and carry out one of these commands:

    Available commands:
       agent -> Used by dotnet storyteller to remote control the Storyteller specification engine
         run -> Executes Specifications and Writes Results
        test -> Try to start and warmup the system under test for diagnostics
    validate -> Use to validate specifications for syntax errors or missing grammars or fixtures

If you execute the console application with no arguments like this:

|> dotnet run

It will execute all the specifications and write the results to a file named “stresults.htm.”

You can customize the running behavior by passing in optional flags with the pattern dotnet run -- run --flag flagvalue like this example that just writes the results file to a different location:

|> dotnet run -- run Arithmetic -r ./artifacts/results.htm

If you’re not already familiar with the dotnet cli, what’s going on here is that anything to the right of the “--” double dash is considered to be part of the command line arguments passed into your application’s Main() method. The “run” argument tells the StorytellerAgent that you actually want to run specifications, and unfortunately it isn’t redundant; it’s very much mandatory if you want to customize how Storyteller runs specifications.

See the Storyteller 5.0 quickstart project for a working example.

Running the Storyteller Specification Editor

Assuming that you’ve got the cli tools reference to dotnet-storyteller and you’ve executed `dotnet restore` at least once (compiling through VS.Net or Rider does this for you), the only thing you need to do to launch the specification editor tool is this from the command line:

|> dotnet storyteller

F5 Debugging

Debugging complicated Storyteller specifications has been its Achilles’ heel from the very beginning. You can always attach a debugger to a running Storyteller process, but that’s clumsy (quicker in Rider than VS.Net, but still). As a cheap but effective improvement in v5, you can run a single specification from the command line with this signature:

|> dotnet run -- run "Suite1 / ChildSuite1 / Specification Name"

This is admittedly pretty ugly, but remember that you can tell either Rider or VS.Net to pass arguments to your console application when you press F5 to run an application in debug mode. I utilize this quite a bit in Jasper development to troubleshoot individual specifications. Here’s what the configuration looks like for this in Rider:



See the “Program arguments” specifically. Once the path to the specification is configured, I can just hit F5 and jump right into a debugging session running just that specification.

We looked pretty hard at supporting the dotnet test tooling so you could run Storyteller specifications from either Visual Studio.Net’s or Rider/ReSharper’s test runners, but all I could think about after trying to reverse engineer xUnit’s tooling around that was a certain Monty Python scene.

Concept for Integrating Selenium with Storyteller 4

While this is a working demonstration on my box, what I’m showing here is a very early conceptual approach for review by other folks in my shop. I’d love to have any feedback on this thing.

I spent quite a bit of time in our Salt Lake City office last week speaking with our QA folks about test automation in general and where Selenium does or doesn’t fit into our (desired) approach. The developers in my shop use Selenium quite a bit today within our Storyteller acceptance suite with mixed results, but now our QA folks are wanting to automate some of their manual test suite and kicking the tires on Selenium.

As a follow up to those discussions, this post shows the very early concept for how we can use Selenium functionality within Storyteller specifications for their and your feedback. All of the code is in Storyteller’s 4.1 branch.

Demo Specification

Let’s start very crude. Let’s say that you have a web page with a div tag holding some kind of user message text that’s hidden at first. On top of that, let’s say that you’ve got two buttons on the screen with the text “Show” and “Hide.” A Storyteller specification for that behavior might look like this:


and the HTML results would look like this:


The 3+ second runtime is mostly in the creation and launching of a Chrome browser instance. More on this later.

To implement this specification we need two things: Fixture classes that implement our desired language, and the actual specification data in a markdown file shown in the next section.

In this example, there would be a new “Storyteller.Selenium” library that provides the basis for integrating Selenium into Storyteller specifications with a common “ScreenFixture” base class for Fixtures that target Selenium. After that, the SampleFixture class used in the specification above looks like this:

    public class SampleFixture : ScreenFixture
    {
        public SampleFixture()
        {
            // This is just a little bit of trickery to
            // use human readable aliases for elements on
            // the page. The Selenium By class identifies
            // how Selenium should "find" the element
            Element("the Show button", By.Id("button1"));
            Element("the Hide button", By.Id("button2"));
            Element("the div", By.Id("div1"));
            Element("the textbox", By.Id("text1"));
        }

        protected override void beforeRunning()
        {
            // Launching Chrome and opening the browser to a sample
            // HTML page. In real life, you'd need to be smarter about this
            // and reuse the Driver across specifications for better
            // performance
            Driver = new ChromeDriver();
            RootUrl = "file://" + Project.CurrentProject.ProjectPath.Replace("\\", "/");
        }

        public override void TearDown()
        {
            // Clean up behind yourself
            Driver?.Quit();
        }
    }
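
Since most of that 3+ second runtime is Chrome startup, one obvious optimization (again, just a sketch, and not part of the proposed Storyteller.Selenium package) is to share a single browser instance across specifications and only shut it down when the whole run is finished:

    public static class SharedBrowser
    {
        // Spin Chrome up lazily the first time a specification needs it,
        // then reuse the same instance for every subsequent specification
        private static readonly Lazy<ChromeDriver> _driver
            = new Lazy<ChromeDriver>(() => new ChromeDriver());

        public static ChromeDriver Driver => _driver.Value;

        // Call this once when the whole specification run is done
        public static void Shutdown()
        {
            if (_driver.IsValueCreated)
            {
                _driver.Value.Quit();
            }
        }
    }

With something like that in place, beforeRunning() would just grab SharedBrowser.Driver instead of constructing a new ChromeDriver, and TearDown() would leave the browser alone.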

If you were editing the specifications in Storyteller’s Specification editor, you’d have a dropdown box listing the elements by name any place where you need to specify an element, like so:


Finally, the proposed Storyteller.Selenium package adds information to the performance logging for how long a web page takes to load. This is the time according to WebDriver and shouldn’t be used for detailed performance optimization, but it’s still a useful number to understand performance problems during Storyteller specification executions. See the “Navigation/simple.htm” line below:


What does the actual specification look like?

If you authored the specification above in the Storyteller user interface, you’d get this markdown file:

# Click Buttons

-> id = b721e06b-0b64-4710-b82b-cbe5aa261f60
-> lifecycle = Acceptance
-> max-retries = 0
-> last-updated = 2017-02-21T15:56:35.1528422Z
-> tags = 

|> OpenUrl url=simple.htm

This element is hidden by default
|> IsHidden element=the div

Clicking the "Show" button will reveal the div
|> Click element=the Show button
|> IsVisible element=the div

However, if you were writing the specification by hand directly in the markdown file, you can simplify it to this:

# Click Buttons

|> OpenUrl simple.htm

This element is hidden by default
|> IsHidden the div

Clicking the "Show" button will reveal the div
|> Click the Show button
|> IsVisible the div

We’re trying very hard with Storyteller 4 to make specifications easier to write for non-developers and what you see above is a product of that effort.

Why Storyteller + Selenium instead of just Selenium?

Why would you want to use Storyteller and Selenium together instead of just Selenium by itself? A couple of reasons:

  • There’s a lot more going on in effective automated tests besides driving web browsers (setting up system data, checking system data, starting/stopping the system under test). Storyteller provides a lot more functionality than Selenium by itself.
  • It’s very valuable to express automated tests in a higher level language with something like Storyteller or Cucumber instead of going right down to screen elements and other implementation details. I say this partially for making the specifications more human readable, but also to decouple the expression of the test from the underlying implementation details. You want to do this so that your tests can more readily accommodate structural changes to the web pages. If you’ve never worked on large scale automated testing against a web browser, you really need to be aware that these kinds of tests can be very brittle in the face of user interface changes.
  • Storyteller provides a lot of extra instrumentation and performance logging that can be very useful for debugging test failures or performance problems.
  • I hate to throw this one out there, but Storyteller’s configurable retry capability in continuous integration is very handy for test suites with oodles of asynchronous behavior like you frequently run into with modern web applications.

Because somebody will ask, or an F# enthusiast will inevitably throw this out there, yes, there’s Canopy as well that wraps a nice DSL around Selenium and provides some stabilization. I’m not disparaging Canopy in the slightest, but everything I said about using raw Selenium applies equally to using Canopy by itself. To be a bit more eye-poky about it, one of the first success stories of Storyteller 3 was in replacing a badly unstable test suite that used Canopy naively.