A sample environment check for OIDC authenticated web services

EDIT 8/25: Make sure you’re using Oakton 3.2.2 if you try to reproduce this sample, because of course I found and fixed an Oakton bug in this functionality in the process of writing the post:(

I think that what I’m calling an “environment check” here is a very valuable and underutilized technique in software development today. I had a good reason to stop and create one today to troubleshoot an issue I stubbed my toe on, so here’s a blog post about environment checks.

One of the things on my plate at work right now is prototyping out the usage of Identity Server 5 as our new identity provider across our enterprise. Today I was working with a spike that needed to run a pair of ASP.Net Core applications:

  1. A web application that used the Duende.BFF library and JWT bearer tokens as its server side authentication mechanism.
  2. The actual identity server application

All I was trying to do was to run both applications and log into the secure portions of the web application. No luck though; I was getting weird-looking exceptions about missing configuration. I eventually debugged through the ASP.Net Core OIDC authentication innards and found that my issue was that the web application was configured with the wrong authority Url for the identity server service.

That turned out to be a very simple problem to fix, but the exception message was completely unrelated to the actual error, and that’s the kind of problem that’s unfortunately likely to come up again as this turns into a real solution that’s separately deployed in several different environments later (local development, CI, testing, staging, production, you know the drill).

As a preventative measure — or at least a diagnostic tool — I stopped and added an environment check into the application just to assert that yes, it can successfully reach the configured OIDC authority at the configured Url we expect it to be. Mechanically, I added Oakton as the command line runner for the application. After that, I utilized Oakton’s facility for adding and executing environment checks to a .Net Core or .Net 5+ application.
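
That wiring is a one line change to the application’s entry point. Here’s a minimal sketch, assuming the standard generic host template:

// In Program.cs. The return type needs to be Task<int> so that
// Oakton can communicate command success or failure
public static Task<int> Main(string[] args)
{
    // This delegates to Oakton as the command line runner
    return CreateHostBuilder(args).RunOaktonCommands(args);
}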

Inside the Startup.ConfigureServices() method for the main web application, I have this code to configure the server-side authentication (sorry, I know this is a lot of code):

            services.AddBff()
                .AddServerSideSessions();

            // This is specific to this application. This is just 
            // a cheap way of doing strong typed configuration
            // without using the ugly IOptions<T> stuff
            var settings = Configuration.Get<OidcSettings>();
            
            // configure server-side authentication and session management
            services.AddAuthentication(options =>
                {
                    options.DefaultScheme = "cookie";
                    options.DefaultChallengeScheme = "oidc";
                    options.DefaultSignOutScheme = "oidc";
                })
                .AddCookie("cookie", options =>
                {
                    // host prefixed cookie name
                    options.Cookie.Name = settings.CookieName;

                    // strict SameSite handling
                    options.Cookie.SameSite = SameSiteMode.Strict;
                })
                .AddOpenIdConnect("oidc", options =>
                {
                    // This is the root Url for the OIDC authority
                    options.Authority = settings.Authority;

                    // confidential client using code flow + PKCE + query response mode
                    options.ClientId = settings.ClientId;
                    options.ClientSecret = settings.ClientSecret;
                    options.ResponseType = "code";
                    options.ResponseMode = "query";
                    options.UsePkce = true;

                    options.MapInboundClaims = true;
                    options.GetClaimsFromUserInfoEndpoint = true;

                    // save access and refresh token to enable automatic lifetime management
                    options.SaveTokens = true;

                    // request scopes
                    options.Scope.Clear();
                    options.Scope.Add("openid");
                    options.Scope.Add("profile");
                    options.Scope.Add("api");

                    // request refresh token
                    options.Scope.Add("offline_access");
                });
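
For reference, the OidcSettings class is just a plain configuration class. A minimal sketch, with the property names inferred from the usage above:

public class OidcSettings
{
    // The root Url of the OIDC identity server
    public string Authority { get; set; }

    // Client credentials for this web application
    public string ClientId { get; set; }
    public string ClientSecret { get; set; }

    // Host prefixed cookie name for the server-side session
    public string CookieName { get; set; }
}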

The main thing I want you to note in the code above is that the Url for the OIDC identity service is bound from the raw configuration to the OidcSettings.Authority property. Now that we know where the authority Url comes from, the environment check itself is this code, again within the Startup.ConfigureServices() method:

            // This extension method registers an environment check with Oakton
            services.CheckEnvironment<HttpClient>($"Is the expected OIDC authority at {settings.Authority} running", async (client, token) =>
            {
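                // GetDiscoveryDocumentAsync() is an extension method on HttpClient
                // from the IdentityModel library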
                var document = await client.GetDiscoveryDocumentAsync(settings.Authority, cancellationToken: token);
                
                if (document == null)
                {
                    throw new Exception("Unable to load the OIDC discovery document. Is the OIDC authority server running?");
                }
                else if (document.IsError)
                {
                    throw document.Exception;
                }
            });

Now, if the authority Url is misconfigured, or the OIDC identity service is not running, I can quickly diagnose that issue by running this command from the main web application directory:

dotnet run -- check-env

If I run this command while the identity service is down, I’ll get this lovely output:

Stop, dogs, stop! The light is red!

Hopefully you can tell from the description and the exception message that the identity service that is supposed to be at https://localhost:5010 is not responsive. In this case it was because I had purposely turned it off to show you the failure, so let’s turn the identity service on and try again with dotnet run -- check-env:

Go, dogs, go! It’s green ahead!

As part of turning this work over to other developers to build into real, production-worthy code, we’ll document the dotnet run -- check-env tooling for detecting errors. More effectively though, with Oakton the command line call will return a non-zero exit code if the environment checks fail, so it’s useful to run this command in automated deployments as a “fail fast” check on the environment that can point to specifically where the application is misconfigured or external dependencies are unreachable.

As another alternative, Oakton can run these environment checks for you at application startup time by using the -c or --check flag when the application executable is executed.
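
That would look something like this, assuming you’re using Oakton’s default run command:

dotnet run -- run --check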

Lamar v6 is out! Expanded interception options, overriding test services, better documentation

Lamar is a modern IoC container library that is optimized for usage within .Net Core / .Net 5/6 applications and natively implements the .Net DI abstractions (AddTransient/Singleton/Scoped() etc). Lamar was specifically built to be a much higher performance version of the old StructureMap library with similar usage for relatively easy migration.

Lamar v6.0.0 went live today. It’s not a very large release at all, but I took this opportunity to tuck some previously public types into being internal to clean up your Intellisense, and I eliminated some obsolete public members. I don’t anticipate any changes in your code for the vast majority of Lamar users upgrading.

The highlights of this release are:

  • Lamar’s documentation website got a big facelift as it moved from an old Bootstrap theme with my old stdocs tool to using Vitepress and mdsnippets. I also did a sweep to remove old StructureMap nomenclature and replace that with the newer Lamar nomenclature that very purposely mimics the .Net DI abstraction terminology.
  • New functionality to reliably override service registrations at container creation time. This was specifically meant for automated integration testing where you may want to override specific service registrations. This hasn’t been a straightforward thing in the past because the generic host builder applies service registrations from Startup classes last no matter what, and that defeated many efforts to override services in testing.
  • AaronAllBright contributed an important pull request for some missing activation features. That was expanded into a full blown interception and activation feature set to go along with the support for decorators in Lamar v1+. That’s been a frequent concern for folks looking to move off of StructureMap and one of the biggest remaining features in Lamar’s backlog, so mark that one off.
  • A couple bug fixes, but I think this is the only big one.
  • I stopped and modernized the devops a bit to replace the old Rake build with Bullseye & SimpleExec. I also moved the CI from AppVeyor to GitHub actions. I wouldn’t say that either move was a big deal, but it is nice having the CI information right in the GitHub website.

Alba v5.0 is out! Easy recipes for integration testing ASP.Net web services

Alba is a small library to facilitate easier integration testing of web services built on the ASP.Net Core platform. It’s more or less a wrapper around the ASP.Net TestServer, but does a lot to make testing much easier than writing low level code against TestServer. You would use Alba from within xUnit.Net or NUnit testing projects.

I outlined a proposal for Alba V5 a couple weeks back, and today I was able to publish the new Nuget. The big changes are:

  • The Alba documentation website was rebuilt with Vitepress and refreshed to reflect V5 changes
  • The old SystemUnderTest type was renamed AlbaHost to make the terminology more in line with ASP.Net Core
  • The Json serialization is done through the configured input and output formatters within your ASP.Net Core application — meaning that Alba works just fine regardless of whether your application uses Newtonsoft.Json or System.Text.Json for its Json serialization.
  • There is a new extension model that was added specifically to support testing web services that are authenticated with JWT bearer tokens.

For security, you can opt to:

  1. Stub out all of the authentication and use hard-coded claims
  2. Add a valid JWT to every request with user-defined claims
  3. Let Alba deal with live OIDC integration by handling the JWT integration with your OIDC identity server

Testing web services secured by JWT tokens with Alba v5

We’re working toward standing up a new OIDC infrastructure built around Identity Server 5, with a couple gigantic legacy monolith applications and potentially dozens of newer microservices needing to use this new identity server for authentication. We’ll have to have a good story for running our big web applications that will have this new identity server dependency at development time, but for right now I just want to focus on an automated testing strategy for our newer ASP.Net Core web services using the Alba library.

First off, Alba is a helper library for integration testing HTTP API endpoints in .Net Core systems. Alba wraps the ASP.Net Core TestServer while providing quite a few convenient helpers for setting up and verifying HTTP calls against your ASP.Net Core services. We will shortly be introducing Alba into my organization at MedeAnalytics as a way of doing much more integration testing at the API layer (think the middle layer of any kind of testing pyramid concept).

In my previous post I laid out some plans and proposals for a quickly forthcoming Alba v5 release, with the biggest improvement being a new model for being able to stub out OIDC authentication for APIs that are secured by JWT bearer tokens (I think I win programming bingo for that sentence!).

Before I show code, I should say that all of this code is in the v5 branch of Alba on GitHub, but not yet released as it’s very heavily in flight.

To start, I’m assuming that you have a web service project, then a testing library for that web service project. In your web application, bearer token authentication is set up something like this inside your Startup.ConfigureServices() method:

services.AddAuthentication("Bearer")
    .AddJwtBearer("Bearer", options =>
    {
        // A real application would pull all this information from configuration
        // of course, but I'm hardcoding it in testing
        options.Audience = "jwtsample";
        options.ClaimsIssuer = "myapp";
        
        // don't worry about this, our JwtSecurityStub is gonna switch it off in
        // tests
        options.Authority = "https://localhost:5001";
            

        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateAudience = false,
            IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes("some really big key that should work"))
        };
    });

And of course, you also have these lines of code in your Startup.Configure() method to add in ASP.Net Core middleware for authentication and authorization:

app.UseAuthentication();
app.UseAuthorization();

With these lines of setup code, you will not be able to hit any secured HTTP endpoint in your web service unless there is a valid JWT token in the Authorization header of the incoming HTTP request. Moreover, with this configuration your service would need to make calls to the configured bearer token authority (https://localhost:5001 above). It’s going to be awkward and probably very brittle to depend on having the identity server spun up and running locally when our developers try to run API tests. It would obviously be helpful if there was a quick way to stub out the bearer token authentication in testing to automatically supply known claims so our developers can focus on developing their individual service’s functionality.

That’s where Alba v5 comes in with its new JwtSecurityStub extension that will:

  1. Disable any validation interactions with an external OIDC authority
  2. Automatically add a valid JWT token to any request being sent through Alba
  3. Give developers fine-grained control over the claims attached to any specific request if there is logic that will vary by claim values

To demonstrate this new Alba functionality, let’s assume that you have a testing project that has a direct reference to the web service project. The direct project reference is important because you’ll want to spin up the “system under test” in a test fixture like this:

    public class web_api_authentication : IDisposable
    {
        private readonly IAlbaHost theHost;

        public web_api_authentication()
        {
            // This is calling your real web service's configuration
            var hostBuilder = Program.CreateHostBuilder(new string[0]);

            // This is a new Alba v5 extension that can "stub" out
            // JWT token authentication
            var jwtSecurityStub = new JwtSecurityStub()
                .With("foo", "bar")
                .With(JwtRegisteredClaimNames.Email, "guy@company.com");

            // AlbaHost was "SystemUnderTest" in previous versions of
            // Alba
            theHost = new AlbaHost(hostBuilder, jwtSecurityStub);
        }

        public void Dispose()
        {
            theHost?.Dispose();
        }
    }

I was using xUnit.Net in this sample, but Alba is agnostic about the actual testing library and we’ll use both NUnit and xUnit.Net at work.

In the code above I’ve bootstrapped the web service with Alba and attached the JwtSecurityStub. I’ve also established some baseline claims that will be added to every JWT token on all Alba scenario requests. AlbaHost extends the IHost interface you’re already used to in .Net Core, but adds the important Scenario() method that you can use to run HTTP requests all the way through your entire application stack, like this test:

        [Fact]
        public async Task post_to_a_secured_endpoint_with_jwt_from_extension()
        {
            // Building the input body
            var input = new Numbers
            {
                Values = new[] {2, 3, 4}
            };

            var response = await theHost.Scenario(x =>
            {
                // Alba deals with Json serialization for us
                x.Post.Json(input).ToUrl("/math");
                
                // Enforce that the HTTP Status Code is 200 Ok
                x.StatusCodeShouldBeOk();
            });

            var output = response.ResponseBody.ReadAsJson<Result>();
            output.Sum.ShouldBe(9);
            output.Product.ShouldBe(24);
        }

You’ll notice that I did absolutely nothing in regards to JWT set up or claims or anything. That’s because the JwtSecurityStub is taking care of everything for you. It’s:

  1. Reaching into your application’s bootstrapping to pluck out the right signing key so that it builds JWT token strings that can be validated with the right signature
  2. Turning off any external token validation against an external OIDC authority
  3. Placing a unique, unexpired JWT token on each request that matches the issuer and authority configuration of your application

Now, to further control the claims used on any individual scenario request, you can use this new method in Scenario tests:

        [Fact]
        public async Task can_modify_claims_per_scenario()
        {
            var input = new Numbers
            {
                Values = new[] {2, 3, 4}
            };

            var response = await theHost.Scenario(x =>
            {
                // This is a custom claim that would only be used for the 
                // JWT token in this individual test
                x.WithClaim(new Claim("color", "green"));
                x.Post.Json(input).ToUrl("/math");
                x.StatusCodeShouldBeOk();
            });

            var principal = response.Context.User;
            principal.ShouldNotBeNull();
            
            principal.Claims.Single(x => x.Type == "color")
                .Value.ShouldBe("green");
        }

I’ve got plenty more ground to cover in how we’ll develop locally with our new identity server strategy, but I’m feeling pretty good about having a decent API testing strategy. All of this code is just barely written, so any feedback you might have would be very timely. Thanks for reading about some brand new code!

Proposal for Alba v5: JWT secured APIs, more JSON options, and other stuff

Just typed up a master GitHub issue for Alba v5, and I’ll take any possible feedback anywhere, but I’d prefer you make requests on GitHub. Here’s the GitHub issue I just banged out, copied and pasted for anybody interested:

I’m dipping into Alba over the next couple work days to add some improvements that we want at MedeAnalytics to test HTTP APIs that are going to be secured with JWT Bearer Token authentication that’s ultimately backed up by IdentityServer5. I’ve done some proof of concepts off to the side on how to deal with that in Alba tests with or without the actual identity server up and running, so I know it’s possible.

However, when I started looking into how to build a more formal “JWT Token” add on to Alba and catching up with GitHub issues on Alba, it’s led me to the conclusion that it’s time for a little overhaul of Alba as part of the JWT work that’s turning into a proposed Alba v5 release.

Here’s what I’m thinking:

  • Ditch all support < .Net 5.0 just because that makes things easier on me maintaining Alba. Selfishness FTW.
  • New Alba extension model, more on this below
  • Revamp the Alba bootstrapping so that it’s just an extension method hanging off the end of IHostBuilder that will create a new ITestingHost interface for the existing SystemUnderTest class. More on this below.
  • Decouple Alba’s JSON facilities from Newtonsoft.Json. A couple folks have asked if you can use System.Text.Json with Alba so far, so this time around we’ll try to dig into the application itself and find the JSON formatter from the system and make all the JSON serialization “just” use that instead of recreating anything
  • Enable easy testing of gRPC endpoints like we do for classic JSON services

DevOps changes

  • Replace the Rake build script with Bullseye
  • Move the documentation website from stdocs to VitePress with a similar set up as the new Marten website
  • Retire the existing AppVeyor CI (that works just fine), and move to GitHub actions for CI, but add a new action for publishing Nugets

Bootstrapping Changes

Today you use code like this to bootstrap Alba against your ASP.Net Core application:

var system = new SystemUnderTest(WebApi.Program.CreateHostBuilder(new string[0]));

where Alba’s SystemUnderTest intercepts an IHostBuilder, adds some additional configuration, swaps out Kestrel for the ASP.Net Core TestServer, and starts the underlying IHost.

In Alba v5, I’d like to change this a little bit to being an extension method hanging off of IHostBuilder so you’d have something more like this:

var host = WebApi.Program.CreateHostBuilder(new string[0]).StartAlbaHost();

// or

var host = await WebApi.Program.CreateHostBuilder(new string[0]).StartAlbaHostAsync();

The host above would be this new interface (still the old SystemUnderTest under the covers):

public interface IAlbaHost : IHost
{
    Task<ScenarioResult> Scenario(Action<Scenario> configure);

    // various methods to register actions to run before or after any scenario is executed
}

Extension Model

Right now the only extension I have in mind is one for JWT tokens, but it probably goes into a separate Nuget, so here you go.

public interface ITestingHostExtension : IDisposable, IAsyncDisposable
{
    // Make any necessary registrations to the actual application
    IHostBuilder Configure(IHostBuilder builder);

    // Spin it up if necessary before any scenarios are run, also gives
    // an extension a chance to add any BeforeEach() / AfterEach() kind
    // of hooks to the Scenario or HttpContext
    ValueTask Start(ITestingHost host);
}

My thought is that ITestingHostExtension objects would be passed into the StartAlbaHost(params ITestingHostExtension[] extensions) method from above.
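
In usage, that would be something like this sketch, where JwtTokensExtension stands in for the hypothetical JWT extension described below:

var host = WebApi.Program.CreateHostBuilder(new string[0])
    .StartAlbaHost(new JwtTokensExtension());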

JWT Bearer Token Extension

This will probably need to be in a separate Alba.JwtTokens Nuget library.

  • Extension method on Scenario to add a bearer token to the Authorization header. Small potatoes.
  • Helper of some sort to fetch JWT tokens from an external identity server and use that token on further scenario runs. I’ve got this prototyped already
  • A new Alba extension that acts as a stubbed identity server. More below:

For the stubbed in identity server, what I’m thinking is that a registered IdentityServerStub extension would:

  • Spin up a small fake OIDC web application hosted in memory w/ Kestrel that responds to the basic endpoints to validate JWT tokens.
  • With this extension, Alba will quietly stick a JWT token into the Authorization header of each request so you can test API endpoints that are secured by bearer tokens
  • Reaches into the system under test’s IHostBuilder and sets the authority Url to the local Url of the stubbed OIDC server (I’ve already successfully prototyped this one)
  • Also reaches into the system under test to find the JwtBearerOptions as configured and uses the signing keys that are already configured for the application for the JWT generation. That should make the setup of this quite a bit simpler
  • The new JWT stub extension can be configured with baseline claims that would be part of any generated JWT
  • A new extension method on Scenario to add or override claims in the JWT tokens on a per-scenario basis to support fine-grained authorization testing in your API tests
  • An extension method on Scenario to disable the JWT header altogether so you can test out authentication failures
  • Probably need a facility of some sort to override the JWT expiration time as well

Dynamic Code Generation in Marten V4

Marten V4 extensively uses runtime code generation backed by Roslyn runtime compilation for dynamic code. This is much more powerful than source generators in terms of what it allows us to actually do, but it can come with significant memory usage and “cold start” problems (these seem to depend on the exact configuration, so it’s not a given that you’ll have these issues). In this post I’ll show the facility we have to “generate ahead” the code to dodge the memory and cold start issues at production time.

Before V4, Marten had used the common model of building up Expression trees and compiling them into lambda functions at runtime to “bake” some of our dynamic behavior into fast running code. That does work and a good percentage of the development tools you use every day probably use that technique internally, but I felt that we’d already outgrown the dynamic Expression generation approach as it was, and the new functionality requested in V4 was going to substantially raise the complexity of what we were going to need to do.
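
Just to illustrate that older technique in isolation, here’s a tiny, generic sketch (not actual Marten code) of building an Expression tree at runtime and compiling it into a reusable delegate:

using System;
using System.Linq.Expressions;

public static class ExpressionTreeSample
{
    public static Func<string, int> BuildLengthGetter()
    {
        // Build an expression equivalent to: s => s.Length
        var s = Expression.Parameter(typeof(string), "s");
        var body = Expression.Property(s, nameof(string.Length));
        var lambda = Expression.Lambda<Func<string, int>>(body, s);

        // Compile() "bakes" the expression tree into a callable delegate
        return lambda.Compile();
    }
}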

Instead, I (Marten is a team effort, but I get all the blame for this one) opted to use the dynamic code generation and compilation approach using LamarCodeGeneration and LamarCompiler that I’d originally built for other projects. This allowed Marten to generate much more complex code than I thought was practical with other models (we could have used IL generation too of course, but that’s an exercise in masochism). If you’re interested, I gave a talk about these tools and the approach at NDC London 2019.

I do think this has worked out in terms of performance improvements at runtime, and it certainly helped us introduce the new document and event store metadata features in V4, but there’s a catch. Actually two:

  1. The Roslyn compiler sucks down a lot of memory sometimes and doesn’t seem to ever release it. It’s gotten better with newer releases and it’s not consistent, but still.
  2. There’s a sometimes significant lag in cold start scenarios on the first time Marten needs to generate and compile code at runtime

What we could do, though, is provide what I call…

Marten’s “Generate Ahead” Strategy

To sidestep the problems with the Roslyn compilation, I developed a model (I did this originally in Jasper) to generate the code ahead of time and have it compiled into the entry assembly for the system. The last step is to direct Marten to use the pre-compiled types instead of generating the types at runtime.

Jumping straight into a sample console project to show off this functionality, I’m configuring Marten with the AddMarten() method you can see in this code on GitHub.

The important line of code you need to focus on here is this flag:

opts.GeneratedCodeMode = TypeLoadMode.LoadFromPreBuiltAssembly;

This flag in the Marten configuration directs Marten to first look in the entry assembly of the application for any types that it would normally try to generate at runtime, and if a type exists, load it from the entry assembly and bypass any invocation of Roslyn. I think in a real application you’d wrap that call in something like this, so that it only applies when the application is running in production mode:

if (Environment.IsProduction())
{
    options.GeneratedCodeMode = TypeLoadMode.LoadFromPreBuiltAssembly;
}

The next thing to notice is that I have to tell Marten ahead of time what the possible document types are, and even about any compiled query types, so that Marten will “know” what code to generate in the next section. The compiled query registration is new, but you already had to let Marten know about the document types to make the schema migration functionality work anyway.
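
That pre-registration looks something like this inside the AddMarten() configuration, where User and Invoice are hypothetical document types:

// Inside AddMarten(opts => ...)
// Marten has to know about these document types up front
// so that the codegen command knows what code to write
opts.RegisterDocumentType<User>();
opts.RegisterDocumentType<Invoice>();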

Generating and exporting the code ahead of time is done from the command line through an Oakton command. First though, add the LamarCodeGeneration.Commands Nuget to your entry project, which will also add a transitive reference to Oakton if you’re not already using it. This is all described in the Oakton getting started page, but you’ll need to change your Program.Main() method slightly to activate Oakton:

        // The return value needs to be Task<int> 
        // to communicate command success or failure
        public static Task<int> Main(string[] args)
        {
            return CreateHostBuilder(args)
                
                // This makes Oakton be your CLI
                // runner
                .RunOaktonCommands(args);
        }

Open up the command terminal of your preference at the root directory of the entry project and type this command to see the available Oakton commands:

dotnet run -- help

That’ll spit out a list of commands and the assemblies where Oakton looked for command types. You should see output similar to this:

Searching 'LamarCodeGeneration.Commands, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' for commands
Searching 'Marten.CommandLine, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' for commands

  -------------------------------------------------
    Available commands:
  -------------------------------------------------
        check-env -> Execute all environment checks against the application
          codegen -> Utilities for working with LamarCodeGeneration and LamarCompiler

And assuming you got that far, now type dotnet run -- help codegen to see the list of options for the codegen command.

If you just want to preview the generated code in the console, type:

dotnet run -- codegen preview

To just verify that the dynamic code can be successfully generated and compiled, use:

dotnet run -- codegen test

To actually export the generated code, use:

dotnet run -- codegen write

That command will write a single C# file at /Internal/Generated/DocumentStorage.cs for any document storage types and another at /Internal/Generated/Events.cs for the event storage and projections.

Just as a shortcut to clear out any old generated code, you can use:

dotnet run -- codegen delete

If you’re curious, the generated code — and remember that it’s generated code so it’s going to be butt ugly — is going to look like this for the document storage, and this code for the event storage and projections.

The way that I see this being used is something like this:

  • The LoadFromPreBuiltAssembly option is only turned on in Production mode, and stays disabled at development time, so that developers can iterate at will during development.
  • As part of the CI/CD process for a project, you’ll run the dotnet run -- codegen write command as an initial step, then proceed to the normal compilation and testing cycles. That will bake the generated code right into the compiled assemblies and enable you to also take advantage of any kind of AOT compiler optimizations

Duh. Why not source generators doofus?

Why didn’t we use source generators you may ask? The Roslyn-based approach in Marten is both much better and much worse than source generators. Source generators are part of the compilation process itself and wouldn’t have any kind of cold start problem like Marten has with the runtime Roslyn compilation approach. That part is obviously much better, plus there’s no weird two step compilation process at CI/CD time. But on the downside, source generators can only use information that’s available at compilation time, and the Marten code generation relies very heavily on type reflection, conventions applied at runtime, and configuration built up through a fluent interface API. I do not believe that we could use source generators for what we’ve done in Marten because of that dependency on runtime information.

Marten, the Generic Host Builder in .Net Core, and why this could be the golden age for OSS in .Net

Before I dive into some newly improved mechanics for quickly integrating Marten into just about any possible kind of .Net 5+ application, I want to take a trip through yesteryear and appreciate just how much better the .Net platform is today than it’s ever been. If you wanna skip down to the positive part of the post, just look for the sub heading “The better future for .Net is already here!”

I’ve been a .Net developer since the very ugly beginning when we were all just glad to be off VB6/COM and onto a “modern” programming language that felt like a better Java but not yet completely horrified at how disastrously bad WebForms was going to turn out to be or disappointed at how poorly all our new .Net stuff worked out when we tried to adopt Agile development practices like Test Driven Development.

I’ve also been very deeply invested in developing open source tools and frameworks on the .Net framework, starting with StructureMap, the very first production capable Inversion of Control library on .Net that first went into a production system in the summer of 2004.

As the maintainer of StructureMap throughout, I had a front row seat to the absolute explosion of adapter libraries for seemingly every possible permutation of framework, IoC tool, and logging library. And this happened because every framework of any note in the .Net world had a completely different way to abstract away the actual IoC container or allowed you to use different logging tools through the specific framework’s special adapter mechanisms.

I’m also old enough to remember when every tool took a direct dependency on log4net around the time they lost their public strong naming key and broke compatibility. That was loads of fun.

There also wasn’t a standard way to deal with application lifecycle events across different types of .Net applications or application frameworks (ASP.Net’s Global.asax being an exception, I guess). Every framework at that time had a different way of being bootstrapped at application start up time, so developers had to do a lot of plumbing code as they pieced all this junk together with the myriad adapter libraries.

To recap the “old”, pre-Core .Net world:

  • Application framework developers had to create their own IoC and logging abstractions and support a plethora of adapter libraries as well as possibly having to create all new application lifecycle management facilities as part of their framework
  • Library developers had to support users through their usage of problematic framework integrations where the library in question was often used in non-optimal ways (my personal pet peeve, and I edited this post before publishing to eliminate some harsh criticism of certain well known .Net application frameworks).
  • Developers of actual .Net applications were rightfully intimidated by the complexity necessary to integrate open source tools, or simply struggled to do so as there was no standardization of how all those tools went about their integration into .Net applications.

I would argue that the situation in .Net as it was before .Net Core seriously impeded the development and adoption of open source tooling in our community — which also had the effect of slowing down the rate of innovation throughout the .Net community.

You might notice that I used the past tense quite a bit in the previous paragraphs, and that’s because I think…

The better future for .Net is already here!

It’s hard to overstate this, but the generic host builder (IHostBuilder and IHost) in .Net Core / .Net 5 and the related abstractions are hugely advantageous for the .Net ecosystem. I tried to explain why I think this a bit in a recent episode of .Net Rocks, but it’s probably easier to demonstrate this by integrating Marten into a new .Net 5 web api project.

Assuming that you’re already familiar with the usage of the Startup class for configuring ASP.Net Core applications, I’m going to add Marten to the new service with the following code:

public void ConfigureServices(IServiceCollection services)
{
    // This is the absolute, simplest way to integrate Marten into your
    // .Net Core application with Marten's default configuration
    services.AddMarten(options =>
    {
        // Establish the connection string to your Marten database
        options.Connection(Configuration.GetConnectionString("Marten"));

        // If we're running in development mode, let Marten just take care
        // of all necessary schema building and patching behind the scenes
        if (Environment.IsDevelopment())
        {
            options.AutoCreateSchemaObjects = AutoCreate.All;
        }

        if (Environment.IsProduction())
        {
            // This is a new V4 feature I'll blog about soon...
            options.GeneratedCodeMode = TypeLoadMode.LoadFromPreBuiltAssembly;
        }

        // Turn on the async projection daemon in only one node
        options.Projections.AsyncMode = DaemonMode.HotCold;

        // Configure other elements of the event store or document types...
    });
}

Even if you’re a developer who is completely unfamiliar with Marten itself, the integration through AddMarten() follows the recent .Net idiom and would be conceptually similar to many other tools you probably already use. That by itself is a very good thing, as there’s less novelty for new Marten developers (there are, of course, matching UseMarten() signatures that hang off of IHostBuilder that are useful for other kinds of .Net projects)

I purposely used a few bells and whistles in Marten to demonstrate more capabilities of the generic host in .Net Core. To go into more detail, the code above:

  • Registers all the necessary Marten services with the proper service lifecycles in the underlying system container. And that works for any IoC container that supports the standard DI abstractions in the Microsoft.Extensions.DependencyInjection namespace. There isn’t any kind of Marten.Lamar or Marten.DryIoC or Marten.WhateverIoCToolYouHappenToLike adapter libraries.
  • I’ve also turned on Marten’s asynchronous projection process which will run as a long running task in the application by utilizing the standard .Net Core IHostedService mechanism whose lifecycle will be completely managed by the generic host in .Net.
  • We were able to do a little bit of environment specific configuration to let Marten do all necessary schema management automatically at development time, but not allow Marten to change database schemas in production. I also opted into a new Marten V4 optimization for production cold starts when the application is running in “Production” mode. That’s easy to do now using .Net’s standard IHostEnvironment mechanism at bootstrapping time.
  • It’s not obvious from that code, but starting in V4.0, Marten will happily log to the standard ILogger interface from .Net Core. Marten itself doesn’t care where or how you actually capture log information, but because .Net itself now specifies a single set of logging abstractions to rule them all, Marten can happily log to any commonly used logging mechanism (NLog, Serilog, the console, Azure logging, whatever) without any other kind of adapter.
  • And oh yeah, I had to add the connection string to a Postgresql database from .Net Core’s IConfiguration interface, which itself could be pulling information in any number of ways, but our code doesn’t have to care about anything else but that one standard interface.

As part of the team behind Marten, I feel like our jobs are easier now than they were just prior to .Net Core and the generic host builder, because now:

  • There are .Net standard abstractions for logging, configuration, and IoC service registration
  • The IHostedService mechanism is an easy, standard way in .Net to run background processes or even just to hook into application start up and tear down events with custom code (see the sketch after this list)
  • The IServiceCollection.Add[Tool Name]() and IHostBuilder.Use[Tool Name]() idioms provided us a way to integrate Marten into basically any kind of .Net Core / .Net 5/6 application through a mechanism that is likely to feel familiar to experienced .Net developers — which hopefully reduces the barrier to adoption for Marten
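
As a quick illustration of that second point, a minimal IHostedService is nothing more than a class like this generic sketch (not Marten’s actual projection daemon code):

using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class SampleBackgroundService : IHostedService
{
    // Called by the generic host when the application starts up
    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Kick off a long running background process here
        return Task.CompletedTask;
    }

    // Called by the generic host on graceful shutdown
    public Task StopAsync(CancellationToken cancellationToken)
    {
        return Task.CompletedTask;
    }
}

Register it with services.AddHostedService<SampleBackgroundService>(), and the generic host manages its lifecycle from there.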

As some of you reading this may know, I was very heavily involved in building an alternative web development framework for .Net called FubuMVC well before .Net Core came on to the scene. When we built FubuMVC, we had to create from scratch a lot of the exact same functionality that’s in the generic host today to manage application lifecycle events, bootstrap the application’s IoC container through its own IoC abstractions, and provide hooks for extension with other .Net libraries. I wouldn’t say that was the primary reason that FubuMVC failed as a project, but it certainly didn’t help.

Flash forward to today, and I’ve worked off and on on a complete replacement for FubuMVC on .Net Core called Jasper for asynchronous messaging and an alternative for HTTP services. That project isn’t going anywhere at the moment, but Jasper’s internals are much, much smaller than FubuMVC’s were for similar functionality because Jasper can rely on the functionality and standard abstractions of the generic host in .Net Core. That makes tools like Jasper much easier for .Net developers to write.

Unsurprisingly, there are plenty of design and implementation decisions that Microsoft made with the generic host builder that I don’t agree with. I can point to a few things that I believe FubuMVC did much better for composition a decade ago than ASP.Net Core does today (and vice versa too of course). I deeply resent how the ASP.Net Core team wrote the IoC compliance rules that were rammed down the throats of all us IoC library maintainers. You can also point out that there are cultural problems within the .Net community in regards to OSS and that Microsoft itself severely retards open source innovation in .Net by stomping on community projects at will.

All of that may be true, but from a strictly technical perspective, I think that it is now much easier for the .Net community to build and adopt innovative open source tools than it ever has been. I largely credit the work that’s been done with the generic host, the common idioms for tool integration, and the standard interfaces that came in the .Net Core wave for potentially opening up a new golden age for .Net OSS development.

Brief Update on Marten V4

The latest, greatest Marten V4 release candidate is up on Nuget this morning. Marten V4 has turned out to be a massive undertaking for an OSS project with a small team, but we’re not that far off what’s turned into our “Duke Nukem Forever” release. We’re effectively “code complete” other than occasional bugs coming in from early users — which is fortunately a good thing as it’s helped us make some things more robust without impacting the larger community.

At this point it’s mostly a matter of completing the documentation updates for V4, and I’m almost refusing to work on new features until that’s complete. The Marten documentation website is moving to VitePress going forward, so it’ll be re-skinned with better search capabilities. We’re also taking this opportunity to reorganize the documentation with the hopes of making that site more useful for users.

Honestly, the biggest thing holding us back right now in my opinion is that I’ve been a little burned out and slow working on the documentation. Right now, I’m hoping we can pull the trigger on V4 in the next month or two.

Joining MedeAnalytics

I joined MedeAnalytics last week in a leadership role within their architecture team. Besides helping tackle strategic technical goals like messaging, test automation, and cloud architecture, I’m trying to play the Martin Fowler-style “bumble bee” role across development teams on topics like tools, techniques, and the technologies we use.

I’m hopeful that being in a strategic role where I’m constantly neck deep in technical challenges and learning new things will reboot my blogging a bit, but we’ll see. Right off the bat, I’ve been reading up on the C4 Model for describing software architecture and the Structurizr toolset as a way of reverse engineering our existing systems for my own edification. At least expect a positive review of both soon.

I don’t know how this will change my OSS work at the moment, other than I’ll be answering Lamar issues raised by my new MedeAnalytics colleagues a lot faster than I probably did for them before 🙂

Using Alba to Test ASP.Net Services

TL;DR: Alba is a handy library for integration tests against ASP.Net web services, and the rapid feedback that enables is very useful

One of our projects at Calavista right now is helping a client modernize and optimize a large .Net application, with the end goal being everything running on .Net 5 and an order of magnitude improvement in system throughput. As part of the effort to upgrade the web services, I took on a task to replace this system’s usage of IdentityServer3 with IdentityServer4, but still use the existing Marten-backed data storage for user membership information.

Great, but there’s just one problem. I’ve never used IdentityServer4 before and it changed somewhat between the IdentityServer3 code I was trying to reverse engineer and its current model. I ended up getting through that work just fine. A key element of doing that was using the Alba library to create a test harness so I could iterate through configuration changes quickly by rerunning tests on the new IdentityServer4 project. It didn’t start out this way, but Alba is essentially a wrapper around the ASP.Net TestServer and just acts as a utility to make it easier to write automated tests around the HTTP services in your web service projects.

I ended up starting two new .Net projects:

  1. A new web service that hosts IdentityServer4 and is configured to use user membership information from our client’s existing Marten/Postgresql database
  2. A new xUnit.Net project to hold integration tests against the new IdentityServer4 web service.

Let’s dive right into how I set up Alba and xUnit.Net as an automated test harness for our new IdentityServer4 service. If you start a new ASP.Net project with one of the built in project templates, you’ll get a Program file that’s the main entry point for the application and a Startup class that has most of the system’s bootstrapping configuration. The templates will generate this method that’s used to configure the IHostBuilder for the application:

public static IHostBuilder CreateHostBuilder(string[] args)
{
    return Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
}

For more information on what the role of the IHostBuilder is within your application, see .NET Generic Host in ASP.NET Core.

That’s important, because that gives us the ability to stand up the application exactly as it’s configured in an automated test harness. Switching to the new xUnit.Net test project, I referenced my new web service project that will host IdentityServer4. Because spinning up your ASP.Net system can be relatively expensive, I only want to do that once and share the IHost between tests. That’s a perfect usage for xUnit.Net’s shared context support.

First I make what will be the shared test fixture context class for the integration tests shown below:

public class AppFixture : IDisposable
{
    public AppFixture()
    {
        // This is calling into the actual application's
        // configuration to stand up the application in 
        // memory
        var builder = Program.CreateHostBuilder(new string[0]);

        // This is the Alba "SystemUnderTest" wrapper
        System = new SystemUnderTest(builder);
    }
    
    public SystemUnderTest System { get; }

    public void Dispose()
    {
        System?.Dispose();
    }
}

The Alba SystemUnderTest wrapper is responsible for building the actual IHost object for your system, and does so using the in memory TestServer in place of Kestrel.

Just as a convenience, I like to create a base class for integration tests I tend to call IntegrationContext:

    public abstract class IntegrationContext 
        // This is some xUnit.Net mechanics
        : IClassFixture<AppFixture>
    {
        public IntegrationContext(AppFixture fixture)
        {
            System = fixture.System;
            
            // Just casting the IServiceProvider of the underlying
            // IHost to a Lamar IContainer as a convenience
            Container = (IContainer)fixture.System.Services;
        }
        
        public IContainer Container { get; }
        
        // This gives the tests access to run
        // Alba "Scenarios"
        public ISystemUnderTest System { get; }
    }

We’re using Lamar as the underlying IoC container in this application, and I wanted to use Lamar-specific IoC diagnostics in the tests, so I expose the main Lamar container off of the base class as just a convenience.
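
Just as an example of what I mean by Lamar-specific diagnostics, something like this test (the class here is mine, not from the actual project) will dump Lamar’s WhatDoIHave() report of the container’s registrations to the test output:

public class diagnostics_sample : IntegrationContext
{
    private readonly ITestOutputHelper _output;

    public diagnostics_sample(AppFixture fixture, ITestOutputHelper output) : base(fixture)
    {
        _output = output;
    }

    [Fact]
    public void see_what_is_registered()
    {
        // WhatDoIHave() is Lamar's textual report of all
        // the service registrations in the container
        _output.WriteLine(Container.WhatDoIHave());
    }
}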

To finally turn to the tests, the very first thing to try with IdentityServer4 was to hit the descriptive discovery endpoint, just to see if the application was bootstrapping correctly and IdentityServer4 was functional *at all*. I started a new test class with this declaration:

    public class EndToEndTests : IntegrationContext
    {
        private readonly ITestOutputHelper _output;
        private IDocumentStore theStore;

        public EndToEndTests(AppFixture fixture, ITestOutputHelper output) : base(fixture)
        {
            _output = output;

            // I'm grabbing the Marten document store for the app
            // to set up user information later
            theStore = Container.GetInstance<IDocumentStore>();
        }

And then a new test just to exercise the discovery endpoint:

[Fact]
public async Task can_get_the_discovery_document()
{
    var result = await System.Scenario(x =>
    {
        x.Get.Url("/.well-known/openid-configuration");
        x.StatusCodeShouldBeOk();
    });
    
    // Just checking out what's returned
    _output.WriteLine(result.ResponseBody.ReadAsText());
}

The test above is pretty crude. All it does is try to hit the `/.well-known/openid-configuration` url in the application and see that it returns a 200 OK HTTP status code.

I tend to run tests while I’m coding by using keyboard shortcuts. Most IDEs support some kind of “re-run the last test” keyboard shortcut. Using that, my preferred workflow is to run the test once, then assuming that the test is failing the first time, work in a tight cycle of making changes and constantly re-running the test(s). This turned out to be invaluable as it took me a couple iterations of code changes to correctly re-create the old IdentityServer3 configuration into the new IdentityServer4 configuration.

Moving on to doing a simple authentication, I wrote a test like this one to exercise the system with known credentials:

[Fact]
public async Task get_a_token()
{
    // All I'm doing here is wiping out the existing database state
    // with Marten and using a little helper method to add a new
    // user to the database as part of the test setup
    // This would obviously be different in your system
    theStore.Advanced.Clean.DeleteAllDocuments();
    await theStore
        .CreateUser("aquaman@justiceleague.com", "dolphin", "System Administrator");
    
    
    var body =
        "client_id=something&client_secret=something&grant_type=password&scope=ourscope%20profile%20openid&username=aquaman@justiceleague.com&password=dolphin";

    // The test would fail here if the status
    // code was NOT 200
    var result = await System.Scenario(x =>
    {
        x.Post
            .Text(body)
            .ContentType("application/x-www-form-urlencoded")
            .ToUrl("/connect/token");
    });

    // As a convenience, I made a new class called ConnectData
    // just to deserialize the IdentityServer4 response into 
    // a .Net object
    var token = result.ResponseBody.ReadAsJson<ConnectData>();
    token.access_token.Should().NotBeNull();
    token.token_type.Should().Be("Bearer");
}

public class ConnectData
{
    public string access_token { get; set; }
    public int expires_in { get; set; }
    public string token_type { get; set; }
    public string scope { get; set; }
    
}

Now, this test took me several iterations to work through until I found exactly the right way to configure IdentityServer4 and adjusted our custom Marten-backed identity store (IResourceOwnerPasswordValidator and IProfileService in IdentityServer4 world) until the tests passed. I found it extremely valuable to be able to debug right into the failing tests as I worked, and I even needed to take advantage of JetBrains Rider’s capability to debug through external code to understand how IdentityServer4 itself worked. I’m very sure that I was able to get through this work much faster by iterating through tests as opposed to just running the application and driving it through something like Postman or through the connected user interface.