JasperFx OSS Plans for .Net 6 (Marten et al)

I’m going to have to admit that I got caught flat-footed by the .Net 6 release a couple weeks ago. I hadn’t really been paying much attention to the forthcoming changes, maybe got cocky about how easy the transition from netcoreapp3.1 to .Net 5 was, and have been unpleasantly surprised by how much work it’s going to take to move some OSS projects up to .Net 6. All this at the same time that the advanced users of the world are clamoring for all their dependencies to target .Net 6 yesterday.

All that being said, here’s my running list of plans to get the projects in the JasperFx GitHub organization successfully targeting .Net 6. I’ll make edits to this page as things get published to Nuget.

Baseline

Baseline is a grab bag utility library full of extension methods that I’ve relied on for years. Nobody uses it directly per se, but it’s a dependency of just about every other project in the organization, so it went first with the 3.2.2 release adding a .Net 6 target. No code changes were necessary other than adding .Net 6 to the CI testing. Easy money.

Oakton

EDIT: Oakton v4.0 is up on Nuget. WebApplication is supported, but with this model you can’t override configuration in commands like you can with HostBuilder. I’ll do a follow up at some point to fill in this gap.

Oakton is a tool to add extensible command line options to .Net applications based on the HostBuilder model. Oakton is my problem child right now because it’s a dependency in several other projects and its current model does not play nicely with the new WebApplicationBuilder approach for configuring .Net 6 applications. I’d also like to get the Oakton documentation website moved to the VitePress + MarkdownSnippets model we’re using now for Marten and some of the other JasperFx projects. I think I’ll take a shortcut here and publish the Nuget and let the documentation catch up later.
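To give a quick flavor of Oakton’s model, a custom command looks something like this minimal sketch. The names here (HelloInput, HelloCommand) are hypothetical; Oakton discovers command classes like this and exposes them through the command line:

using System;
using Oakton;

// Hypothetical input type. By Oakton's conventions, properties
// suffixed with "Flag" become optional command line flags
public class HelloInput
{
    public string NameFlag { get; set; } = "World";
}

// Invoked as "dotnet run -- hello --name Somebody"
public class HelloCommand : OaktonCommand<HelloInput>
{
    public override bool Execute(HelloInput input)
    {
        Console.WriteLine($"Hello, {input.NameFlag}!");

        // Returning false signals failure with a non-zero exit code
        return true;
    }
}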

Alba

Alba is an automated testing helper for ASP.Net Core. Just like Oakton, Alba worked very well with the HostBuilder model, but was thrown for a loop by the new WebApplicationBuilder configuration model that’s the mechanism for using the new Minimal API (*cough* inevitable Sinatra copy *cough*) model. Fortunately though, Hawxy came through with a big pull request to make Alba finally work with the WebApplicationFactory model that can accommodate the new WebApplicationBuilder model, so we’re back in business. Alba 5.1 will be published soon with that work after some documentation updates and hopefully some testing of the Oakton + WebApplicationBuilder + Alba combination.

EDIT: Alba 5.1 is up with the necessary changes, but the docs will come later this week

Lamar

Lamar is an IoC/DI container and the modern successor to StructureMap. The biggest issues with Lamar on .Net 6 were Nuget dependencies on the IServiceCollection model, plus needing some extra implementation to light up the implied service model of Minimal APIs. All the current unit tests and even the integration tests with ASP.Net Core are passing on .Net 6. What’s left to finish up a new Lamar 7.0 release:

  • One .Net 6 related bug in the diagnostics
  • Better Minimal API support
  • Upgrade Oakton & Baseline dependencies in some of the Lamar projects
  • Documentation updates for the new IAsyncDisposable support and for usage with WebApplicationBuilder, with or without Minimal APIs

Marten/Weasel

We just made the gigantic V4 release a couple months ago knowing that we’d have to follow up quickly with a V5 release with a few breaking changes to accommodate .Net 6 and the latest version of Npgsql. We are having to make a full point release, so that opens the door for other breaking changes that didn’t make it into V4 (don’t worry, I think shifting from V4 to V5 will be easy for most people). The other Marten core team members have been doing most of the work for this so far, but I’m going to jump into the fray later this week to do some last minute changes:

  • Review some internal changes to Npgsql that might have performance impacts on Marten
  • Consider adding an event streaming model within the new V4 async daemon for folks who want to publish events to some kind of transport (Kafka? Some kind of queue?) with strict ordering. This won’t be much yet, but it keeps coming up, so we might as well consider it.
  • Multi-tenancy through multiple databases. It keeps coming up, and potentially causes breaking API changes, so we’re at least going to explore it

I’m trying not to hold up the Marten V5 release with .Net 6 support for too long, so this is all either happening really fast or not at all. I’ll blog more later this week about multi-tenancy & Marten.

Weasel is a spin off library from Marten for database change detection and ADO.Net helpers that are reused in other projects now. It will be published simultaneously with Marten.

Jasper

Oh man, I’d love, love, love to have Jasper 2.0 done by early January so that it’ll be available for usage at my company on some upcoming work. This work is on hold while I deal with the other projects, my actual day job, and family and stuff.

Rebooting Jasper

Jasper is a long running OSS passion project of mine. As it is now, Jasper is a command processing tool similar to Brighter or MassTransit that can be used as either an in memory mediator tool (like a superset of Mediatr) or as a service bus framework for asynchronous messaging. Jasper was originally conceived as a way to recreate the “good” parts of FubuMVC like low code ceremony, minimal intrusion of the framework into application code, and effective usage of the Russian Doll model for the execution pipeline. At the same time though, I wanted Jasper to improve upon the earlier FubuMVC architecture by maximizing performance, minimizing object allocations, simplifying configuration and bootstrapping, and making it much easier for developers to troubleshoot runtime issues.

I actually did cut a Jasper V1 release early in the COVID-19 pandemic, but otherwise dropped it to focus on Marten and stopped paying any attention to it. With Marten V4 in the books, I’m going back to working on Jasper for a little bit. For right now, I’m thinking that the Jasper V2 work is something like this:

  1. Upgrading all the dependencies and targeting .Net 5/6 (basically done)
  2. Benchmarking and optimizing the core runtime internals. Sometimes the best way to improve a codebase is to step away from it for quite a bit and come back in with fresh perspectives. There are also some significant lessons from Marten V4 that might apply to Jasper
  3. Build in Open Telemetry tracing through Jasper’s pipeline. Really all about getting me up to speed on distributed tracing.
  4. Support the AsyncAPI standard (Swagger for asynchronous messaging). I’m really interested in this, but haven’t taken much time to dive into it yet
  5. Wire compatibility with NServiceBus so a Jasper app can talk bi-directionally with an NServiceBus app
  6. Same with MassTransit. If I decide to pursue Jasper seriously, I’d have to do that to have any shot at using Jasper at work
  7. More transport options. Right now there’s a Kafka & Pulsar transport option stuck in PR purgatory from another contributor. Again, a learning opportunity.
  8. Optimize the heck out of the Rabbit MQ usage.
  9. Go over the usability of the configuration. To be honest here, I’m less than thrilled with our MassTransit usage and the hoops you have to jump through to bend it to our will, and I’d like to see if I could do better with Jasper
  10. Improve the documentation website (if I’m serious about Jasper)
  11. Play with some kind of Jasper/Azure Functions integration. No idea what that would look like, but the point is to go learn more about Azure Functions
  12. Maybe, but a low priority — I have a working version of FubuMVC style HTTP endpoints in Jasper already. With everybody all excited about the new Minimal API stuff in ASP.Net Core v6, I wouldn’t mind showing a slightly different approach

Integrating Marten and Jasper

Maybe the single biggest reason for me to play more with Jasper is to explore some deeper integration with Marten for some more complicated CQRS and event sourcing architectural problems. Jasper already has an outbox/inbox pattern implementation for Marten. Going farther, I’d like to have out of the box solutions for:

  • Event streaming from Marten to message queues using Jasper
  • An alternative to Marten’s async daemon using Kafka/Pulsar topics
  • Using Jasper to host Marten’s asynchronous projections in a way that distributes work across running nodes
  • Experimenting more with CQRS architectures using Jasper + Marten

Anyway, I’m jotting this down mostly for me, but I’m absolutely happy for any kind of feedback or maybe to see if anyone else would be interested in helping with Jasper development.

Self Diagnosing Deployments with Oakton and Lamar

So here’s the deal: sometimes, somehow, you deploy a new version of a system into a testing, staging, or production environment and it doesn’t work. Shocking, and sometimes distressing, when that happens, right?

There are any number of problems that could be the cause. Maybe a database is in an invalid state, maybe a file got missed in the deployment, maybe some bit of configuration is wrong, maybe a downstream or upstream collaborating system is down or unreachable. Who knows, right? Well, what if we could make our systems self-diagnosing so they can do a quick rundown of how they’re configured and running right at start up time and fail fast if something is detected to be wrong with the deployment? And also have the system tell us exactly what is wrong through first causes without having to later trace runtime failures to the ultimate root cause?

To that end, let’s talk about some mechanisms in both the Oakton and Lamar libraries to quickly add in what I like to call “environment checks”: self diagnosing checks that run on system startup to test out system configuration and validity. Oakton can run environment checks either through a separate diagnostic command in your application, or at system start up time, where it makes a detailed report of any failures and stops the process with a non-zero error code. In turn, a continuous deployment script should be able to detect the failure to start the system and roll back to a previously known, good state.

In the past, I’ve worked in teams where we’ve embedded environment checks to:

  • Verify that a configured database is reachable
  • Ping an external web service dependency
  • Check for the existence of required files
  • Verify that an expected COM dependency was registered (shudder, there’s some bad memories in that one)
  • Test that security features are correctly configured and usable — and I think that’s a big one
  • Assert that the system has the ability to read and write configured file system directories — also some serious scar tissue

The point here is to make your deployments fail fast anytime there’s an environmental misconfiguration, and do so in a way that makes it easy for you to spot exactly what’s wrong. Environment checks can help teams avoid system down time and keep testers from blowing up the defect lists with false negatives from misconfigured systems.

Getting Started with Oakton

Just to get started, I spun up a new .Net 5 web service with dotnet new webapi. That template gives you this code in the Program.Main() method:

        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }

I’m going to add a Nuget reference to Oakton, then change that method up above so that Oakton is handling the command line parsing and system startup like this:

        // It's important to return Task<int> so that
        // Oakton can signal failures with non-zero
        // return codes
        public static Task<int> Main(string[] args)
        {
            return CreateHostBuilder(args).RunOaktonCommands(args);
        }

There’s still nothing going on in our web service, but from the command line we could run dotnet run -- check-env just to start up our application’s IHost and run all the environment checks in a test mode. Nothing there yet, so you’d get output like this:

   ___            _      _
  / _ \    __ _  | | __ | |_    ___    _ __
 | | | |  / _` | | |/ / | __|  / _ \  | '_ \
 | |_| | | (_| | |   <  | |_  | (_) | | | | |
  \___/   \__,_| |_|\_\  \__|  \___/  |_| |_|

No environment checks.
All environment checks are good!

We can add environment checks directly to our application with Oakton’s built in mechanisms like this example that just validates that there’s an appsettings.json file in the application path:

        public void ConfigureServices(IServiceCollection services)
        {
            // Literally just proving out that appsettings.json is available
            services.CheckThatFileExists("appsettings.json");
            
            // Other registrations
        }

And now that same dotnet run -- check-env command gives us this output:

   ___            _      _
  / _ \    __ _  | | __ | |_    ___    _ __
 | | | |  / _` | | |/ / | __|  / _ \  | '_ \
 | |_| | | (_| | |   <  | |_  | (_) | | | | |
  \___/   \__,_| |_|\_\  \__|  \___/  |_| |_|

   1.) Success: File 'appsettings.json' exists

Running Environment Checks ---------------------------------------- 100%

All environment checks are good!

One of the other things that Oakton does is intercept the basic dotnet run command with extra options, so we can use the --check flag like so:

dotnet run -- --check

That command is going to:

  1. Bootstrap and start the IHost for your application
  2. Load and execute all the registered environment checks for your application
  3. Report the status of each environment check to the console
  4. Fail the executable if any environment checks fail

In my tiny sample app I’m building here, the output of that call is this:

C:\code\JasperSamples\EnvironmentChecks\WebApplication\WebApplication>dotnet run -- --check
Building...
   1.) Success: File 'appsettings.json' exists

Running Environment Checks ---------------------------------------- 100%

info: Microsoft.Hosting.Lifetime[0]
      Now listening on: https://localhost:5001
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://localhost:5000
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
      Content root path: C:\code\JasperSamples\EnvironmentChecks\WebApplication\WebApplication

At deployment time, that call is probably just [your application name] --check to start the application.

Environment Checks with Lamar

Now that we’ve got the basics for environment checks built into our new web service with Oakton, I want to introduce Lamar as the DI/IoC tool for the application. I’ll add a Nuget reference to Lamar.Microsoft.DependencyInjection to my project. First, to make Lamar the IoC container for my new service, I’ll add this line of code to the Program.CreateHostBuilder() method:

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                
                // Make Lamar the application container
                .UseLamar()
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });

Now, I’m going to go a little farther and add a reference to the Lamar.Diagnostics Nuget as well. That adds some Lamar specific diagnostic commands to our command line options, but also allows us to add Lamar container checks at startup like this:

        public void ConfigureServices(IServiceCollection services)
        {
            // Literally just proving out that appsettings.json is available
            services.CheckThatFileExists("appsettings.json");
            
            // Do a full check of the Lamar configuration
            // and run Lamar environment checks too!
            services.CheckLamarConfiguration();
        }

Running our dotnet run -- check-env command again gives us:

   ___            _      _
  / _ \    __ _  | | __ | |_    ___    _ __
 | | | |  / _` | | |/ / | __|  / _ \  | '_ \
 | |_| | | (_| | |   <  | |_  | (_) | | | | |
  \___/   \__,_| |_|\_\  \__|  \___/  |_| |_|

   1.) Success: File 'appsettings.json' exists
   2.) Success: Lamar IoC Service Registrations
   3.) Success: Lamar IoC Type Scanning

Running Environment Checks ---------------------------------------- 100%

All environment checks are good!

The Lamar container checks are a little heavyweight, so watch out for that, because you might want to dial them back. Those checks run through every single service registration in Lamar, verify that all the dependencies exist, and try to build every registration at least once.
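If you do want to dial them back, one cheap option is to only register the heavyweight container checks outside of local development. A quick sketch, assuming Startup has an IWebHostEnvironment injected into its constructor as _environment:

            // Only pay for the full container validation outside
            // of local development
            if (!_environment.IsDevelopment())
            {
                services.CheckLamarConfiguration();
            }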

Next though, let’s look at how Lamar lets us plug in its own form of environment checks. On any concrete class that is built by Lamar, you can directly embed environment checks with methods like this fake service:

    public class SometimesMisconfiguredService
    {
        private readonly IConfiguration _configuration;

        public SometimesMisconfiguredService(IConfiguration configuration)
        {
            _configuration = configuration;
        }

        [ValidationMethod]
        public void Validate() // The method name does not matter
        {
            var connectionString = _configuration.GetConnectionString("database");
            using var conn = new SqlConnection(connectionString);
            
            // Just try to connect to the configured database
            conn.Open();
        }
    }

The method name above doesn’t matter. All that Lamar needs to see is the [ValidationMethod] attribute. See Lamar’s Environment Tests for a little more information about using this feature. And I don’t really need to “do” anything other than throw an exception if the check logically fails.
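Following the same pattern, here’s another sketch along the lines of the file system checks I mentioned earlier. “UploadPath” is a hypothetical configuration key, and the method would live on a Lamar-built service just like the one above:

        [ValidationMethod]
        public void CanWriteToUploadDirectory()
        {
            var directory = _configuration["UploadPath"];
            var probeFile = Path.Combine(directory, ".write-probe");

            // Throws if the directory is missing or read-only
            File.WriteAllText(probeFile, "ok");
            File.Delete(probeFile);
        }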

And I’ll register that service in the container in the Startup.ConfigureServices() method like so:

            services.AddSingleton<SometimesMisconfiguredService>();

Now, going back to the application, I’ll try to start it up with the environment check flag active — but I haven’t configured a database connection string or even attempted to spin up a database at all, so this should fail fast. And it does with a call to dotnet run -- --check:

   1.) Success: File 'appsettings.json' exists
   2.) Failed: Lamar IoC Service Registrations

ERROR: Lamar.IoC.ContainerValidationException: Error in WebApplication.SometimesMisconfiguredService.Validate()

AND A LOT OF STACK TRACE VERBIAGE FROM THE FAILURES

To get at just the Lamar container validation, you can also use dotnet run -- lamar-validate. IoC container tools are a dime a dozen in .Net, and many of them are perfectly competent. When I’m asked “why use Lamar?”, my stock answer is to use Lamar for its diagnostic capabilities.
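Those diagnostics are also available programmatically if you’d rather exercise them from a unit test. A small sketch using Lamar’s container diagnostics (IWidget/Widget are hypothetical):

using Lamar;
using Microsoft.Extensions.DependencyInjection;

// Build a container directly in a test
var container = new Container(x =>
{
    x.AddSingleton<IWidget, Widget>();
});

// Textual report of every service registration
Console.WriteLine(container.WhatDoIHave());

// Throws with a detailed report if any registration is invalid
container.AssertConfigurationIsValid();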

Marten Takes a Giant Leap Forward with the Official V4 Release!

Starting next week I’ll be doing some more deep dives into new Marten V4 improvements and some more involved sample usages.

Today I’m very excited to announce the official release of Marten V4.0! The Nugets just went live, and we’ve published our completely revamped project website at https://martendb.io.

This has been at least a two year journey of significant development effort by the Marten core team and quite a few contributors, preceded by several years of brainstorming within the Marten community about the improvements realized by this release. There’s plenty more to do in the Marten backlog, but I think this V4 release puts Marten on a very solid technical foundation for the long term future.

This was a massive effort, and I’d like to especially thank the other core team members: Oskar Dudycz, for answering so many user questions and being the champion for our event sourcing feature set, and Babu Annamalai, for the newly improved website and all our grown up DevOps infrastructure. Their contributions over the years and especially on this giant release have been invaluable.

I’d also like to thank:

  • JT, for taking on the nullability sweep and many other things
  • Ville Häkli, who might have accidentally become our best tester and helped us discover and deal with several issues along the way
  • Julien Perignon and his team, for their patience and help with the Marten V4 shakedown cruise
  • Barry Hagan, for starting the ball rolling with Marten’s new, expanded metadata collection
  • Raif Atef, for several helpful bug reports and some related fixes
  • Simon Cropp, for several pull requests and doing some dirty work
  • Kasper Damgård, for a lot of feedback on Linq queries and memory usage
  • Adam Barclay, for helping us improve Marten’s multi-tenancy support and its usability

and many others who raised actionable issues, gave us feedback, and even made code contributions. Keeping in mind that I personally grew up on a farm in the middle of nowhere in the U.S., it’s a little mind-blowing to me to work on a project of this magnitude that at a quick glance included contributors from at least five continents on this release.

One of my former colleagues at Calavista likes to ask prospective candidates for senior architect roles what project they’ve done that they’re the most proud of. I answered “Marten” at the time, but I think I mean that even more now.

What Changed in this Release?

To quote the immortal philosopher Ferris Bueller:

The question isn’t “what are we going to do,” the question is “what aren’t we going to do?”

We did try to write up a list of breaking changes for V4 in the migration guide, but here are some highlights:

  • We generally made a huge sweep of the Marten code internals looking for every possible opportunity to reduce object allocations and dictionary lookups for low level performance improvements. The new dynamic code generation approach in Marten helped get us to that point.
  • We think Marten is even easier to bootstrap in new projects with improvements to the IServiceCollection.AddMarten() extensions
  • Marten supports System.Text.Json — but use that with some caution of course
  • The Linq support took a big step forward with a near rewrite, filling in some missing support for better querying through child collections as a big example (see the sketch after this list). The Linq support is now much more modular, and we think that will help us continue to grow that support. It’s a small thing, but the Linq parsing was even optimized a little bit for performance
  • Event Sourcing in Marten got a lot of big improvements whose absence had been holding up adoption by some users, especially in regards to the asynchronous projection support. The “async daemon” was completely rewritten and is now much easier to incorporate into .Net systems.
  • By popular user request, Marten supports many more options for tracking flexible metadata like correlation ids and even user defined headers in both document and event storage
  • Multi-tenancy support was improved
  • Soft delete support got some additional usability features
  • PLv8 adoption has been a stumbling block, so all the features related to PLv8 were removed to a separate add-on library called Marten.PLv8
  • The schema management features in Marten made some significant strides and should be able to handle more scenarios with less manual intervention — we think/hope/let’s just be positive for now
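To make the child collection querying point above concrete, here’s a quick sketch with a hypothetical Order/LineItem document model (session below is an open Marten query session):

// Hypothetical document model
public class Order
{
    public Guid Id { get; set; }
    public List<LineItem> Items { get; set; } = new();
}

public class LineItem
{
    public string Sku { get; set; }
    public int Quantity { get; set; }
}

// Querying through a child collection with Marten's Linq support
var orders = await session
    .Query<Order>()
    .Where(x => x.Items.Any(i => i.Sku == "ABC-123"))
    .ToListAsync();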

What’s Next for Marten?

Full point OSS releases inevitably bring a barrage of user reported errors, questions about migrating, possibly confusing wording in new documentation, and lots of queries about some significant planned features we just couldn’t fit into this already giant release. For that matter, we’ll probably have to quickly spin out a V5 release for .Net 6 and Npgsql 6 because there are breaking changes coming due to those dependencies. OSS projects are never finished, only abandoned, and there’ll be a world of things to take care of in the aftermath of 4.0 — but for right now, Don’t Steal My Sunshine!

A sample environment check for OIDC authenticated web services

EDIT 8/25: Make sure you’re using Oakton 3.2.2 if you try to reproduce this sample because, of course, I found and fixed an Oakton bug in this functionality in the process of writing the post :(

I think that what I’m calling an “environment check” here is a very valuable and underutilized technique in software development today. I had a good reason to stop and create one today to troubleshoot an issue I stubbed my toe on, so here’s a blog post about environment checks.

One of the things on my plate at work right now is prototyping out the usage of Identity Server 5 as our new identity provider across our enterprise. Today I was working with a spike that needed to run a pair of ASP.Net Core applications:

  1. A web application that used the Duende.BFF library and JWT bearer tokens as its server side authentication mechanism.
  2. The actual identity server application

All I was trying to do was to run both applications and try to log into secure portions of the web application. No luck though; I was getting weird looking exceptions about missing configuration. I eventually debugged through the ASP.Net Core OIDC authentication innards and found out that my issue was that the web application was configured with the wrong authority Url for the identity server service.

That turned out to be a very simple problem to fix, but the exception message was completely unrelated to the actual error, and that’s the kind of problem that’s unfortunately likely to come up again as this turns into a real solution that’s separately deployed in several different environments later (local development, CI, testing, staging, production, you know the drill).

As a preventative measure — or at least a diagnostic tool — I stopped and added an environment check into the application just to assert that yes, it can successfully reach the configured OIDC authority at the Url where we expect it to be. Mechanically, I added Oakton as the command line runner for the application. After that, I utilized Oakton’s facility for adding and executing environment checks to a .Net Core or .Net 5+ application.

Inside the Startup.ConfigureServices() method for the main web application, I have this code to configure the server-side authentication (sorry, I know this is a lot of code):

            services.AddBff()
                .AddServerSideSessions();

            // This is specific to this application. This is just 
            // a cheap way of doing strong typed configuration
            // without using the ugly IOptions<T> stuff
            var settings = Configuration.Get<OidcSettings>();
            
            // configure server-side authentication and session management
            services.AddAuthentication(options =>
                {
                    options.DefaultScheme = "cookie";
                    options.DefaultChallengeScheme = "oidc";
                    options.DefaultSignOutScheme = "oidc";
                })
                .AddCookie("cookie", options =>
                {
                    // host prefixed cookie name
                    options.Cookie.Name = settings.CookieName;

                    // strict SameSite handling
                    options.Cookie.SameSite = SameSiteMode.Strict;
                })
                .AddOpenIdConnect("oidc", options =>
                {
                    // This is the root Url for the OIDC authority
                    options.Authority = settings.Authority;

                    // confidential client using code flow + PKCE + query response mode
                    options.ClientId = settings.ClientId;
                    options.ClientSecret = settings.ClientSecret;
                    options.ResponseType = "code";
                    options.ResponseMode = "query";
                    options.UsePkce = true;

                    options.MapInboundClaims = true;
                    options.GetClaimsFromUserInfoEndpoint = true;

                    // save access and refresh token to enable automatic lifetime management
                    options.SaveTokens = true;

                    // request scopes
                    options.Scope.Clear();
                    options.Scope.Add("openid");
                    options.Scope.Add("profile");
                    options.Scope.Add("api");

                    // request refresh token
                    options.Scope.Add("offline_access");
                });

The main thing I want you to note in the code above is that the Url for the OIDC identity service is bound from the raw configuration to the OidcSettings.Authority property. Now that we know where the authority Url comes from, formulating an environment check takes just this code, again within the Startup.ConfigureServices() method:

            // This extension method registers an environment check with Oakton
            services.CheckEnvironment<HttpClient>($"Is the expected OIDC authority at {settings.Authority} running", async (client, token) =>
            {
                var document = await client.GetDiscoveryDocumentAsync(settings.Authority, cancellationToken: token);
                
                if (document == null)
                {
                    throw new Exception("Unable to load the OIDC discovery document. Is the OIDC authority server running?");
                }
                else if (document.IsError)
                {
                    throw document.Exception;
                }
            });
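For completeness, the OidcSettings class being bound above is nothing special. A minimal sketch just needs properties matching the configuration keys used earlier:

// Strong typed settings bound with Configuration.Get<OidcSettings>()
public class OidcSettings
{
    public string Authority { get; set; }
    public string CookieName { get; set; }
    public string ClientId { get; set; }
    public string ClientSecret { get; set; }
}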

Now, if the authority Url is misconfigured, or the OIDC identity service is not running, I can quickly diagnose that issue by running this command from the main web application directory:

dotnet run -- check-env

If I run this command while the identity service is down, I’ll get this lovely output:

Stop, dogs, stop! The light is red!

Hopefully, I can tell from the description and the exception message that the identity service that is supposed to be at https://localhost:5010 is not responsive. In this case it was because I had purposely turned it off to show you the failure, so let’s turn the identity service on and try again with dotnet run -- check-env:

Go, dogs, go! It’s green ahead!

As part of turning this over to real developers to make into real, production worthy code, we’ll document the dotnet run -- check-env tooling for detecting errors. More effectively though, with Oakton the command line call will return a non-zero error code if the environment checks fail, so it’s useful to run this command in automated deployments as a “fail fast” check on the environment that can point to specifically where the application is misconfigured or external dependencies are unreachable.

As another alternative, Oakton can run these environment checks for you at application startup time by using the -c or --check flag when the application executable is executed.

Lamar v6 is out! Expanded interception options, overriding test services, better documentation

Lamar is a modern IoC container library that is optimized for usage within .Net Core / .Net 5/6 applications and natively implements the .Net DI abstractions (AddTransient/Singleton/Scoped() etc). Lamar was specifically built to be a much higher performance version of the old StructureMap library with similar usage for relatively easy migration.

Lamar v6.0.0 went live today. It’s not a very large release at all, but I took this opportunity to tuck some previously public types into being internal to clean up your Intellisense and eliminated some obsolete public members. I don’t anticipate any code changes for the vast majority of Lamar users upgrading.

The highlights of this release are:

  • Lamar’s documentation website got a big facelift as it moved from an old Bootstrap theme with my old stdocs tool to using Vitepress and mdsnippets. I also did a sweep to remove old StructureMap nomenclature and replace that with the newer Lamar nomenclature that very purposely mimics the .Net DI abstraction terminology.
  • New functionality to reliably override service registrations at container creation time. This was specifically meant for automated integration testing where you may want to override specific service registrations. This hasn’t been a straightforward thing in the past because the generic host builder applies service registrations from Startup classes last no matter what, and that defeated many efforts to override services in testing.
  • AaronAllBright contributed an important pull request for some missing activation features. That was expanded into a full blown interception and activation feature set to go along with the support for decorators in Lamar v1+. That’s been a frequent concern for folks looking to move off of StructureMap and one of the biggest remaining features in Lamar’s backlog, so mark that one off.
  • A couple bug fixes, but I think this is the only big one.
  • I stopped and modernized the devops a bit to replace the old Rake build with Bullseye & SimpleExec. I also moved the CI from AppVeyor to GitHub actions. I wouldn’t say that either move was a big deal, but it is nice having the CI information right in the GitHub website.

Alba v5.0 is out! Easy recipes for integration testing ASP.Net web services

Alba is a small library to facilitate easier integration testing of web services built on the ASP.Net Core platform. It’s more or less a wrapper around the ASP.Net TestServer, but does a lot to make testing much easier than writing low level code against TestServer. You would use Alba from within xUnit.Net or NUnit testing projects.

I outlined a proposal for Alba V5 a couple weeks back, and today I was able to publish the new Nuget. The big changes are:

  • The Alba documentation website was rebuilt with Vitepress and refreshed to reflect V5 changes
  • The old SystemUnderTest type was renamed AlbaHost to make the terminology more inline with ASP.Net Core
  • The Json serialization is done through the configured input and output formatters within your ASP.Net Core application — meaning that Alba works just fine regardless of whether your application uses Newtonsoft.Json or System.Text.Json for its Json serialization.
  • There is a new extension model that was added specifically to support testing web services that are authenticated with JWT bearer tokens.

For security, you can opt into:

  1. Stub out all of the authentication and use hard-coded claims
  2. Add a valid JWT to every request with user-defined claims
  3. Let Alba deal with live OIDC integration by handling the JWT integration with your OIDC identity server

Testing web services secured by JWT tokens with Alba v5

We’re working toward standing up a new OIDC infrastructure built around Identity Server 5, with a couple gigantic legacy monolith applications and potentially dozens of newer microservices needing to use this new identity server for authentication. We’ll have to have a good story for running our big web applications that will have this new identity server dependency at development time, but for right now I just want to focus on an automated testing strategy for our newer ASP.Net Core web services using the Alba library.

First off, Alba is a helper library for integration testing HTTP API endpoints in .Net Core systems. Alba wraps the ASP.Net Core TestServer while providing quite a few convenient helpers for setting up and verifying HTTP calls against your ASP.Net Core services. We will shortly be introducing Alba into my organization at MedeAnalytics as a way of doing much more integration testing at the API layer (think the middle layer of any kind of testing pyramid concept).

In my previous post I laid out some plans and proposals for a quickly forthcoming Alba v5 release, with the biggest improvement being a new model for being able to stub out OIDC authentication for APIs that are secured by JWT bearer tokens (I think I win programming bingo for that sentence!).

Before I show code, I should say that all of this code is in the v5 branch of Alba on GitHub, but not yet released as it’s very heavily in flight.

To start, I’m assuming that you have a web service project, then a testing library for that web service project. In your web application, bearer token authentication is set up something like this inside your Startup.ConfigureServices() method:

services.AddAuthentication("Bearer")
    .AddJwtBearer("Bearer", options =>
    {
        // A real application would pull all this information from configuration
        // of course, but I'm hardcoding it in testing
        options.Audience = "jwtsample";
        options.ClaimsIssuer = "myapp";
        
        // don't worry about this, our JwtSecurityStub is gonna switch it off in
        // tests
        options.Authority = "https://localhost:5001";

        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateAudience = false,
            IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes("some really big key that should work"))
        };
    });

And of course, you also have these lines of code in your Startup.Configure() method to add in ASP.Net Core middleware for authentication and authorization:

app.UseAuthentication();
app.UseAuthorization();

With these lines of setup code, you will not be able to hit any secured HTTP endpoint in your web service unless there is a valid JWT token in the Authorization header of the incoming HTTP request. Moreover, with this configuration your service would need to make calls to the configured bearer token authority (https://localhost:5001 above). It’s going to be awkward and probably very brittle to depend on having the identity server spun up and running locally when our developers try to run API tests. It would obviously be helpful if there was a quick way to stub out the bearer token authentication in testing to automatically supply known claims so our developers can focus on developing their individual service’s functionality.

That’s where Alba v5 comes in with its new JwtSecurityStub extension that will:

  1. Disable any validation interactions with an external OIDC authority
  2. Automatically add a valid JWT token to any request being sent through Alba
  3. Give developers fine-grained control over the claims attached to any specific request if there is logic that will vary by claim values

To demonstrate this new Alba functionality, let’s assume that you have a testing project that has a direct reference to the web service project. The direct project reference is important because you’ll want to spin up the “system under test” in a test fixture like this:

    public class web_api_authentication : IDisposable
    {
        private readonly IAlbaHost theHost;

        public web_api_authentication()
        {
            // This is calling your real web service's configuration
            var hostBuilder = Program.CreateHostBuilder(new string[0]);

            // This is a new Alba v5 extension that can "stub" out
            // JWT token authentication
            var jwtSecurityStub = new JwtSecurityStub()
                .With("foo", "bar")
                .With(JwtRegisteredClaimNames.Email, "guy@company.com");

            // AlbaHost was "SystemUnderTest" in previous versions of
            // Alba
            theHost = new AlbaHost(hostBuilder, jwtSecurityStub);
        }

        // Clean up the system under test when the fixture is disposed
        public void Dispose()
        {
            theHost?.Dispose();
        }
    }

I was using xUnit.Net in this sample, but Alba is agnostic about the actual testing library and we’ll use both NUnit and xUnit.Net at work.

In the code above I’ve bootstrapped the web service with Alba and attached the JwtSecurityStub. I’ve also established some baseline claims that will be added to every JWT token on all Alba scenario requests. The AlbaHost extends the IHost interface you’re already used to in .Net Core, but adds the important Scenario() method that you can use to run HTTP requests all the way through your entire application stack like this test:

        [Fact]
        public async Task post_to_a_secured_endpoint_with_jwt_from_extension()
        {
            // Building the input body
            var input = new Numbers
            {
                Values = new[] {2, 3, 4}
            };

            var response = await theHost.Scenario(x =>
            {
                // Alba deals with Json serialization for us
                x.Post.Json(input).ToUrl("/math");
                
                // Enforce that the HTTP Status Code is 200 Ok
                x.StatusCodeShouldBeOk();
            });

            var output = response.ResponseBody.ReadAsJson<Result>();
            output.Sum.ShouldBe(9);
            output.Product.ShouldBe(24);
        }

You’ll notice that I did absolutely nothing in regards to JWT set up or claims or anything. That’s because the JwtSecurityStub is taking care of everything for you. It’s:

  1. Reaching into your application’s bootstrapping to pluck out the right signing key so that it builds JWT token strings that can be validated with the right signature
  2. Turning off any external token validation using an external OIDC authority
  3. Placing a unique, unexpired JWT token on each request that matches the issuer and authority configuration of your application

Now, to further control the claims used on any individual scenario request, you can use this new method in Scenario tests:

        [Fact]
        public async Task can_modify_claims_per_scenario()
        {
            var input = new Numbers
            {
                Values = new[] {2, 3, 4}
            };

            var response = await theHost.Scenario(x =>
            {
                // This is a custom claim that would only be used for the 
                // JWT token in this individual test
                x.WithClaim(new Claim("color", "green"));
                x.Post.Json(input).ToUrl("/math");
                x.StatusCodeShouldBeOk();
            });

            var principal = response.Context.User;
            principal.ShouldNotBeNull();
            
            principal.Claims.Single(x => x.Type == "color")
                .Value.ShouldBe("green");
        }

I’ve got plenty of more ground to cover in how we’ll develop locally with our new identity server strategy, but I’m feeling pretty good about having a decent API testing strategy. All of this code is just barely written, so any feedback you might have would be very timely. Thanks for reading about some brand new code!

Proposal for Alba v5: JWT secured APIs, more JSON options, and other stuff

Just typed up a master GitHub issue for Alba v5, and I’ll take any possible feedback anywhere, but I’d prefer you make requests on GitHub. Here’s the GitHub issue I just banged out, copied and pasted for anybody interested:

I’m dipping into Alba over the next couple work days to add some improvements that we want at MedeAnalytics to test HTTP APIs that are going to be secured with JWT Bearer Token authentication that’s ultimately backed up by IdentityServer5. I’ve done some proof of concepts off to the side on how to deal with that in Alba tests with or without the actual identity server up and running, so I know it’s possible.

However, when I started looking into how to build a more formal “JWT Token” add on to Alba and catching up with GitHub issues on Alba, it’s led me to the conclusion that it’s time for a little overhaul of Alba as part of the JWT work that’s turning into a proposed Alba v5 release.

Here’s what I’m thinking:

  • Ditch all support < .Net 5.0 just because that makes things easier on me maintaining Alba. Selfishness FTW.
  • New Alba extension model, more on this below
  • Revamp the Alba bootstrapping so that it’s just an extension method hanging off the end of IHostBuilder that will create a new ITestingHost interface for the existing SystemUnderTest class. More on this below.
  • Decouple Alba’s JSON facilities from Newtonsoft.Json. A couple folks have asked if you can use System.Text.Json with Alba so far, so this time around we’ll try to dig into the application itself and find the JSON formatter from the system and make all the JSON serialization “just” use that instead of recreating anything
  • Enable easy testing of gRPC endpoints like we do for classic JSON services

DevOps changes

  • Replace the Rake build script with Bullseye
  • Move the documentation website from stdocs to VitePress with a similar set up as the new Marten website
  • Retire the existing AppVeyor CI (that works just fine), and move to GitHub actions for CI, but add a new action for publishing Nugets

Bootstrapping Changes

Today you use code like this to bootstrap Alba against your ASP.Net Core application:

var system = new SystemUnderTest(WebApi.Program.CreateHostBuilder(new string[0]));

where Alba’s SystemUnderTest intercepts an IHostBuilder, adds some additional configuration, swaps out Kestrel for the ASP.Net Core TestServer, and starts the underlying IHost.
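Under the covers, the important mechanics are roughly something like this simplified sketch (not Alba’s actual code; UseTestServer() comes from the Microsoft.AspNetCore.TestHost package):

// Swap Kestrel out for the in-memory TestServer, then start the IHost
var host = WebApi.Program.CreateHostBuilder(new string[0])
    .ConfigureWebHost(web => web.UseTestServer())
    .Start();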

In Alba v5, I’d like to change this a little bit to being an extension method hanging off of IHostBuilder so you’d have something more like this:

var host = WebApi.Program.CreateHostBuilder(new string[0]).StartAlbaHost();

// or

var host = await WebApi.Program.CreateHostBuilder(new string[0]).StartAlbaHostAsync();

The host above would be this new interface (still the old SystemUnderTest under the covers):

public interface IAlbaHost : IHost
{
    Task<ScenarioResult> Scenario(Action<Scenario> configure);

    // various methods to register actions to run before or after any scenario is executed
}

Extension Model

Right now the only extension I have in mind is one for JWT tokens, but it probably goes into a separate Nuget, so here you go.

public interface ITestingHostExtension : IDisposable, IAsyncDisposable
{
    // Make any necessary registrations to the actual application
    IHostBuilder Configure(IHostBuilder builder);

    // Spin it up if necessary before any scenarios are run, also gives
    // an extension a chance to add any BeforeEach() / AfterEach() kind
    // of hooks to the Scenario or HttpContext
    ValueTask Start(ITestingHost host);
}

My thought is that ITestingHostExtension objects would be passed into the StartAlbaHost(params ITestingHostExtension[] extensions) method from above.
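In usage, that would look something like this sketch, with a hypothetical JwtTokensExtension standing in for a real extension:

var host = WebApi.Program.CreateHostBuilder(new string[0])
    .StartAlbaHost(new JwtTokensExtension());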

JWT Bearer Token Extension

This will probably need to be in a separate Alba.JwtTokens Nuget library.

  • Extension method on Scenario to add a bearer token to the Authorization header. Small potatoes.
  • Helper of some sort to fetch JWT tokens from an external identity server and use that token on further scenario runs. I’ve got this prototyped already
  • A new Alba extension that acts as a stubbed identity server. More below:

For the stubbed in identity server, what I’m thinking is that a registered IdentityServerStub extension would:

  • Spin up a small fake OIDC web application hosted in memory w/ Kestrel that responds to the basic endpoints to validate JWT tokens.
  • With this extension, Alba will quietly stick a JWT token into the Authorization header of each request so you can test API endpoints that are secured by bearer tokens
  • Reaches into the system under test’s IHostBuilder and sets the authority Url to the local Url of the stubbed OIDC server (I’ve already successfully prototyped this one)
  • Also reaches into the system under test to find the JwtBearerOptions as configured and uses the signing keys that are already configured for the application for the JWT generation. That should make the setup of this quite a bit simpler
  • The new JWT stub extension can be configured with baseline claims that would be part of any generated JWT
  • A new extension method on Scenario to add or override claims in the JWT tokens on a per-scenario basis to support fine-grained authorization testing in your API tests
  • An extension method on Scenario to disable the JWT header altogether so you can test out authentication failures
  • Probably need a facility of some sort to override the JWT expiration time as well

Dynamic Code Generation in Marten V4

Marten V4 extensively uses runtime code generation backed by Roslyn runtime compilation for dynamic code. This is much more powerful than source generators in terms of what it allows us to actually do, but it can have significant memory usage and “cold start” problems (that seems to depend on exact configurations, so it’s not a given that you’ll hit these issues). In this post I’ll show the facility we have to “generate ahead” the code to dodge the memory and cold start issues at production time.

Before V4, Marten had used the common model of building up Expression trees and compiling them into lambda functions at runtime to “bake” some of our dynamic behavior into fast running code. That does work and a good percentage of the development tools you use every day probably use that technique internally, but I felt that we’d already outgrown the dynamic Expression generation approach as it was, and the new functionality requested in V4 was going to substantially raise the complexity of what we were going to need to do.
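If you haven’t seen that technique before, it looks something like this generic illustration (not Marten’s actual code):

using System;
using System.Linq.Expressions;

// Build an Expression tree by hand, then compile it into a delegate
var doc = Expression.Parameter(typeof(string), "doc");
var body = Expression.Call(
    doc,
    typeof(string).GetMethod(nameof(string.ToUpperInvariant), Type.EmptyTypes));

var transform = Expression.Lambda<Func<string, string>>(body, doc).Compile();

Console.WriteLine(transform("marten")); // prints "MARTEN"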

Instead, I (Marten is a team effort, but I get all the blame for this one) opted to use the dynamic code generation and compilation approach using LamarCodeGeneration and LamarCompiler that I’d originally built for other projects. This allowed Marten to generate much more complex code than I thought was practical with other models (we could have used IL generation too of course, but that’s an exercise in masochism). If you’re interested, I gave a talk about these tools and the approach at NDC London 2019.

I do think this has worked out in terms of performance improvements at runtime and certainly helped to be able to introduce the new document and event store metadata features in V4, but there’s a catch. Actually two:

  1. The Roslyn compiler sucks down a lot of memory sometimes and doesn’t seem to ever release it. It’s gotten better with newer releases and it’s not consistent, but still.
  2. There’s a sometimes significant lag in cold start scenarios on the first time Marten needs to generate and compile code at runtime

What we could do though, is provide what I call...

Marten’s “Generate Ahead” Strategy

To sidestep the problems with the Roslyn compilation, I developed a model (I did this originally in Jasper) to generate the code ahead of time and have it compiled into the entry assembly for the system. The last step is to direct Marten to use the pre-compiled types instead of generating the types at runtime.

Jumping straight into a sample console project to show off this functionality, I’m configuring Marten with the AddMarten() method you can see in this code on GitHub.

The important line of code you need to focus on here is this flag:

opts.GeneratedCodeMode = TypeLoadMode.LoadFromPreBuiltAssembly;

This flag in the Marten configuration directs Marten to first look in the entry assembly of the application for any types that it would normally try to generate at runtime, and if the type exists, load it from the entry assembly and bypass any invocation of Roslyn. I think in a real application you’d wrap that call in something like this so that it only applies when the application is running in production mode:

if (Environment.IsProduction())
{
    options.GeneratedCodeMode = 
        TypeLoadMode.LoadFromPreBuiltAssembly;
}

The next thing to notice is that I have to tell Marten ahead of time about all the possible document types and even about any compiled query types in this code so that Marten will “know” what code to generate in the next section. The compiled query registration is new, but you already had to let Marten know about the document types to make the schema migration functionality work anyway.
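For illustration, that up-front registration looks something like this inside the AddMarten() configuration. This is just a sketch: User and Order are hypothetical document types, and there’s a corresponding registration for compiled query types:

services.AddMarten(opts =>
{
    // connectionString is assumed to come from configuration
    opts.Connection(connectionString);

    // Pre-register every document type so Marten knows
    // exactly what storage code to generate ahead of time
    opts.RegisterDocumentType<User>();
    opts.RegisterDocumentType<Order>();
});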

Generating and exporting the code ahead of time is done from the command line through an Oakton command. First though, add the LamarCodeGeneration.Commands Nuget to your entry project, which will also add a transitive reference to Oakton if you’re not already using it. This is all described in the Oakton getting started page, but you’ll need to change your Program.Main() method slightly to activate Oakton:

        // The return value needs to be Task<int> 
        // to communicate command success or failure
        public static Task<int> Main(string[] args)
        {
            return CreateHostBuilder(args)
                
                // This makes Oakton be your CLI
                // runner
                .RunOaktonCommands(args);
        }

Open up the command line terminal of your preference at the root directory of the entry project and type this command to see the available Oakton commands:

dotnet run -- help

That’ll spit out a list of commands and the assemblies where Oakton looked for command types. You should see output similar to this:

Searching 'LamarCodeGeneration.Commands, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' for commands
Searching 'Marten.CommandLine, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' for commands

  -------------------------------------------------
    Available commands:
  -------------------------------------------------
        check-env -> Execute all environment checks against the application
          codegen -> Utilities for working with LamarCodeGeneration and LamarCompiler

Assuming you got that far, now type dotnet run -- help codegen to see the list of options for the codegen command.

If you just want to preview the generated code in the console, type:

dotnet run -- codegen preview

To just verify that the dynamic code can be successfully generated and compiled, use:

dotnet run -- codegen test

To actually export the generated code, use:

dotnet run -- codegen write

That command will write a single C# file at /Internal/Generated/DocumentStorage.cs for any document storage types and another at /Internal/Generated/Events.cs for the event storage and projections.

Just as a shortcut to clear out any old generated code, you can use:

dotnet run -- codegen delete

If you’re curious, the generated code — and remember that it’s generated code so it’s going to be butt ugly — is going to look like this for the document storage, and this code for the event storage and projections.

The way that I see this being used is something like this:

  • The LoadFromPreBuiltAssembly option is only turned on in Production mode, leaving it disabled at development time so that developers can iterate at will.
  • As part of the CI/CD process for a project, you’ll run the dotnet run -- codegen write command as an initial step, then proceed to the normal compilation and testing cycles. That bakes the generated code right into the compiled assemblies and enables you to also take advantage of any kind of AOT compiler optimizations

Duh. Why not source generators, doofus?

Why didn’t we use source generators you may ask? The Roslyn-based approach in Marten is both much better and much worse than source generators. Source generators are part of the compilation process itself and wouldn’t have any kind of cold start problem like Marten has with the runtime Roslyn compilation approach. That part is obviously much better, plus there’s no weird two step compilation process at CI/CD time. But on the downside, source generators can only use information that’s available at compilation time, and the Marten code generation relies very heavily on type reflection, conventions applied at runtime, and configuration built up through a fluent interface API. I do not believe that we could use source generators for what we’ve done in Marten because of that dependency on runtime information.