JasperFx OSS Plans for .Net 6 (Marten et al)

I’m going to have to admit that I got caught flat-footed by the .Net 6 release a couple weeks ago. I hadn’t really been paying much attention to the forthcoming changes, maybe got cocky by how easy the transition from netcoreapp3.1 to .Net 5 was, and have been unpleasantly surprised by how much work it’s going to take to move some OSS projects up to .Net 6. All at the same time that the advanced users of the world are clamoring for all their dependencies to target .Net 6 yesterday.

All that being said, here’s my running list of plans to get the projects in the JasperFx GitHub organization successfully targeting .Net 6. I’ll make edits to this page as things get published to Nuget.

Baseline

Baseline is a grab bag utility library full of extension methods that I’ve relied on for years. Nobody uses it directly per se, but it’s a dependency of just about every other project in the organization, so it went first with the 3.2.2 release adding a .Net 6 target. No code changes were necessary other than adding .Net 6 to the CI testing. Easy money.
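To give a flavor of what Baseline provides, here’s a tiny, hedged sketch. The extension method shown is meant to be representative of Baseline’s style rather than an exact catalog of its API, so treat the method name as illustrative:

    using Baseline;

    public static class BaselineFlavor
    {
        public static string Describe(string text)
        {
            // Baseline adds readable helpers over the usual
            // string.IsNullOrEmpty() style of checks
            return text.IsNotEmpty()
                ? "has content"
                : "blank";
        }
    }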

Oakton

EDIT: Oakton v4.0 is up on Nuget. WebApplication is supported, but you can’t override configuration in commands with this model like you can with the HostBuilder-only approach. I’ll do a follow up at some point to fill in this gap.

Oakton is a tool to add extensible command line options to .Net applications based on the HostBuilder model. Oakton is my problem child right now because it’s a dependency in several other projects and its current model does not play nicely with the new WebApplicationBuilder approach for configuring .Net 6 applications. I’d also like to get the Oakton documentation website moved to the VitePress + MarkdownSnippets model we’re using now for Marten and some of the other JasperFx projects. I think I’ll take a shortcut here and publish the Nuget and let the documentation catch up later.
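For context, this is roughly what Oakton’s HostBuilder integration looks like from an application’s entry point. A minimal sketch, assuming the RunOaktonCommands() extension that Oakton exposes on IHostBuilder:

    using System.Threading.Tasks;
    using Microsoft.Extensions.Hosting;
    using Oakton;

    public class Program
    {
        // Oakton takes over the entry point, so "dotnet run -- help"
        // or custom commands can be dispatched from the command line
        public static Task<int> Main(string[] args)
        {
            return Host.CreateDefaultBuilder(args)
                .ConfigureServices(services =>
                {
                    // normal application service registrations
                })
                .RunOaktonCommands(args);
        }
    }

The friction with WebApplicationBuilder is that this model wants to own when and how the IHost gets built and run, and that doesn’t map cleanly onto the WebApplicationBuilder bootstrapping style.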

Alba

Alba is an automated testing helper for ASP.Net Core. Just like Oakton, Alba worked very well with the HostBuilder model, but was thrown for a loop by the new WebApplicationBuilder configuration model that’s the mechanism for using the new Minimal API (*cough* inevitable Sinatra copy *cough*) model. Fortunately though, Hawxy came through with a big pull request to make Alba finally work with the WebApplicationFactory model that can accommodate the new WebApplicationBuilder model, so we’re back in business. Alba 5.1 will be published soon with that work after some documentation updates and hopefully some testing with the Oakton + WebApplicationBuilder + Alba combination.

EDIT: Alba 5.1 is up with the necessary changes, but the docs will come later this week

Lamar

Lamar is an IoC/DI container and the modern successor to StructureMap. The biggest issue with Lamar on .Net 6 was Nuget dependencies on the IServiceCollection model, plus needing some extra implementation to light up the implied service model of Minimal APIs. All the current unit tests and even integration tests with ASP.Net Core are passing on .Net 6. What’s left to finish up a new Lamar 7.0 release:

  • One .Net 6 related bug in the diagnostics
  • Better Minimal API support
  • Upgrade Oakton & Baseline dependencies in some of the Lamar projects
  • Documentation updates for the new IAsyncDisposable support and usage with WebApplicationBuilder with or without Minimal API usage (see the bootstrapping sketch below)
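For the WebApplicationBuilder usage mentioned in that last bullet, here’s a minimal sketch of the intended bootstrapping. UseLamar() is Lamar’s existing IHostBuilder extension; reaching it through WebApplicationBuilder.Host is the assumption here:

    using Lamar.Microsoft.DependencyInjection;
    using Microsoft.AspNetCore.Builder;

    var builder = WebApplication.CreateBuilder(args);

    // Swap the default DI container for Lamar before services are registered
    builder.Host.UseLamar();

    var app = builder.Build();

    // A Minimal API endpoint whose dependencies are resolved by Lamar
    app.MapGet("/", () => "Hello from a Lamar-composed application");

    app.Run();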

Marten/Weasel

We just made the gigantic V4 release a couple months ago knowing that we’d have to follow up quickly with a V5 release with a few breaking changes to accommodate .Net 6 and the latest version of Npgsql. We are having to make a full point release, so that opens the door for other breaking changes that didn’t make it into V4 (don’t worry, I think shifting from V4 to V5 will be easy for most people). The other Marten core team members have been doing most of the work for this so far, but I’m going to jump into the fray later this week to do some last minute changes:

  • Review some internal changes to Npgsql that might have performance impacts on Marten
  • Consider adding an event streaming model within the new V4 async daemon for folks that wanna use that to publish events to some kind of transport (Kafka? Some kind of queue?) with strict ordering. This won’t be much yet, but it keeps coming up so we might as well consider it.
  • Multi-tenancy through multiple databases. It keeps coming up, and potentially causes breaking API changes, so we’re at least going to explore it

I’m trying not to slow down the Marten V5 release with .Net 6 support for too long, so this is all either happening really fast or not at all. I’ll blog more later this week about multi-tenancy & Marten.

Weasel is a spin off library from Marten for database change detection and ADO.Net helpers that are reused in other projects now. It will be published simultaneously with Marten.

Jasper

Oh man, I’d love, love, love to have Jasper 2.0 done by early January so that it’ll be available for usage at my company on some upcoming work. This work is on hold while I deal with the other projects, my actual day job, and family and stuff.

Marten Takes a Giant Leap Forward with the Official V4 Release!

Starting next week I’ll be doing some more deep dives into new Marten V4 improvements and some more involved sample usages.

Today I’m very excited to announce the official release of Marten V4.0! The Nugets just went live, and we’ve published our completely revamped project website at https://martendb.io.

This has been at least a two year journey of significant development effort by the Marten core team and quite a few contributors, preceded by several years of brainstorming within the Marten community about the improvements realized by this release. There’s plenty more to do in the Marten backlog, but I think this V4 release puts Marten on a very solid technical foundation for the long term future.

This was a massive effort, and I’d like to especially thank the other core team members Oskar Dudycz for answering so many user questions and being the champion for our event sourcing feature set, and Babu Annamalai for the newly improved website and all our grown up DevOps infrastructure. Their contributions over the years and especially on this giant release have been invaluable.

I’d also like to thank:

  • JT for taking on the nullability sweep and many other things
  • Ville Häkli, who might have accidentally become our best tester and helped us discover and deal with several issues along the way
  • Julien Perignon and his team for their patience and help with the Marten V4 shakedown cruise
  • Barry Hagan for starting the ball rolling with Marten’s new, expanded metadata collection
  • Raif Atef for several helpful bug reports and some related fixes
  • Simon Cropp for several pull requests and doing some dirty work
  • Kasper Damgård for a lot of feedback on Linq queries and memory usage
  • Adam Barclay for helping us improve Marten’s multi-tenancy support and its usability

and many others who raised actionable issues, gave us feedback, and even made code contributions. Keeping in mind that I personally grew up on a farm in the middle of nowhere in the U.S., it’s a little mind-blowing to me to work on a project of this magnitude that at a quick glance included contributors from at least five continents on this release.

One of my former colleagues at Calavista likes to ask prospective candidates for senior architect roles what project they’ve done that they’re the most proud of. I answered “Marten” at the time, but I think I mean that even more now.

What Changed in this Release?

To quote the immortal philosopher Ferris Bueller:

The question isn’t ‘what are we going to do’, the question is ‘what aren’t we going to do?’

We did try to write up a list of breaking changes for V4 in the migration guide, but here’s some highlights:

  • We generally made a huge sweep of the Marten code internals looking for every possible opportunity to reduce object allocations and dictionary lookups for low level performance improvements. The new dynamic code generation approach in Marten helped get us to that point.
  • We think Marten is even easier to bootstrap in new projects with improvements to the IServiceCollection.AddMarten() extensions (see the sketch after this list)
  • Marten supports System.Text.Json — but use that with some caution of course
  • The Linq support took a big step forward with a near rewrite, filling in missing capabilities like better querying through child collections. The Linq support is now much more modular, and we think that will help us continue to grow that support. It’s a small thing, but the Linq parsing was even optimized a little bit for performance
  • Event Sourcing in Marten got a lot of big improvements that were holding up adoption by some users, especially in regards to the asynchronous projection support. The “async daemon” was completely rewritten and is now much easier to incorporate into .Net systems.
  • As a big user request, Marten supports many more options for tracking flexible metadata like correlation ids and even user defined headers in both document and event storage
  • Multi-tenancy support was improved
  • Soft delete support got some additional usability features
  • PLv8 adoption has been a stumbling block, so all the features related to PLv8 were removed to a separate add-on library called Marten.PLv8
  • The schema management features in Marten made some significant strides and should be able to handle more scenarios with less manual intervention — we think/hope/let’s just be positive for now
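As a concrete example of the bootstrapping improvements called out above, here’s a minimal sketch of the IServiceCollection.AddMarten() usage (the connection string is a placeholder, and this shows only a sliver of the available options):

    using Marten;
    using Microsoft.Extensions.DependencyInjection;

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            // AddMarten() registers IDocumentStore, IQuerySession, and
            // IDocumentSession with the application's DI container
            services.AddMarten(opts =>
            {
                // placeholder connection string
                opts.Connection("Host=localhost;Database=app;Username=user;Password=secret");
            });
        }
    }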

What’s Next for Marten?

Full point OSS releases inevitably bring a barrage of user reported errors, questions about migrating, possibly confusing wording in new documentation, and lots of queries about some significant planned features we just couldn’t fit into this already giant release. For that matter, we’ll probably have to quickly spin out a V5 release for .Net 6 and Npgsql 6 because there are breaking changes coming due to those dependencies. OSS projects are never finished, only abandoned, and there’ll be a world of things to take care of in the aftermath of 4.0 — but for right now, Don’t Steal My Sunshine!

Efficient Web Services with Marten V4

We’re genuinely close to finally pulling the damn trigger on Marten V4. One of the last things I’m coding for the release is a new recipe for users to write very efficient web services backed by Marten database storage.

A Traditional .Net Approach

Before I get into the exact mechanics of that, let’s set the stage a little bit and draw some contrasts with a more traditional .Net web service stack. Let’s start by saying that in that traditional stack, you’re not using any kind of CQRS and the system state is persisted in the “one true model” approach using Entity Framework Core and an RDBMS like Sql Server. Now, in some web services you need to query data from the database and serve up a subset of that data into the outgoing HTTP response in a web service endpoint. The typical flow — with a focus on what’s happening under the covers — would be to:

  1. ASP.Net Core receives an HTTP GET, finds the proper endpoint, and calls the right handler for that route. Not that it actually matters, but let’s assume the endpoint handler is an MVC Core Controller method
  2. ASP.Net Core invokes a call to your DI container to build up the MVC controller object, and calls the right method for the route
  3. You’ve been to .Net conferences and internalized the idea that an MVC controller shouldn’t get too big or do things besides HTTP mediation, so you’re delegating to a tool like MediatR. MediatR itself is going to go through another DI container service resolution to find the right handler for the input model, then invoke that handler
  4. EF Core issues a query against your Sql Server database. If you’re needing to fetch data on more than just the root aggregate model, the query is going to be an outer join against all the child tables too
  5. EF Core loops around in the database rows and creates objects for your .Net domain model classes based on its ORM mappings
  6. You certainly don’t want to send the raw domain model on the wire because of coupling concerns, or don’t want every bit of data exposed to the client in a particular web service, so you use some kind of tool like AutoMapper to transform the internal domain model objects built up by EF Core into Data Transfer Objects (DTO) purposely designed to go over the wire.
  7. Lastly, you return the outgoing DTO model, which is serialized to JSON and sent down the HTTP response by MVC Core

Sound pretty common? That’s also a lot of overhead. A lot of memory allocations, data mappings between structures, JSON serialization, and a lot of dictionary lookups just to get data out of the database and spit it out into the HTTP response. It’s also a non-trivial amount of code, and I’d argue that some of the tools I mentioned are high ceremony.
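To make that sequence a little more concrete, here’s a compressed sketch of the kind of code involved. Every type here is a hypothetical stand-in, registration and error handling are elided, and the numbered comments map back to the steps above:

    using System;
    using System.Collections.Generic;
    using System.Threading;
    using System.Threading.Tasks;
    using AutoMapper;
    using MediatR;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.EntityFrameworkCore;

    // Hypothetical domain model, DTO, and DbContext for illustration
    public class Issue
    {
        public Guid Id { get; set; }
        public string Description { get; set; }
        public List<Note> Notes { get; set; } = new();
    }

    public class Note
    {
        public string Text { get; set; }
    }

    public class IssueDto
    {
        public Guid Id { get; set; }
        public string Description { get; set; }
    }

    public class IssuesDbContext : DbContext
    {
        public DbSet<Issue> Issues => Set<Issue>();
    }

    // Steps 1-3: the controller mostly mediates HTTP concerns and
    // immediately delegates to MediatR
    public class IssueController : ControllerBase
    {
        private readonly IMediator _mediator;

        public IssueController(IMediator mediator)
        {
            _mediator = mediator;
        }

        [HttpGet("/issue/{issueId}")]
        public Task<IssueDto> Get(Guid issueId)
        {
            return _mediator.Send(new GetIssueQuery(issueId));
        }
    }

    public record GetIssueQuery(Guid IssueId) : IRequest<IssueDto>;

    // Steps 4-6: the handler queries EF Core and maps entities to a DTO
    public class GetIssueHandler : IRequestHandler<GetIssueQuery, IssueDto>
    {
        private readonly IssuesDbContext _db;
        private readonly IMapper _mapper;

        public GetIssueHandler(IssuesDbContext db, IMapper mapper)
        {
            _db = db;
            _mapper = mapper;
        }

        public async Task<IssueDto> Handle(GetIssueQuery query, CancellationToken token)
        {
            // The Include() is what forces the outer join against the child table
            var issue = await _db.Issues
                .Include(x => x.Notes)
                .FirstOrDefaultAsync(x => x.Id == query.IssueId, token);

            // AutoMapper reshapes the domain model into the wire-friendly DTO;
            // step 7 is MVC Core serializing IssueDto to JSON on the way out
            return _mapper.Map<IssueDto>(issue);
        }
    }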

Now do CQRS!

I initially thought of CQRS as looking like a whole lot more work to code, and that’s not an uncommon impression when folks are first introduced to it. I’ve come to realize over time that it’s not really more work so much as it’s really just doing about the same amount of work in different places and different times in the application’s workflow.

Now let’s at least introduce CQRS into our application architecture. I’m not saying that that automatically implies using event sourcing, but let’s say that you are writing a pre-built “read side” model of the state of your system directly to a database of some sort. Now from that same web service I was describing before, you just need to fetch that persisted “read side” model from the database and spit that state right out to the HTTP response.

Now then, I’ve just yada, yada’d all the complexity of the CQRS architecture that continuously updates the read side view for you, but hey, Marten does that for you too and that can be a shortly forthcoming follow up blog post.

Finally bringing Marten V4 into play, let’s say our read side model for an issue tracking system looks like this:

    public class Note
    {
        public string UserName { get; set; }
        public DateTime Timestamp { get; set; }
        public string Text { get; set; }
    }

    public class Issue
    {
        public Guid Id { get; set; }
        public string Description { get; set; }
        public bool Open { get; set; }

        public IList<Note> Notes { get; set; }
    }

Before anyone gets bent out of shape by this, it’s perfectly possible to tell Marten to serialize the persisted documents to JSON with camel or even snake casing to be more idiomatic JSON or Javascript friendly.
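A minimal sketch of that serializer configuration, assuming the UseDefaultSerialization() option names from the Marten documentation:

    using Marten;

    var store = DocumentStore.For(opts =>
    {
        // placeholder connection string
        opts.Connection("Host=localhost;Database=app;Username=user;Password=secret");

        // Serialize persisted documents with camel cased member names;
        // Casing.SnakeCase is the other option mentioned above
        opts.UseDefaultSerialization(casing: Casing.CamelCase);
    });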

Now, let’s build out two controller endpoints, one that gives you an Issue payload by searching by its id, and a second endpoint that gives you all the open Issue models in a single JSON array payload. That controller — using some forthcoming functionality in a new Marten.AspNetCore Nuget — looks like this:

    public class GetIssueController: ControllerBase
    {
        private readonly IQuerySession _session;

        public GetIssueController(IQuerySession session)
        {
            _session = session;
        }

        [HttpGet("/issue/{issueId}")]
        public Task Get(Guid issueId)
        {
            // This "streams" the raw JSON to the HttpResponse
            // w/o ever having to read the full JSON string or
            // deserialize/serialize within the HTTP request
            return _session.Json
                .WriteById<Issue>(issueId, HttpContext);
        }

        [HttpGet("/issue/open")]
        public Task OpenIssues()
        {
            // This "streams" the raw JSON to the HttpResponse
            // w/o ever having to read the full JSON string or
            // deserialize/serialize within the HTTP request
            return _session.Query<Issue>()
                .Where(x => x.Open)
                .WriteArray(HttpContext);
        }
    }

In the GET: /issue/{issueId} endpoint, you’ll notice the call to the new IQuerySession.Json.WriteById() extension method, and how I’m passing to it the current HttpContext. That method is:

  1. Executing a database query against the underlying Postgresql database. And in this case, all of the data is stored in a single column in a single row, so there are no JOINs or sparse datasets like there would be with an ORM querying an object that has child collections.
  2. Writing the raw bytes of the persisted JSON data directly to the HttpResponse.Body without ever bothering to read the whole thing into a .Net string and definitely without having to incur the overhead of JSON deserialization/serialization. That extension method also sets the HTTP content-length and content-type response headers, as well as setting the HTTP status code to 200 if the document is found or 404 if it is not.

In the second HTTP endpoint for GET: /issue/open, the call to WriteArray(HttpContext) is doing something very similar, but writing the results as a JSON array.

By no means is this technique going to be applicable to every HTTP GET endpoint, but when it is, this is far, far more efficient and simpler to code than the more traditional approach that involves all the extra pieces and just has so many more memory allocations and hardware operations going on just to shoot some JSON down the wire.

For a little more context, here’s a test against the /issue/{issueId} endpoint, with a cameo from Alba to help me test the HTTP behavior:

        [Fact]
        public async Task stream_a_single_document_hit()
        {
            var issue = new Issue {Description = "It's bad", Open = true};

            var store = theHost.Services.GetRequiredService<IDocumentStore>();
            using (var session = store.LightweightSession())
            {
                session.Store(issue);
                await session.SaveChangesAsync();
            }

            var result = await theHost.Scenario(s =>
            {
                s.Get.Url($"/issue/{issue.Id}");
                s.StatusCodeShouldBeOk();
                s.ContentTypeShouldBe("application/json");
            });

            var read = result.ReadAsJson<Issue>();

            read.Description.ShouldBe(issue.Description);
        }

and one more test showing the “miss” behavior:

        [Fact]
        public async Task stream_a_single_document_miss()
        {
            await theHost.Scenario(s =>
            {
                s.Get.Url($"/issue/{Guid.NewGuid()}");
                s.StatusCodeShouldBe(404);
            });
        }

Feedback anyone?

This code isn’t even released yet even in an RC Nuget, so feel very free to make any kind of suggestion or send us feedback.

Integration Testing: IHost Lifecycle with NUnit

Starting yesterday, all of my content about automated testing is curated under the new Automated Testing page on this site.

I kicked off a new blog series yesterday with Integration Testing: IHost Lifecycle with xUnit.Net. I started by just discussing how to manage the lifecycle of a .Net IHost inside of an xUnit.Net testing library. I used xUnit.Net because I’m much more familiar with that library, but we mostly use NUnit for our testing at MedeAnalytics, so I’m going to see how the IHost lifecycle I discussed and demonstrated last time in xUnit.Net could work in NUnit.

To catch you up from the previous post, I have two projects:

  1. An ASP.Net Core web service creatively named WebApplication. This web service has a small endpoint that allows you to post an array of numbers that returns a response telling you the sum and product of those numbers. The code for that controller action is shown in my previous post.
  2. A second testing project using NUnit that references WebApplication. The testing project is going to use Alba for integration testing at the HTTP layer.

With NUnit, I chose to use the SetUpFixture construct to manage and share the IHost across the test suite like this:

    [SetUpFixture]
    public class Application
    {
        // Make this lazy so you don't build it out
        // when you don't need it.
        private static readonly Lazy<IAlbaHost> _host;

        static Application()
        {
            _host = new Lazy<IAlbaHost>(() => Program
                .CreateHostBuilder(Array.Empty<string>())
                .StartAlba());
        }

        public static IAlbaHost AlbaHost => _host.Value;

        // I want to expose the underlying Lamar container for some later
        // usage
        public static IContainer Container => (IContainer)_host.Value.Services;

        // Make sure that NUnit will shut down the AlbaHost when
        // all the tests are finished
        [OneTimeTearDown]
        public void Teardown()
        {
            if (_host.IsValueCreated)
            {
                _host.Value.Dispose();
            }
        }
    }

With the IHost instance managed by the Application class above, I can consume the Alba host in an NUnit test like this:

    public class sample_integration_fixture
    {
        [Test]
        public async Task happy_path_arithmetic()
        {
            // Building the input body
            var input = new Numbers
            {
                Values = new[] {2, 3, 4}
            };

            var response = await Application.AlbaHost.Scenario(x =>
            {
                // Alba deals with Json serialization for us
                x.Post.Json(input).ToUrl("/math");
                
                // Enforce that the HTTP Status Code is 200 Ok
                x.StatusCodeShouldBeOk();
            });

            var output = response.ReadAsJson<Result>();
            output.Sum.ShouldBe(9);
            output.Product.ShouldBe(24);
        }
    }

And now a couple notes about what I did in Application:

  1. I think it’s important to create the IHost lazily, so that you don’t incur the cost of spinning up the IHost when you might be running other tests in your suite that don’t need the IHost. Rapid developer feedback is important, and that’s an awfully easy optimization that could pay off.
  2. The static Teardown() method is decorated with the `[OneTimeTearDown]` attribute to direct NUnit to call that method after all the tests are executed. I cannot stress enough how important it is to clean up resources in your test harness to ensure your ability to quickly iterate through subsequent test runs.
  3. NUnit has a very different model for parallelization than xUnit.Net, and it’s completely “opt in”, so I think there’s less to worry about on that front with NUnit.

At this point I don’t think I have a hard opinion about xUnit.Net vs. NUnit, and I certainly wouldn’t bother switching an existing project from one to the other (even though I’ve done that plenty of times in the past). I haven’t thought this one through enough, but I still think that xUnit.Net is a little bit cleaner for unit testing, while NUnit might be better for integration testing because it gives you finer grained control over fixture lifecycle and has some built in support for test timeouts and retries. At one point I had high hopes for Fixie as another alternative, and that project has become active again, but it would have a long road to challenge either of the two now mainstream tools.

What’s Next?

This series is meant to support my colleagues at MedeAnalytics, so it’s driven by what we just happen to be talking about at any given point. Tomorrow I plan to put out a little post on some Lamar-specific tricks that are helpful in integration testing. Beyond that, I think dealing with database state is the most important thing we’re missing at work, so that needs to be a priority.

Integration Testing: IHost Lifecycle with xUnit.Net

I’m part of an initiative at work to analyze and ultimately improve our test automation practices. As part of that work, I’ll be blogging quite a bit about test automation starting with my brain dump on test automation last week and my most recent post on mocks and stubs last month. From here on out, I’m curating all of my posts and selected writings from other folks on my new Automated Testing page.

I’m already on record as saying that the generic host (IHost) in recent versions of .Net is one of the best things that’s ever happened to the .Net ecosystem. In my previous post I stated that I strongly prefer having the system under test running in process with the test harness for faster feedback cycles and easier debugging. The generic host builder introduced in .Net Core turns out to be a very effective way to bootstrap your system within automated test harnesses.

Before I dive into how to use the IHost in automated testing, here’s a couple issues I think you have to address in your integration testing strategy before we go willy nilly spinning up an IHost:

  • You ideally want to test against your code running in a realistic way, so the way code is bootstrapped and configured should be relatively close to how that code is started up in the real application.
  • There will inevitably need to be at least some configuration that needs to be different in testing or some services — usually accessing resources external to your system — that need to be replaced with stubs or some other kind of fake implementation.
  • It’s important to cleanly dispose or shutdown any IHost object you create in memory to avoid potential locks of resources like database connections, files, or ports. Failing to clean up resources in tests can easily make it harder to iterate through test fixes if you find yourself needing to manually kill processes or restart your IDE to release locked resources (been there, done that).
  • The IHost can be expensive to build up, and sometimes there’s going to be some serious benefit in reusing the IHost between tests to make the test suite run faster.
  • But the IHost is stateful, and there could easily be resources (singleton scoped services, databases, and whatnot) that could impact later test runs in the suite.

Before I jump into solutions, let’s assume that I have two projects:

  1. WebApplication is an ASP.Net Core web service project. WebApplication uses Lamar as its underlying DI container.
  2. A test project that references WebApplication

xUnit.Net Mechanics

I’m more comfortable with xUnit.Net, so I’m going to use that first. My typical usage is to share the IHost through xUnit.Net’s CollectionFixture mechanism (and if you think the usage of this thing is confusing, welcome to the club). First up, I’ll build out a new class I usually call AppFixture to manage the lifecycle of the IHost. The example project I’ve built here is an ASP.Net Core web service project, so I’m going to use Alba to wrap the host inside of AppFixture as shown below:

    public class AppFixture : IDisposable, IAsyncLifetime
    {
        public IAlbaHost Host { get; private set; }
        public async Task InitializeAsync()
        {
            // Program.CreateHostBuilder() is the code from the WebApplication
            // that configures the HostBuilder for the system
            Host = await Program
                .CreateHostBuilder(Array.Empty<string>())
                
                // This extension method starts up the underlying IHost,
                // but Alba replaces Kestrel with a TestServer and
                // wraps the IHost
                .StartAlbaAsync();
        }

        public Task DisposeAsync()
        {
            return Host.StopAsync();
        }

        public void Dispose()
        {
            Host?.Dispose();
        }
    }

A couple things to note in that code above:

  • As we’ll set up next, that class above will be constructed once in memory by xUnit and shared between test fixture classes
  • The Dispose() and DisposeAsync() methods both dispose the IHost. By normal .Net mechanics, that will also dispose the underlying Lamar IoC container, which will in turn dispose any services created by Lamar at runtime that implement IDisposable. Disposing the IHost also stops any registered IHostedService services that your application may be using for long running tasks (for my colleagues who may be reading this, both NServiceBus and MassTransit start and stop their message listeners in an IHostedService, so that might be in use even if you don’t explicitly use that technique).

Next, we’ll set up AppFixture to be shared between our integration test classes by using the [CollectionDefinition] attribute on a marker class:

    [CollectionDefinition("Integration")]
    public class AppFixtureCollection : ICollectionFixture<AppFixture>
    {
        
    }

Lastly, I like to build out a base class for integration tests like this one:

    [Collection("Integration")]
    public abstract class IntegrationContext
    {
        protected IntegrationContext(AppFixture fixture)
        {
            theHost = fixture.Host;
            
            // I am using Lamar as the underlying DI container
            // and want some Lamar specific things later on
            // in the tests
            Container = (IContainer)fixture.Host.Services;
        }

        public IAlbaHost theHost { get; }
        
        public IContainer Container { get; }
    }

The [Collection] attribute is meaningful here because that makes xUnit.Net run all the tests that are contained in test fixture classes that inherit from IntegrationContext in a single thread so we don’t have to worry about concurrent test runs.*

And finally to bring this all together, let’s say that WebApplication has this simplistic web service code to do some arithmetic:

    public class Result
    {
        public int Sum { get; set; }
        public int Product { get; set; }
    }

    public class Numbers
    {
        public int[] Values { get; set; }
    }
    
    public class ArithmeticController : ControllerBase
    {
        [HttpPost("/math")]
        public Result DoMath([FromBody] Numbers input)
        {
            var product = 1;
            foreach (var value in input.Values)
            {
                product *= value;
            }

            return new Result
            {
                Sum = input.Values.Sum(),
                Product = product
            };
        }
    }

In the next code block, let’s finally see a test fixture class that uses the new IntegrationContext as a base class and tests the HTTP endpoint shown in the block above.

    public class ArithmeticApiTests : IntegrationContext
    {
        public ArithmeticApiTests(AppFixture fixture) : base(fixture)
        {
        }

        [Fact]
        public async Task happy_path_arithmetic()
        {
            // Building the input body
            var input = new Numbers
            {
                Values = new[] {2, 3, 4}
            };

            var response = await theHost.Scenario(x =>
            {
                // Alba deals with Json serialization for us
                x.Post.Json(input).ToUrl("/math");
                
                // Enforce that the HTTP Status Code is 200 Ok
                x.StatusCodeShouldBeOk();
            });

            var output = response.ReadAsJson<Result>();
            output.Sum.ShouldBe(9);
            output.Product.ShouldBe(24);
        }
    }

Alright, at this point we’ve got a way to share the system’s IHost in tests for better efficiency, and we’re making sure that all the resources in the IHost are cleaned up when the test suite is done. We’re using the WebApplication’s exact configuration for the IHost, but we still might need to alter that in testing. And there’s also the issue of needing to roll back state in our system between tests. I’ll pick up those subjects in my next couple posts, as well as using NUnit instead of xUnit.Net because that’s what the majority of code at my work uses for testing.

* It would be nice to be able to run parallel tests using our shared IHost, but that can often be problematic because of shared state, so I generally bypass test parallelization in integrated tests. The subject of parallelizing integration tests is worthy of a later blog post on some thoughts I haven’t quite elucidated yet.

A brain dump on automated integration testing

I’m strictly talking about automated testing in this post. I’m more or less leading an effort at work to improve our test automation and Test Driven Development practices, so I’ll be trying to blog quite a bit about related topics in the next couple months. After reviewing quite a bit of in flight code, I think I’ll try to revisit some of my old blog posts on testability design from the CodeBetter days and update those old lessons from the early days of TDD to what we’re building now.

My company builds and maintains several long running software systems with a healthy back log of feature requests, performance improvements, and stories to retire technical debt. All that is to say that we’re constantly adding to or improving existing code — which implies we’re always running some non-zero risk of creating regression defects. To keep everybody’s stress levels down, we’re taking incremental steps toward a true continuous delivery model where we can smoothly and consistently build and deploy fully tested features while being confident that we aren’t introducing regression defects.

As you’d likely guess, we’re very interested in improving our automated testing practices as a safety net to enable continuous delivery while also improving our quality in general. That leads to the next question, what kind of automated testing should we be doing? Followed by, is there automated testing we’re doing today that isn’t delivering enough bang for the buck?

To that point, let’s take a look at the classic idea of the testing pyramid, as shown below:

From Unit test: sociable or solitary

The thinking behind the testing pyramid is that there’s a certain, healthy mix of different sorts of automated tests that efficiently lead to better results. I say a “mix” here because unit tests, though relatively cheap compared to other tests, cannot detect many defects that only come out during integration between components, code modules, or systems. From a quick search, I found worlds of memes along the lines of this one:

No integration tests, but all the unit tests pass!

To address exactly what kind of automated tests we should be writing, here’s a stream of consciousness brain dump that I later gave a patina of organization:

On End to End User Interface Tests

Any kind of test that uses a tool like Selenium to do end to end, black-box testing is going to run slowly. These tests are also frequently unstable because of asynchronous timing issues in modern browser applications. There’s an unhealthy tendency in many shops that adopt Selenium as a test automation solution to use it as their golden hammer to the exclusion of other testing techniques that can be much more efficient in certain circumstances. To put it bluntly, it’s very difficult to successfully author and maintain test automation suites based on Selenium against complicated applications. In my experience in shops that have attempted large scale Selenium usage, I do not believe that the benefits of those tests have ever outweighed the costs.

I would still recommend using some small number of end to end tests with a tool like Selenium, but those tests should be focused on proving out integration mechanisms between a user interface and backing server side code. For example, I’ve been working on a new integration of Open Id Connect (OIDC) authentication into our web services and web applications. I’ve used Playwright to automate browser testing to prove out the interactions and redirects between the OIDC service and our applications or services.

I also find Cypress.io interesting, but more for doing integration testing of our Angular applications by themselves with a dummy backend. For true end to end testing of .Net-backed web applications, I think I’m interested in replacing Selenium from here on out with Playwright, as I think and hope it does much more for you to make automated tests performant and reliable compared to Selenium.

Driving a browser should not be used to automate functional testing of business logic or data services or any kind of data analysis that could possibly be tested without using the full browser.

If you insist on trying to do a lot of browser automation testing, you better invest in collecting diagnostic information in test runs that can be used by developers to debug test failures. Ideally, I like to have the application’s log output correlated to the test run somehow. I’ve worked with teams that were able to pipe the console.log() tracing from the JavaScript code running in the browser to the test results and that was extremely helpful. Taking screenshots as part of the test can certainly help. As another ideal, I very strongly recommend that any kind of browser automation tests be executable by developers on their local machine on demand for easier debugging. More on this later.

Again, if you absolutely have to write Selenium/Playwright/Cypress tests despite all of my warnings, I strongly recommend you write those tests in the same programming language as the real application. That statement is going to be controversial if any real test automation engineers stumble into this post, but I think it’s important to make it as easy as possible for developers to collaborate with test automation engineers. Moreover, I despise the kind of shadow data access layers you can get from test automation code doing their own thing to write to and read from the underlying data store of the system under test. I think it’s less likely to get that kind of insidious, hidden code duplication if the test code is written in the same programming language and even uses the system’s own data access code to set up or verify database state as part of the automated tests.

Choosing Solitary/Unit Tests or Sociable/Integration Tests

I missed out on this when it was first published, but I think I like the nomenclature of solitary vs sociable tests better than thinking about unit vs integration tests. I also encourage folks to think of that as a continuum rather than a hard categorization. Moreover, I recommend that you switch between solitary and sociable tests even within the same test library where one or the other is more effective.

I would recommend organizing tests by functional area first, and only consider separating out integration tests into a separate testing project when it’s advantageous to use a “fast test, slow test” division for more efficient development.

We have formal requirements for test coverage metrics in our continuous integration builds, so I’d definitely make sure that any integration tests count toward that coverage number.

I think an emphasis on always writing classical unit tests can easily create a strong coupling between the production code and your testing code. That can and will reduce your ability to evolve your code, add new functionality, or do performance optimizations without rewriting your tests.

In many cases, integration tests that start at a natural sub-system facade or a logical controller/conductor entry point will do much better for you as a regression safety net to allow you to refactor your code to allow for new behavior or do important performance optimizations.

Case in point, I relied strictly on fine grained unit tests in my early work in StructureMap, and I definitely felt the negative consequences of that approach (I gave a talk about it in 2008 that’s still relevant). With later releases of StructureMap and now with Lamar, I lean much more heavily on integrated acceptance tests (let’s go ahead and call it Behavior Driven Development) that test from the entry point of the library down and focus on user-centric scenarios. I feel like that testing approach has led to much better results — both in the ease of adding new features, detecting regression defects in automated builds, and allowing me to evolve the functionality of the library.

On the other hand, integration tests can be harder to troubleshoot when they fail because you have more ground to cover. They also run slower of course. If you find that your feedback cycles feel too slow to efficiently run the tests continuously or especially if you find yourself doing long, marathon sessions in your debugger, stop and consider introducing more fine-grained tests first.

Running the Tests Locally vs Remotely

To the previous point, I think it’s critical that developers should be able to easily spin up and run automated integration tests on demand on their local development boxes. Tests will fail, and being able to easily troubleshoot a failing test is a prerequisite for successful test automation. If you can run an integration test locally, you’re much more likely to be able to iterate and try potential fixes quickly. There’s also the very real possibility of attaching a debugger to the testing process.

I think that our current technology set makes it much easier to do integration testing than it was when I was first getting started and the strict Michael Feathers definition of a unit test was in vogue. Just speaking from my own experience, the current .Net 5 generation is very easy to spin up and down in process for automated testing. Docker has been a great way to stand up development environments using Sql Server, Postgresql, Rabbit MQ, and other infrastructural tools.

As a follow up to the previous section, it’s also advantageous for automated tests to be able to run in process with the test harness code. For instance, I’d much rather do Alba testing of HTTP endpoints in .Net 5, where I’m able to quickly spin up an actual web service in memory and shut it down from my testing project, as opposed to the old, full .Net framework where you’d have to run the web service project in IIS or IISExpress first, then use HttpClient to address the service from your unit tests. The first, .Net 5/Alba approach is a much faster iteration and feedback cycle to support a Test Driven Development workflow than the second approach.

Likewise, when given a choice between tests that can be run locally versus tests that can only be executed in a remote server location, give me the local tests every time. When and if you hit a scenario where you really need to run tests remotely *cough* serverless *cough*, that’s the one and only exception I can think of to my “never deploy from your local development box” rule. If depending on remote execution of tests, I’d at least want the ability to send my local development branch to the remote server at will. Just having a CI server build out pull request branches might get you there of course, but then you might be dependent upon being able to run that test suite in parallel with other CI builds. That’s not a show stopper, but it might make you have to invest more in your build automation to spin up isolated environments on demand.

Apollo Testing!

There’s plenty of debate over what the actual ratio of UI tests to integration tests to unit tests should be, and plenty of folks have different metaphors than a “pyramid” to describe what they think the ratio should be. I happen to like the integration test heavy ratio described in The Testing Trophy and Testing Classifications with the graphic below:

From The Testing Trophy and Testing Classifications

I think the image of the “testing trophy” looks a lot like the command module from the Apollo missions to the moon in the ’70s:

The Apollo Command Module

So from now on, I’m calling our intended testing approach the Apollo Testing Method!

What is the purpose of testing?

I’m just barely old enough that I started my official software development career in old fashioned waterfall models. In those days we did some unit testing with ad hoc testing tools to troubleshoot new code, but it wasn’t anywhere close to what developers do today with Test Driven Development and xUnit tools. As developers, we mostly ran the complete application on our development boxes and stepped through things manually to check out new code locally before throwing things over the wall to QA at the end of the project.

Regardless of whatever ad hoc, local testing developers did of their code locally, the only official testing that actually counted was the purely manual testing done by our testers in the testing environment that was supposed to exactly mimic the production environment (it never quite did, but that’s a story for another day). The QA team strictly used black-box testing with some direct access to the underlying database.

That old black-box testing approach at the end of the project was a much slower feedback cycle than we’re accustomed to today after the advent of Agile Software Development. The killer problem was that the testing feedback cycles were too slow to consider evolutionary design approaches because of the real fear of regression failures. It was also harder as a developer to address defects found in the testing cycle because you were frequently needing to work with code you hadn’t touched in many months that certainly wasn’t fresh in your mind. That frequently led to marathon debugging sessions while a helpful project manager came by your cubicle to cheerfully ask for any updates several times a day and crank up the pressure. As a developer you were also completely at the mercy of QA for information about what was really happening in the system when they found bugs.

In my mind, the most important element of Agile software development overall was the emphasis on improving feedback cycles. Faster feedback allowed teams to find problems earlier, when they were much cheaper to fix, and made it feasible to let a system’s design evolve over time instead of trying to get everything right up front.

Testing is not about proving that our code works perfectly so much as a way to find and remove enough problems from the code that it can be deployed to production. I think this is an important approach because it allows us to use faster, finer grained testing approaches like isolated unit testing or intermediate level white-box integration tests that are generally faster running and cheaper to build than classic black-box, end to end tests.

What’s Next?

My organization has started an effort to introduce much more integration testing into our development processes as a way of improving quality and throughput. To help out on that, I’m going to attempt to write a series of blog posts going into specific areas about tools and techniques, but for right now I’m just jotting down this stream of consciousness brain dump to get started.

Based on what I think we need to establish at work, I’m thinking to cover:

  • .Net IHost bootstrapping and lifecycle within xUnit.Net or NUnit. I’m much more familiar with xUnit.Net from recent development, but we mostly use NUnit at work so I’ll be trying to cover that base as well.
  • HTTP API testing, which will inevitably feature Alba
  • Dealing with databases in tests, and that’s gonna have to cover both RDBMS databases and probably Mongo Db for now
  • Message handler testing with MassTransit and NServiceBus (we use both in different products)
  • Another brain dump on doing end to end testing after a meeting today about the CI of one of our big systems.
  • How should automated testing be integrated into the development cycle, and who should be responsible for these tests, and why is the obvious answer a very close collaboration between testers, developers, and even business experts
  • This will be a bigger stretch for me, but maybe get into how to do some semi-integration testing of an Angular front end with NgRX.
  • After talking through some of our issues with test automation at work, I think I’d like to blog about some of the positive things we did with Storyteller. I’ve been increasingly frustrated with xUnit.Net (and don’t think NUnit would be much better) for integration testing, so I’ve got quite a few notes I wouldn’t mind publishing about what an alternative tool optimized for integration tests could look like.

Alba v5.0 is out! Easy recipes for integration testing ASP.Net web services

Alba is a small library to facilitate easier integration testing of web services built on the ASP.Net Core platform. It’s more or less a wrapper around the ASP.Net TestServer, but does a lot to make testing much easier than writing low level code against TestServer. You would use Alba from within xUnit.Net or NUnit testing projects.

I outlined a proposal for Alba V5 a couple weeks back, and today I was able to publish the new Nuget. The big changes are:

  • The Alba documentation website was rebuilt with Vitepress and refreshed to reflect V5 changes
  • The old SystemUnderTest type was renamed AlbaHost to make the terminology more in line with ASP.Net Core
  • The Json serialization is done through the configured input and output formatters within your ASP.Net Core application — meaning that Alba works just fine regardless of whether your application uses Newtonsoft.Json or System.Text.Json for its Json serialization.
  • There is a new extension model that was added specifically to support testing web services that are authenticated with JWT bearer tokens.

For security, you can opt into:

  1. Stub out all of the authentication and use hard-coded claims
  2. Add a valid JWT to every request with user-defined claims
  3. Let Alba deal with live OIDC integration by handling the JWT integration with your OIDC identity server

Testing web services secured by JWT tokens with Alba v5

We’re working toward standing up a new OIDC infrastructure built around Identity Server 5, with a couple gigantic legacy monolith applications and potentially dozens of newer microservices needing to use this new identity server for authentication. We’ll have to have a good story for running our big web applications that will have this new identity server dependency at development time, but for right now I just want to focus on an automated testing strategy for our newer ASP.Net Core web services using the Alba library.

First off, Alba is a helper library for integration testing HTTP API endpoints in .Net Core systems. Alba wraps the ASP.Net Core TestServer while providing quite a bit of convenient helpers for setting up and verifying HTTP calls against your ASP.Net Core services. We will be shortly introducing Alba into my organization at MedeAnalytics as a way of doing much more integration testing at the API layer (think the middle layer of any kind of testing pyramid concept).

In my previous post I laid out some plans and proposals for a quickly forthcoming Alba v5 release, with the biggest improvement being a new model for being able to stub out OIDC authentication for APIs that are secured by JWT bearer tokens (I think I win programming bingo for that sentence!).

Before I show code, I should say that all of this code is in the v5 branch of Alba on GitHub, but not yet released as it’s very heavily in flight.

To start, I’m assuming that you have a web service project, then a testing library for that web service project. In your web application, bearer token authentication is set up something like this inside your Startup.ConfigureServices() method:

services.AddAuthentication("Bearer")
    .AddJwtBearer("Bearer", options =>
    {
        // A real application would pull all this information from configuration
        // of course, but I'm hardcoding it in testing
        options.Audience = "jwtsample";
        options.ClaimsIssuer = "myapp";
        
        // don't worry about this, our JwtSecurityStub is gonna switch it off in
        // tests
        options.Authority = "https://localhost:5001";

        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateAudience = false,
            IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes("some really big key that should work"))
        };
    });

And of course, you also have these lines of code in your Startup.Configure() method to add in ASP.Net Core middleware for authentication and authorization:

app.UseAuthentication();
app.UseAuthorization();

With these lines of setup code, you will not be able to hit any secured HTTP endpoint in your web service unless there is a valid JWT token in the Authorization header of the incoming HTTP request. Moreover, with this configuration your service would need to make calls to the configured bearer token authority (https://localhost:5001 above). It’s going to be awkward and probably very brittle to depend on having the identity server spun up and running locally when our developers try to run API tests. It would obviously be helpful if there was a quick way to stub out the bearer token authentication in testing to automatically supply known claims so our developers can focus on developing their individual service’s functionality.

That’s where Alba v5 comes in with its new JwtSecurityStub extension that will:

  1. Disable any validation interactions with an external OIDC authority
  2. Automatically add a valid JWT token to any request being sent through Alba
  3. Give developers fine-grained control over the claims attached to any specific request if there is logic that will vary by claim values

To demonstrate this new Alba functionality, let’s assume that you have a testing project that has a direct reference to the web service project. The direct project reference is important because you’ll want to spin up the “system under test” in a test fixture like this:

    public class web_api_authentication : IDisposable
    {
        private readonly IAlbaHost theHost;

        public web_api_authentication()
        {
            // This is calling your real web service's configuration
            var hostBuilder = Program.CreateHostBuilder(new string[0]);

            // This is a new Alba v5 extension that can "stub" out
            // JWT token authentication
            var jwtSecurityStub = new JwtSecurityStub()
                .With("foo", "bar")
                .With(JwtRegisteredClaimNames.Email, "guy@company.com");

            // AlbaHost was "SystemUnderTest" in previous versions of
            // Alba
            theHost = new AlbaHost(hostBuilder, jwtSecurityStub);
        }

        public void Dispose()
        {
            theHost?.Dispose();
        }
    }

I was using xUnit.Net in this sample, but Alba is agnostic about the actual testing library and we’ll use both NUnit and xUnit.Net at work.

In the code above I’ve bootstrapped the web service with Alba and attached the JwtSecurityStub. I’ve also established some baseline claims that will be added to every JWT token on all Alba scenario requests. The AlbaHost extends the IHost interface you’re already used to in .Net Core, but adds the important Scenario() method that you can use to run HTTP requests all the way through your entire application stack like this test:

        [Fact]
        public async Task post_to_a_secured_endpoint_with_jwt_from_extension()
        {
            // Building the input body
            var input = new Numbers
            {
                Values = new[] {2, 3, 4}
            };

            var response = await theHost.Scenario(x =>
            {
                // Alba deals with Json serialization for us
                x.Post.Json(input).ToUrl("/math");
                
                // Enforce that the HTTP Status Code is 200 Ok
                x.StatusCodeShouldBeOk();
            });

            var output = response.ResponseBody.ReadAsJson<Result>();
            output.Sum.ShouldBe(9);
            output.Product.ShouldBe(24);
        }

You’ll notice that I did absolutely nothing in regards to JWT set up or claims or anything. That’s because the JwtSecurityStub is taking care of everything for you. It’s:

  1. Reaching into your application’s bootstrapping to pluck out the right signing key so that it builds JWT token strings that can be validated with the right signature
  2. Turning off any external token validation using an external OIDC authority
  3. Placing a unique, unexpired JWT token on each request that matches the issuer and authority configuration of your application

Now, to further control the claims used on any individual scenario request, you can use this new method in Scenario tests:

        [Fact]
        public async Task can_modify_claims_per_scenario()
        {
            var input = new Numbers
            {
                Values = new[] {2, 3, 4}
            };

            var response = await theHost.Scenario(x =>
            {
                // This is a custom claim that would only be used for the 
                // JWT token in this individual test
                x.WithClaim(new Claim("color", "green"));
                x.Post.Json(input).ToUrl("/math");
                x.StatusCodeShouldBeOk();
            });

            var principal = response.Context.User;
            principal.ShouldNotBeNull();
            
            principal.Claims.Single(x => x.Type == "color")
                .Value.ShouldBe("green");
        }

I’ve got plenty more ground to cover on how we’ll develop locally with our new identity server strategy, but I’m feeling pretty good about having a decent API testing strategy. All of this code is just barely written, so any feedback you might have would be very timely. Thanks for reading about some brand new code!

Introducing Jasper as an In Process Command Bus for .Net

A couple weeks ago I wrote a blog post called If you want your OSS project to be successful… about trying to succeed with open source development efforts. One of the things I said was “don’t go dark” when you’re working on an OSS project. Not only did I go “dark” on Jasper for quite a while, I finally rolled out its 1.0 release during the worst global pandemic in a century. So all told, Jasper is by no means an exemplary model for anyone who’s trying to succeed with an OSS project.

This sample application is also explained and demonstrated in the documentation page Jasper as a Mediator.

Jasper is a new open source tool that can be used as an in process “command bus” inside of .Net Core 3 applications. Used locally, Jasper can provide a superset of the “mediator” functionality popularized by MediatR that many folks like using within ASP.Net MVC Core applications to simplify controller code by offloading most of the processing to separate command handlers. Jasper certainly supports that functionality, but also adds rich options for asynchronously processing commands with built in resiliency mechanisms.

Part of the reason why Jasper went cold was waiting for .Net Core 3.0 to be released. With the advent of .Net Core 3.0, Jasper was somewhat re-wired to support the new generic HostBuilder for bootstrapping and configuration. With this model of bootstrapping, Jasper can easily be integrated into any kind of .Net Core application (MVC Core application, web api, windows service, console app, “worker” app) that uses the HostBuilder.

Let’s jump into seeing how Jasper could be integrated into a .Net Core Web API system. All the sample code I’m showing here is on GitHub in the “InMemoryMediator” project. InMemoryMediator uses EF Core with Sql Server as its backing persistence. Additionally, this sample shows off Jasper’s support for the “Outbox” pattern for reliable messaging without having to resort to distributed transactions.

To get started, I generated a project with the dotnet new webapi template. From there, I added some extra Nuget dependencies:

  1. Microsoft.EntityFrameworkCore.SqlServer — because we’re going to use EF Core with Sql Server as the backing persistence for this service
  2. Jasper — this is the core library, and all that you would need to use Jasper as an in process command bus
  3. Jasper.Persistence.EntityFrameworkCore — extension library to add Jasper’s “Outbox” and transactional support to EF Core
  4. Jasper.Persistence.SqlServer — extension library to add persistence for the “Outbox” support
  5. Swashbuckle.AspNetCore — just to add Swagger support

Your First Jasper Handler

Before we get into bootstrapping, let’s just start with how to build a Jasper command handler and how that would integrate with an MVC Core Controller. Keeping to a very simple problem domain, let’s say that we’re capturing, creating, and tracking new Item entities like this:

public class Item
{
    public string Name { get; set; }
    public Guid Id { get; set; }
}

So let’s build a simple Jasper command handler that would process a CreateItemCommand message, persist a new Item entity, and then raise an ItemCreated event message that would be handled by Jasper as well, but asynchronously somewhere off to the side on a different thread. Lastly, we want things to be reliable, so we’re going to introduce Jasper’s integration of Entity Framework Core for “Outbox” support for the event messages being raised at the same time we create new Item entities.
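The message types themselves aren’t shown in this post, but judging from how they’re used below, they’re just simple DTOs along these lines:

// The incoming command to create a new Item
public class CreateItemCommand
{
    public string Name { get; set; }
}

// The "event" message raised after a new Item is created
public class ItemCreated
{
    public Guid Id { get; set; }
}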

First though, to put things in context, we’re trying to get to the point where our controller classes mostly just delegate to Jasper through its ICommandBus interface and look like this:

public class UseJasperAsMediatorController : ControllerBase
{
    private readonly ICommandBus _bus;

    public UseJasperAsMediatorController(ICommandBus bus)
    {
        _bus = bus;
    }

    [HttpPost("/items/create")]
    public Task Create([FromBody] CreateItemCommand command)
    {
        // Using Jasper as a Mediator
        return _bus.Invoke(command);
    }
}

You can find a lot more information about what Jasper can do as a local command bus in the project documentation.
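As an aside, Invoke() executes the command inline, but the same ICommandBus can also queue a command up for background processing. Here’s a rough sketch using Jasper’s Enqueue() API (the controller and route here are hypothetical):

public class EnqueueItemController : ControllerBase
{
    private readonly ICommandBus _bus;

    public EnqueueItemController(ICommandBus bus)
    {
        _bus = bus;
    }

    [HttpPost("/items/enqueue")]
    public Task CreateLater([FromBody] CreateItemCommand command)
    {
        // Unlike Invoke(), Enqueue() pushes the command onto
        // Jasper's local, in-process queues and returns as soon as
        // the message is accepted rather than waiting on the handler
        return _bus.Enqueue(command);
    }
}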

When using Jasper as a mediator, the controller methods become strictly about the mechanics of reading and writing data to and from the HTTP protocol. The real functionality is now in the Jasper command handler for the CreateItemCommand message, as coded with this Jasper Handler class:

public class ItemHandler
{
    // This attribute applies Jasper's transactional
    // middleware
    [Transactional]
    public static ItemCreated Handle(
        // This would be the message
        CreateItemCommand command,

        // Any other arguments are assumed
        // to be service dependencies
        ItemsDbContext db)
    {
        // Create a new Item entity
        var item = new Item
        {
            Name = command.Name
        };

        // Add the item to the current
        // DbContext unit of work
        db.Items.Add(item);

        // This event being returned
        // by the handler will be automatically sent
        // out as a "cascading" message
        return new ItemCreated
        {
            Id = item.Id
        };
    }
}

You’ll probably notice that there’s no interface or mandatory base class usage in the code up above. Similar to MVC Core, Jasper will auto-discover the handler classes and message handling methods from your code through type scanning. Unlike MVC Core and every other service bus kind of tool in .Net I’m aware of, Jasper depends only on naming conventions rather than base classes or interfaces.
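As a minimal sketch of that convention (Ping here is a hypothetical message type), a handler just needs to follow Jasper’s naming rules:

// Jasper discovers this class purely through naming conventions:
// a public class whose name ends in "Handler" (or "Consumer"),
// with a public method named Handle() or Consume() that takes
// the message type as its first argument. No interfaces, no
// base classes, no attributes required.
public class PingHandler
{
    public void Handle(Ping ping)
    {
        Console.WriteLine("Got a ping!");
    }
}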

The only bit of framework “stuff” at all in the code above is the [Transactional] attribute that decorates the Handle() method. That adds Jasper’s own middleware for transaction and outbox support around the handling of just that message type. At runtime, when Jasper handles the CreateItemCommand in that handler code up above, it:

  • Sets up an “outbox” transaction with the EF Core ItemsDbContext service being passed into the Handle() method as a parameter
  • Takes the ItemCreated message that “cascades” from the handler method and persists that message with ItemsDbContext so that both the outgoing message and the new Item entity are persisted in the same Sql Server transaction
  • Commits the EF Core unit of work by calling ItemsDbContext.SaveChangesAsync()
  • Assuming that the transaction succeeds, kicks the new ItemCreated message into its internal sending loop to speed it on its way. That outgoing event message could be handled locally through in-memory queues or sent out via external transports like Rabbit MQ or Azure Service Bus

If you’re interested in what the code above would look like without any of Jasper’s middleware or cascading message conventions, see the section near the bottom of this post called “Do it All Explicitly Controller”.

So that’s the MVC Controller and Jasper command handler, now let’s move on to integrating Jasper into the application.

Bootstrapping and Configuration

This is just an ASP.Net Core application, so you’ll probably be familiar with the generated Program.Main() entry point. To completely utilize Jasper’s extended command line support (really Oakton.AspNetCore), I’ll make some small edits to the out of the box generated file:

public class Program
{
    // Change the return type to Task to communicate
    // success/failure codes
    public static Task Main(string[] args)
    {
        return CreateHostBuilder(args)

            // This replaces Build().Start() from the default
            // dotnet new templates
            .RunJasper(args);
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)

            // You can do the Jasper configuration inline with a 
            // Lambda, but here I've centralized the Jasper
            // configuration into a separate class
            .UseJasper<JasperConfig>()

            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}

This isn’t mandatory, but there’s just enough Jasper configuration for this project with the outbox support that I opted to put the Jasper configuration in a new file called JasperConfig that inherits from JasperOptions:

public class JasperConfig : JasperOptions
{
    public override void Configure(IHostEnvironment hosting, IConfiguration config)
    {
        if (hosting.IsDevelopment())
        {
            // In development mode, we're just going to have the message persistence
            // schema objects dropped and rebuilt on app startup so you're
            // always starting from a clean slate
            Advanced.StorageProvisioning = StorageProvisioning.Rebuild;
        }

        // Just the normal work to get the connection string out of
        // application configuration
        var connectionString = config.GetConnectionString("sqlserver");

        // Setting up Sql Server-backed message persistence
        // This requires a reference to Jasper.Persistence.SqlServer
        Extensions.PersistMessagesWithSqlServer(connectionString);

        // Set up Entity Framework Core as the support
        // for Jasper's transactional middleware
        Extensions.UseEntityFrameworkCorePersistence();

        // Register the EF Core DbContext
        // You can register IoC services in this file in addition
        // to any kind of Startup.ConfigureServices() method,
        // but you probably only want to do it in one place or the 
        // other and not both.
        Services.AddDbContext<ItemsDbContext>(
            x => x.UseSqlServer(connectionString),

            // This is important! Using Singleton scoping
            // of the options allows Jasper + Lamar to significantly
            // optimize the runtime pipeline of the handlers that
            // use this DbContext type
            optionsLifetime:ServiceLifetime.Singleton);
    }
}

Returning a Response to the HTTP Request

In the UseJasperAsMediatorController controller, we just passed the command into Jasper and let MVC return an HTTP status code 200 with no other context. If instead, we wanted to send down the ItemCreated message as a response to the HTTP caller, we could change the controller code to this:

public class WithResponseController : ControllerBase
{
    private readonly ICommandBus _bus;

    public WithResponseController(ICommandBus bus)
    {
        _bus = bus;
    }

    [HttpPost("/items/create2")]
    public Task<ItemCreated> Create([FromBody] CreateItemCommand command)
    {
        // Using Jasper as a Mediator, and receive the
        // expected response from Jasper
        return _bus.Invoke<ItemCreated>(command);
    }
}

“Do it All Explicitly Controller”

Just for a comparison, here’s the CreateItemCommand workflow implemented inline in a controller action with explicit code to handle the Jasper “Outbox” support:

// This controller does all the transactional work and business
// logic all by itself
public class DoItAllMyselfItemController : ControllerBase
{
    private readonly IMessageContext _messaging;
    private readonly ItemsDbContext _db;

    public DoItAllMyselfItemController(IMessageContext messaging, ItemsDbContext db)
    {
        _messaging = messaging;
        _db = db;
    }

    [HttpPost("/items/create3")]
    public async Task Create([FromBody] CreateItemCommand command)
    {
        // Start the "Outbox" transaction
        await _messaging.EnlistInTransaction(_db);

        // Create a new Item entity
        var item = new Item
        {
            Name = command.Name
        };

        // Add the item to the current
        // DbContext unit of work
        _db.Items.Add(item);

        // Publish an event to anyone
        // who cares that a new Item has
        // been created
        var @event = new ItemCreated
        {
            Id = item.Id
        };

        // Because the message context is enlisted in an
        // "outbox" transaction, these outgoing messages are
        // held until the ongoing transaction completes
        await _messaging.Send(@event);

        // Commit the unit of work. This will persist
        // both the Item entity we created above, and
        // also a Jasper Envelope for the outgoing
        // ItemCreated message
        await _db.SaveChangesAsync();

        // After the DbContext transaction succeeds, kick out
        // the persisted messages in the context "outbox"
        await _messaging.SendAllQueuedOutgoingMessages();
    }
}

As a huge lesson learned from Jasper’s predecessor project, it’s always possible to easily bypass any kind of Jasper conventional “magic” and write explicit code as necessary.

There’s a lot more to say about Jasper and you can find a *lot* more information on its documentation website. I’ll be back sometime soon with more examples of Jasper, with probably some focus on functionality that goes beyond other mediator tools.

In the next post, I’ll talk about Jasper’s runtime execution pipeline and how it’s very different than other .Net tools with similar functionality (hint, it involves a boatload less generics magic than anything else).

 

Environment Checks and Better Command Line Abilities for your .Net Core Application

EDIT Oct. 12th, 2021: Oakton.AspNetCore was folded into Oakton proper as part of the 3.0 release.

Oakton.AspNetCore is a new package built on top of the Oakton 2.0+ command line parser that adds extra functionality to the command line execution of ASP.Net Core and .Net Core 3.0 codebases. At the bottom of this blog post is a small section showing you how to set up Oakton.AspNetCore to run commands in your .Net Core application.

First though, you need to understand that when you use the dotnet run command to build and execute your ASP.Net Core application, you can pass arguments and flags both to dotnet run itself and to your application through the string[] args argument of Program.Main(). These two types of arguments or flags are separated by a double dash, like this example: dotnet run --framework netcoreapp2.0 -- ?. In this case, “--framework netcoreapp2.0” is used by dotnet run itself, and the values to the right of the “--” are passed into your application as the args array.

With that out of the way, let’s see what Oakton.AspNetCore brings to the table.

Extended “Run” Options

In the default ASP.Net Core templates, your application can be started with all its defaults by using dotnet run.  Oakton.AspNetCore retains that usage, but adds some new abilities with its “Run” command. To check the syntax options, type dotnet run -- ? run:

 Usages for 'run' (Runs the configured AspNetCore application)
  run [-c, --check] [-e, --environment <environment>] [-v, --verbose] [-l, --log-level <loglevel>] [--config:<prop> <value>]

  ---------------------------------------------------------------------------------------------------------------------------------------
    Flags
  ---------------------------------------------------------------------------------------------------------------------------------------
                        [-c, --check] -> Run the environment checks before starting the host
    [-e, --environment <environment>] -> Use to override the ASP.Net Environment name
                      [-v, --verbose] -> Write out much more information at startup and enables console logging
         [-l, --log-level <loglevel>] -> Override the log level
            [--config:<prop> <value>] -> Overwrite individual configuration items
  ---------------------------------------------------------------------------------------------------------------------------------------

To run your application under a different hosting environment name value, use a flag like so:

dotnet run -- --environment Testing

or

dotnet run -- -e Testing

To overwrite configuration key/value pairs, you’ve also got this option:

dotnet run -- --config:key1 value1 --config:key2 value2

which will overwrite the configuration keys for “key1” and “key2” to “value1” and “value2” respectively.
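Inside the application, those overrides just flow through the normal IConfiguration model, so nothing special is needed to consume them. A minimal sketch (key1 here is only the hypothetical key from the command line above):

public class SomeService
{
    private readonly string _value;

    public SomeService(IConfiguration configuration)
    {
        // Because "key1" was overwritten at the command line with
        // dotnet run -- --config:key1 value1, this returns "value1"
        // regardless of what appsettings.json says
        _value = configuration["key1"];
    }
}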

Lastly, you can have any configured environment checks for your application run immediately before starting the application by using this flag:

dotnet run -- --check

More on this functionality in the next section.

Environment Checks

I’m a huge fan of building environment tests directly into your application. Environment tests allow your application to self-diagnose issues with deployment, configuration, or environmental dependencies upfront that would impact its ability to run.

As a very real world example, let’s say your ASP.Net Core application needs to access another web service that’s managed independently by other teams and maybe, just maybe your testers have occasionally tried to test your application when:

  • Your application configuration has the wrong Url for the other web service
  • The other web service isn’t running at all
  • There’s some kind of authentication issue between your application and the other web service

In the real world project that spawned the example above, we added a formal environment check that would try to touch the health check endpoint of the external web service and throw an exception if we couldn’t connect to the external system. The next step was to execute our application as it was configured and deployed with this environment check as part of our Continuous Deployment pipeline. If the environment check failed, the deployment itself failed and triggered off the normal set of failure alerts letting us know to go fix the environment rather than letting our testers waste time on a bad deployment.

With all that said, let’s look at what Oakton.AspNetCore does here to help you add environment checks. Let’s say your application uses a single Sql Server database, and the connection string should be configured in the “connectionString” key of your application’s configuration. You would probably want an environment check just to verify, at a minimum, that you can successfully connect to your database as it’s configured.

In your ASP.Net Core Startup class, you could add a new service registration for an environment check like this example:

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    // Other registrations we don't care about...
    
    // This extension method is in Oakton.AspNetCore
    services.CheckEnvironment<IConfiguration>("Can connect to the application database", config =>
    {
        var connectionString = config["connectionString"];
        using (var conn = new SqlConnection(connectionString))
        {
            // Just attempt to open the connection. If there's anything
            // wrong here, it's going to throw an exception
            conn.Open();
        }
    });
}
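Similarly, a hypothetical version of the external web service check from the story earlier could use the same CheckEnvironment() helper (the “otherServiceUrl” configuration key and the “/health” route here are made up for the example):

services.CheckEnvironment<IConfiguration>("Can reach the external web service", config =>
{
    using (var client = new HttpClient())
    {
        var response = client
            .GetAsync(config["otherServiceUrl"] + "/health")
            .GetAwaiter().GetResult();

        // Blow up right here if the health endpoint
        // doesn't return a success status code
        response.EnsureSuccessStatusCode();
    }
});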

Now, during deployments or even just pulling down the code to run locally, we can run the environment checks on our application like so:

dotnet run -- check-env

Which in the case of our application above, blows up with output like this because I didn’t add configuration for the database in the first place:

Running Environment Checks
   1.) Failed: Can connect to the application database
System.InvalidOperationException: The ConnectionString property has not been initialized.
   at System.Data.SqlClient.SqlConnection.PermissionDemand()
   at System.Data.SqlClient.SqlConnectionFactory.PermissionDemand(DbConnection outerConnection)
   at System.Data.ProviderBase.DbConnectionInternal.TryOpenConnectionInternal(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
   at System.Data.ProviderBase.DbConnectionClosed.TryOpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
   at System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry)
   at System.Data.SqlClient.SqlConnection.Open()
   at MvcApp.Startup.<>c.<ConfigureServices>b__4_0(IConfiguration config) in /Users/jeremydmiller/code/oakton/src/MvcApp/Startup.cs:line 41
   at Oakton.AspNetCore.Environment.EnvironmentCheckExtensions.<>c__DisplayClass2_0`1.<CheckEnvironment>b__0(IServiceProvider s, CancellationToken c) in /Users/jeremydmiller/code/oakton/src/Oakton.AspNetCore/Environment/EnvironmentCheckExtensions.cs:line 53
   at Oakton.AspNetCore.Environment.LambdaCheck.Assert(IServiceProvider services, CancellationToken cancellation) in /Users/jeremydmiller/code/oakton/src/Oakton.AspNetCore/Environment/LambdaCheck.cs:line 19
   at Oakton.AspNetCore.Environment.EnvironmentChecker.ExecuteAllEnvironmentChecks(IServiceProvider services, CancellationToken token) in /Users/jeremydmiller/code/oakton/src/Oakton.AspNetCore/Environment/EnvironmentChecker.cs:line 31

If you ran this command during continuous deployment scripts, the command should cause your build to fail when it detects environment problems.

In some of Calavista’s current projects, we’ve been adding environment tests to our applications for items like:

  • Can our application read certain configured directories?
  • Can our application as it’s configured connect to databases?
  • Can our application reach other web services?
  • Are required configuration items specified? That’s been an issue as we’ve had to build out Continuous Deployment pipelines to many, many different server environments

I don’t see the idea of “Environment Tests” mentioned very often, and it might have other names I’m not aware of. I learned about the idea back in the Extreme Programming days from a blog post from Nat Pryce that I can’t find any longer, but there’s this paper from those days too.

Add Other Commands

I’ve frequently worked in projects where we’ve built parallel console applications that reproduce a lot of the same IoC and configuration setup to perform administrative tasks or add other diagnostics. It could be things like adding users, rebuilding an event store projection, executing database migrations, or loading some kind of data into the application’s database. What if instead, you could just add these directly to your .Net Core application as additional dotnet run -- [command] options? Fortunately, Oakton.AspNetCore lets you do exactly that, and even allows you to package up reusable commands in other assemblies that could be distributed by Nuget.

If you use Lamar as your IoC container in an ASP.Net Core application (or .Net Core 3.0 console app using the new unified HostBuilder), we now have an add on Nuget called Lamar.Diagnostics that will add new Oakton commands to your application that give you access to Lamar’s diagnostic tools from the command line. As an example, this library adds a command to write out the “WhatDoIHave()” report for the underlying Lamar IoC container of your application to the command line or a file like this:

dotnet run -- lamar-services

Now, using the command above as an example, to build or add your own commands, start by decorating the assembly containing the command classes with this attribute:

[assembly:OaktonCommandAssembly]

Having this attribute tells Oakton.AspNetCore to search the assembly for additional Oakton commands. There is no other setup necessary.

If your command needs to use the application’s services or configuration, have the Oakton input type inherit from the NetCoreInput type in Oakton.AspNetCore like so:

public class LamarServicesInput : NetCoreInput
{
    // Lots of other flags
}

Next, the new command for “lamar-services” is just this:

[Description("List all the registered Lamar services", Name = "lamar-services")]
public class LamarServicesCommand : OaktonCommand<LamarServicesInput>
{
    public override bool Execute(LamarServicesInput input)
    {
        // BuildHost() will return an IHost for your application
        // if you're using .Net Core 3.0, or IWebHost for
        // ASP.Net Core 2.*
        using (var host = input.BuildHost())
        {
            // The actual execution using host.Services
            // to get at the underlying Lamar Container
        }

        return true;
    }
}

Getting Started

In both cases below I’m assuming that you’ve bootstrapped your application with one of the standard project templates like dotnet new webapi or dotnet new mvc. First, add a reference to the Oakton.AspNetCore Nuget. Next, break into the Program.Main() entry point method in your project and modify it like the following samples.

If you’re absolutely cutting edge and using ASP.Net Core 3.0:

public class Program
{
    public static Task<int> Main(string[] args)
    {
        return CreateHostBuilder(args)
            
            // This extension method replaces the calls to
            // IWebHost.Build() and Start()
            .RunOaktonCommands(args);
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(x => x.UseStartup<Startup>());
    
}

For what I would guess is most folks, here’s the ASP.Net Core 2.* setup (and this would work for ASP.Net Core 3.0 as well):

public class Program
{
    public static Task<int> Main(string[] args)
    {
        return CreateWebHostBuilder(args)
            
            // This extension method replaces the calls to
            // IWebHost.Build() and Start()
            .RunOaktonCommands(args);
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>();
    
}

The two changes from the template defaults are to:

  1. Change the return value to Task<int>
  2. Replace the calls to Build() and Start() with the RunOaktonCommands(args) extension method that hangs off IWebHostBuilder, or the new unified IHostBuilder if you’re targeting netcoreapp3.0.

And that’s it, you’re off to the races.