Using Postgresql Advisory Locks for Leader Election

If you’re running an application with a substantial workload, or just want some kind of high availability, you’re probably running that application across multiple servers (hereafter called “nodes” because who knows where they’re physically running these days). That’s great and all, but it’s not too uncommon that you’ll need to make some kind of process run on only one of those nodes at any one time.

As an example, the Marten event store functionality has a feature to support asynchronous projection building called the “async daemon” (because I thought that sounded cool at the time). The async daemon is very stateful, and can only function while running on one node at a time — but it doesn’t have any existing infrastructure to help you manage that. What we know we need to do for the upcoming Marten v4.0 release is to provide “leader election” to make sure the async daemon is actively building projections on only one node, and can be activated or fail over to another node as needed to guarantee that exactly one node is active at all times.

From Wikipedia, Leader Election “is the process of designating a single process as the organizer of some task distributed among several computers.” There’s plenty of existing art to do this, but it’s not for the faint of heart. In the past, I tried to do this with FubuMVC using a custom implementation of the Bully Algorithm. Microsoft’s microservices pattern guidance has some .Net centric approaches to leader election. Microsoft’s new Dapr tool is supposed to support leader election some day.

From my previous experience, building out and especially testing custom election infrastructure was very difficult. As a far easier approach, I’ve used Advisory Locks in Postgresql in Jasper (I’m using the Sql Server equivalents as well) as what I think of as a “poor man’s leader election.”

An advisory lock in Postgresql is an arbitrary, application-managed lock on a named resource. Postgresql simply tracks these locks such that only one active client can hold a given lock at any one time, which makes them usable as a distributed lock. These locks can be held at either of two scopes (both sketched in code just below this list):

  1. The connection level, such that the lock, once obtained, is held as long as the database connection is open.
  2. The transaction level, such that a lock obtained within the course of one Postgresql transaction is held until the transaction is committed, rolled back, or the connection is lost.
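
To make those two flavors concrete, here’s a minimal sketch using Npgsql directly (this isn’t Marten or Jasper code, and the lock id is just an arbitrary number that your application chooses):

using System.Threading.Tasks;
using Npgsql;

public static class AdvisoryLockSamples
{
    // Connection scoped: held until pg_advisory_unlock() is called for the
    // same id, or until the connection is closed
    public static async Task<bool> TryGetConnectionScopedLock(NpgsqlConnection conn, long lockId)
    {
        using var cmd = new NpgsqlCommand("SELECT pg_try_advisory_lock(@id);", conn);
        cmd.Parameters.AddWithValue("id", lockId);
        return (bool) await cmd.ExecuteScalarAsync();
    }

    // Transaction scoped: held until the surrounding transaction is
    // committed or rolled back
    public static async Task<bool> TryGetTransactionScopedLock(NpgsqlConnection conn, NpgsqlTransaction tx, long lockId)
    {
        using var cmd = new NpgsqlCommand("SELECT pg_try_advisory_xact_lock(@id);", conn, tx);
        cmd.Parameters.AddWithValue("id", lockId);
        return (bool) await cmd.ExecuteScalarAsync();
    }
}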

As an example, Jasper’s “Durability Agent” is a constantly running process in Jasper applications that tries to read and process any messages persisted in a Postgresql or Sql Server database. Since you certainly don’t want a unique message to be processed by more than one node, the durability agent uses advisory locks to try to temporarily take sole ownership of replaying persisted messages with a workflow similar to this sequence diagram:

Transaction Scoped Advisory Lock Usage

That’s working well so far for Jasper, but in Marten v4.0, we want to use the connection scoped advisory lock for leader election of a long running process for the async daemon.

Sample Usage for Leader Election

Before you look at any of these code samples, just know that this is over-simplified to show the concept, isn’t in production, and would require a copious amount of error handling and logging to be production worthy.

For Marten v4.0, we’ll use the per-connection usage to ensure that the new version of the async daemon will only be running on one node (or at least the actual “leader” process that distributes and assigns work across other nodes if we do it well). The async daemon process itself is probably going to be a .Net Core IHostedService that runs in the background.

As just a demonstrator, I’ve pushed up a little project called AdvisoryLockSpike to GitHub just to show the conceptual usage. First let’s say that the actual worker bee process of the async daemon implements this interface:

public enum ProcessState
{
    Active,
    Inactive,
    Broken
}

public interface IActiveProcess : IDisposable
{
    Task<ProcessState> State();
    
    
    // The way I've done this before, the
    // running code does all its work using
    // the currently open connection or at
    // least checks the connection to "know"
    // that it still has the leadership role
    Task Start(NpgsqlConnection conn);
}

Next, we need something around that to actually deal with the mechanics of trying to obtain the global lock and starting or stopping the active process. Since that’s a background process within an application, I’m going to use the built in BackgroundService in .Net Core with this little class:

public class LeaderHostedService<T> : BackgroundService
    where T : IActiveProcess
{
    private readonly LeaderSettings<T> _settings;
    private readonly T _process;
    private NpgsqlConnection _connection;

    public LeaderHostedService(LeaderSettings<T> settings, T process)
    {
        _settings = settings;
        _process = process;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Don't try to start right off the bat
        await Task.Delay(_settings.FirstPollingTime, stoppingToken);
            
        _connection = new NpgsqlConnection(_settings.ConnectionString);
        await _connection.OpenAsync(stoppingToken);
        
        while (!stoppingToken.IsCancellationRequested)
        {
            var state = await _process.State();
            if (state != ProcessState.Active)
            {
                // If you can take the global lock, start
                // the process
                if (await _connection.TryGetGlobalLock(_settings.LockId, cancellation: stoppingToken))
                {
                    await _process.Start(_connection);
                }
            }

            // Start polling again
            await Task.Delay(_settings.OwnershipPollingTime, stoppingToken);
        }

        if (_connection.State != ConnectionState.Closed)
        {
            await _connection.DisposeAsync();
        }

    }
}

To fill in the blanks, the TryGetGlobalLock() method is an extension method helper to call the underlying pg_try_advisory_lock function in Postgresql to try to obtain a global advisory lock for the configured lock id. That extension method is shown below:

// Try to get a global lock with connection scoping
public static async Task<bool> TryGetGlobalLock(this DbConnection conn, int lockId, CancellationToken cancellation = default(CancellationToken))
{
    var c = await conn.CreateCommand("SELECT pg_try_advisory_lock(:id);")
        .With("id", lockId)
        .ExecuteScalarAsync(cancellation);

    return (bool) c;
}

Raw ADO.Net is so verbose and clumsy out of the box that I’ve built up a set of extension methods to streamline its usage, which is why the code above doesn’t look quite like out of the box ADO.Net.
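
For comparison, here’s roughly what that same call looks like with plain ADO.Net and none of those helpers (just a sketch, minus any error handling):

using System.Data.Common;
using System.Threading;
using System.Threading.Tasks;

public static class RawAdoNetSample
{
    public static async Task<bool> TryGetGlobalLockRaw(DbConnection conn, int lockId, CancellationToken cancellation = default)
    {
        using var command = conn.CreateCommand();
        command.CommandText = "SELECT pg_try_advisory_lock(:id);";

        var parameter = command.CreateParameter();
        parameter.ParameterName = "id";
        parameter.Value = lockId;
        command.Parameters.Add(parameter);

        var result = await command.ExecuteScalarAsync(cancellation);
        return (bool) result;
    }
}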

I’m generally a fan of strongly typed configuration, and .Net Core makes that easy now, so I’ll use this class to represent the configuration:

public class LeaderSettings<T> where T : IActiveProcess
{
    public TimeSpan OwnershipPollingTime { get; set; } = 5.Seconds();
    
    // It's a random number here so that if you spin
    // up multiple nodes at the same time, they won't
    // all collide trying to grab ownership at the exact
    // same time
    public TimeSpan FirstPollingTime { get; set; } 
        = new Random().Next(100, 3000).Milliseconds();
    
    // This would be something meaningful
    public int LockId { get; set; }
    
    public string ConnectionString { get; set; }
}
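
To round out the sketch, wiring this into a .Net Core application might look something like the code below. AsyncDaemonProcess is a hypothetical IActiveProcess implementation and the connection string name is arbitrary; the rest is just standard .Net Core service registration:

// Inside Startup.ConfigureServices() or a ConfigureServices() call on the HostBuilder
services.AddSingleton(new LeaderSettings<AsyncDaemonProcess>
{
    ConnectionString = Configuration.GetConnectionString("postgres"),

    // Needs to be a well-known number that's the same on every node
    LockId = 10001
});

services.AddSingleton<AsyncDaemonProcess>();

// LeaderHostedService<T> inherits from BackgroundService, so the runtime
// will start it up and shut it down with the application
services.AddHostedService<LeaderHostedService<AsyncDaemonProcess>>();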

In this approach, the background services will be constantly polling to try to take over as the async daemon if the async daemon is not active somewhere else. If the current async daemon node fails, the connection will drop and the global advisory lock is released and ready for another node to take over. We’ll see how this goes, but the early feedback from my own usage in Jasper and from other Marten contributors’ own projects is positive. With this approach, we hope to enable teams to use the async daemon on multi-node deployments of their application with just Marten out of the box and without needing any kind of sophisticated infrastructure for leader election.


Kicking off Marten v4 Development


If you’re not familiar with Marten, it’s a library that allows .Net developers to treat Postgresql as a Document Database and also as an Event Store. For a quick background on Marten, here’s my presentation on Marten from Dotnetconf 2018.

The Marten community is gearing up to work on the long awaited v4.0 release, which I think is going to be the most ambitious release of Marten since the original v1.0 release in September of 2016. Marten is blessed with a strong user community, and we’ve got a big backlog of issues, feature requests, and suggestions for improvements that have been building up for quite some time.

First, some links for anyone who wants to be a part of that conversation:

Now would be a perfect time to add any kind of feedback or requests for Marten. We do have an unofficial “F# advisory board” this time around, but the more the merrier on that side. There’ll be plenty of opportunities for folks to contribute, whether that’s in coding, experimenting with early alphas, or just being heard in the discussions online.

Now, on to the release plans:

Event Sourcing Improvements

The biggest area of focus in Marten v4.0 is likely going to be the event store functionality. At the risk of being (rightly) mocked, I’d sum up our goals here as “Make the Marten Event Store be Web Scale!”

There’s a lot of content on the aforementioned Marten v4.0 discussion, and also on an older GitHub discussion on Event Store improvements for v4.0. The highlights (I think) are:

  • Event metadata (a long running request)
  • Partitioning Support
  • Improved projection support, including the option to project event data to flat database tables
  • A lot of improvements (near rewrite) to the existing support for asynchronous projections including multi-tenancy support, running across multiple application nodes, performance optimizations, and projection rebuild improvements
  • New support for asynchronous projection builders using messaging as an alternative to the polling async daemon. The first cut of this is very likely to be based around Jasper (I’m using this as a way to push Jasper forward too of course) and either Rabbit MQ or Azure Service Bus. If things go well, I’d hope that that expands to at least Jasper + Kafka and maybe parallel support for NServiceBus and MassTransit.
  • Snapshotting
  • The ability to rebuild projections with zero downtime

Linq Support Improvements

I can’t overstate how complex building and maintaining a comprehensive Linq provider can be, just from the sheer number of usage permutations.

Marten v4.0 is going to include a pretty large overhaul of the Linq support. Along the way, we’re hopefully going to:

  • Rewrite the functionality for including related documents altogether, and finally support Include() on collections, which has been a big user request for 3+ years.
  • Improve the efficiency of the generated SQL
  • More variable data type behavior within the Linq support. You can read more about the mechanics on this issue. This also includes being quite a bit smarter about the Json serialization and how that varies for different .Net data types. As an example, I think these changes will allow Marten’s Linq support to finally deal with things like F# Discriminator Union types.
  • Improve the internals by making it more modular and a little more performant in how it handles marshaling data from the database to C# objects.
  • Improve Marten’s ability to query within child collections
  • More flexibility in event stream identifier strategies


Other Improvements

  • Document Metadata improvements, both exposing Marten’s metadata and user customizable metadata for things like user id or correlation identifiers
  • An option to stream JSON directly to .Net Stream objects like HTTP responses without going through serialization first or even a JSON string.
  • Optimizing the Partial update support to use Postgresql native operators when possible
  • Formal support for .Net Core integration through the generic HostBuilder mechanism


Planning

Right now it’s all about the discussions and approach, and that’s going pretty well. The first thing I’m personally going to jump into is a round of automated test performance work just to make everything go faster later.

The hope is that a subsequent V5 release might finally take Marten into supporting other database engines besides Postgresql, with Sql Server being the very obvious next step. We might take some preliminary steps with V4 to make the multi-database engine support be more feasible later with fewer breaking API changes, but don’t hold me to that.

I don’t have any idea about timing yet, sorry.

Jasper’s Efficient and Flexible Roslyn-Powered Execution Pipeline

You’ll need to pull the very latest code from the sample application linked to in this post because of course I found some bugs by dog-fooding while writing this:/

This is an update to a post from a couple years ago called Jasper’s Roslyn-Powered “Special Sauce.” You can also find more information about Lamar’s code generation and runtime compilation support from my Dynamic Runtime Code with Roslyn talk at London NDC 2019.

The more or less accepted understanding of a “framework” as opposed to a “library” is that a framework calls your application code (the Hollywood Principle). To be useful, frameworks should be dealing with cross cutting infrastructure concerns like security, data marshaling, or error handling. At some point, frameworks have to have some way to call your application code.

Since generics were introduced way back in .Net 2.0, many .Net frameworks have used what I call the “IHandler of T” approach where you are almost inevitably asked to implement a common layer supertype interface like this:

public interface IHandler<T>
{
    Task Handle(T message, IContext context);
}

From a framework author’s perspective, this is easy to implement, and most modern IoC frameworks have reasonably strong support for generic types (like Lamar generic types support for example). Off the top of my head, NServiceBus, MassTransit, and MediatR are examples of this approach (my recollection is that NServiceBus did this first and I distinctly remember Udi Dahan describing this years ago at an ALT.Net event).
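
A concrete handler in that style ends up looking something like this (hypothetical message and context types, just to show the shape):

public class CreateItemCommandHandler : IHandler<CreateItemCommand>
{
    public Task Handle(CreateItemCommand message, IContext context)
    {
        // All of your application code has to live inside the
        // framework-mandated interface method
        return Task.CompletedTask;
    }
}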

The other general approach I’ve seen — and what Jasper itself uses — is to have a framework call your code through some kind of dynamic code that adapts the signature of your code to the shape or interface that the framework needs. ASP.Net MVC Core Controller methods are an example of this approach. Here’s a sample controller method to demonstrate this:

public class WithResponseController : ControllerBase
{
    private readonly ICommandBus _bus;

    public WithResponseController(ICommandBus bus)
    {
        _bus = bus;
    } 
    
    // MVC Core calls this method, and uses the signature
    // and attributes like the [FromBody] to "know" how
    // to call this code at runtime
    [HttpPost("/items/create2")]
    public Task Create([FromBody] CreateItemCommand command)
    {
        // Using Jasper as a Mediator, and receive the
        // expected response from Jasper
        return _bus.Invoke(command);
    }
}

To add some more complexity to designing frameworks, there’s also the issue of how you can combine the basic handlers with some sort of Russian Doll middleware approach to handle cross cutting concerns. There’s a couple ways to handle this:

  • If a framework uses the “IHandler of T” approach, folks often try to use the decorator pattern to wrap the inner “IHandler” (see the sketch just after this list). This approach can lead to folks crashing and burning by getting way too complex with generic constraints. It can also lead to some pretty severe levels of object allocations and garbage collection thrashing from the sheer number of objects being created and discarded. To add more fuel to the fire, this approach can easily lead to absurdly large exception stack traces that can be very intimidating for the average developer to parse.
  • ASP.Net Core’s middleware approach through functional composition. This can also lead to some dramatically bad stack traces and similar issues with object allocation.
  • Do some runtime code generation to piece together the middleware. I believe that NServiceBus does this internally with its version of Behaviors.
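
To make the first bullet concrete, the decorator flavor of middleware looks something like this hypothetical logging wrapper around the IHandler<T> shape from above:

public class LoggingHandlerDecorator<T> : IHandler<T>
{
    private readonly IHandler<T> _inner;
    private readonly ILogger _logger;

    public LoggingHandlerDecorator(IHandler<T> inner, ILogger logger)
    {
        _inner = inner;
        _logger = logger;
    }

    public async Task Handle(T message, IContext context)
    {
        _logger.LogDebug("Starting to handle {MessageType}", typeof(T).Name);
        await _inner.Handle(message, context);
        _logger.LogDebug("Finished handling {MessageType}", typeof(T).Name);
    }
}

Every decorator in the chain is another object allocation and another frame in the stack trace for every single message, which is exactly the overhead described above.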

Lastly, most .Net web, service bus, or command execution frameworks use some sort of scoped IoC container (nested container in Lamar or StructureMap parlance) per request or command execution to deal with object scoping and cleanup or disposal in their execution pipelines.
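
In Lamar or StructureMap terms, that per-message scoping is roughly the following pattern somewhere inside the framework’s pipeline (a sketch, not any particular framework’s actual code):

// Somewhere inside the framework's message processing loop
using (var nested = container.GetNestedContainer())
{
    // Scoped services (DbContexts, units of work, etc.) are resolved from
    // the nested container, so they're unique to this message
    var handler = nested.GetInstance<IHandler<CreateItemCommand>>();

    await handler.Handle(message, context);
} // disposing the nested container disposes any scoped services it created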

Jasper’s Unique Approach

Jasper was originally conceived and planned as a replacement for the older FubuMVC project (though it’s gone off in different directions since then, of course). As a framework, some of FubuMVC’s design goals were to:

  1. Minimize the amount of framework artifacts (interfaces, base classes, attributes, et al) in your application code — and it generally succeeded at that
  2. Provide a very strong model for composable middleware (what we called Behaviors) in the runtime pipeline — which also succeeded in terms of capability, but at runtime it was extremely inefficient, object allocations were off the charts, the IoC integration was complex, and the exception stack traces when things went wrong were epically big.

With Jasper, the initial goals were to recreate the “cleanliness” of the application code and the flexibility of FubuMVC’s “Russian Doll” approach, but do so in a way that was much more efficient at runtime. And because this actually matters quite a bit, make sure that any exceptions thrown out of application code while running within Jasper have minimal exception stack traces for easier troubleshooting.

Roslyn introduced support for compiling C# code at runtime. Some time in 2015 I was in the office with some of the old FubuMVC core contributors and drew up on the whiteboard a new approach where we’d generate C# code at application bootstrapping time to weave in both the calls to the application code and designated middleware.

Later on, I expanded that vision to try to also encompass the object construction and cleanup functionality of an IoC container in the same generated code. The result of that initial envisioning has become the combination of Jasper and Lamar, where Jasper actually uses Lamar’s registration model to try to generate and inline the functionality that would normally be done at runtime through an IoC container. The theory here being that the fastest IoC container is no IoC container.

Alright, to make this concrete, let’s see how this plays out in real usage. In my last post, Introducing Jasper as an In Process Command Bus for .Net, I demonstrated a small message handler in a sample Jasper application with this code (I’ve stripped out the original comments to make it smaller):

public class ItemHandler
{
    [Transactional]
    public static ItemCreated Handle(
        CreateItemCommand command,
        ItemsDbContext db)
    {
        var item = new Item
        {
            Name = command.Name
        };

        db.Items.Add(item);

        return new ItemCreated
        {
            Id = item.Id
        };
    }
}

The code above is meant to:

  1. Create a new Item entity based on the incoming CreateItemCommand
  2. Persist that new Item with Entity Framework Core
  3. Publish a new ItemCreated event message to be handled somewhere else (how that happens isn’t terribly important for this blog post)

And lastly, the [Transactional] attribute tells Jasper to apply its transactional middleware and outbox support to the message handler, such that a single database commit will save both the new Item entity and do the “store” part of a “store and forward” operation to send out the cascading ItemCreated event.

Internally, Jasper is going to generate a new class that inherits from this base class (slightly simplified):

public abstract class MessageHandler
{
    public abstract Task Handle(
        IMessageContext context, 
        CancellationToken cancellation
    );
}

For that ItemHandler class shown up above in the sample application, Jasper is generating and compiling through Roslyn this (butt ugly, but remember that it’s generated) class:

    public class InMemoryMediator_Items_CreateItemCommand : Jasper.Runtime.Handlers.MessageHandler
    {
        private readonly Microsoft.EntityFrameworkCore.DbContextOptions _dbContextOptions;

        public InMemoryMediator_Items_CreateItemCommand(Microsoft.EntityFrameworkCore.DbContextOptions dbContextOptions)
        {
            _dbContextOptions = dbContextOptions;
        }



        public override async Task Handle(Jasper.IMessageContext context, System.Threading.CancellationToken cancellation)
        {
            var createItemCommand = (InMemoryMediator.Items.CreateItemCommand)context.Envelope.Message;
            using (var itemsDbContext = new InMemoryMediator.Items.ItemsDbContext(_dbContextOptions))
            {
                // Enroll the DbContext & IMessagingContext in the outgoing Jasper outbox transaction
                await Jasper.Persistence.EntityFrameworkCore.JasperEnvelopeEntityFrameworkCoreExtensions.EnlistInTransaction(context, itemsDbContext);
                var itemCreated = InMemoryMediator.Items.ItemHandler.Handle(createItemCommand, itemsDbContext);
                // Outgoing, cascaded message
                await context.EnqueueCascading(itemCreated);
                // Added by EF Core Transaction Middleware
                var result_of_SaveChangesAsync = await itemsDbContext.SaveChangesAsync(cancellation);
            }

        }

    }

If you want, you can see the raw code at any time by executing the dotnet run -- codegen command from the root of the sample project.

So here’s what I’m claiming is the advantage of Jasper’s approach:

  • Allows your application code to be “clean” of framework artifacts and much more decoupled from Jasper than you can achieve with many other application frameworks like MVC Core
  • Using the diagnostic commands to dump out the generated source code in the application, Jasper can tell you exactly how it’s handling a message at runtime
  • By just generating Poor Man’s Dependency Injection code to build up the EF Core dependency and also to deal with disposing it later, the generated code eliminates any need to use the IoC container at runtime. And even for the very fastest IoC container in the world — and Lamar isn’t a slouch on the performance side of things — pure dependency injection is faster. Do note that Jasper + Lamar can’t do this for every possible message handler and will have to revert to a scoped container per message with service location in some circumstances, usually because of a scoped or transient Lambda service registration or internal types.
  • The generated code minimizes the number of object allocations compared to a typical .Net framework that depends on adapter types
  • Jasper will allow you to use either synchronous or asynchronous handlers as appropriate for your use case, so no wasted keystrokes typing out return Task.CompletedTask; littering up your code.
  • The stack traces are pretty clean and you’ll see effectively nothing related to Jasper in logged exceptions except for the outermost frame where the generated MessageHandler.Handle() method is called.
  • Some other .Net frameworks try to do what Jasper does conceptually by trying to generate Expression trees and compile those down to Func objects. I’ve certainly done that in other tools (Lamar, StructureMap, Marten), but that’s rarified air that most developers can’t deal with. It’s also batshit insane for async heavy code like you’ll inevitably hit with anything that involves IO these days. My theory, yet to be proven, is that by relying on C#, Jasper’s middleware approach will be much more approachable to whatever community Jasper attracts down the line.


For some follow up reading if you’re interested:

Introducing Jasper as an In Process Command Bus for .Net

A couple weeks ago I wrote a blog post called If you want your OSS project to be successful… about trying to succeed with open source development efforts. One of the things I said was “don’t go dark” when you’re working on an OSS project. Not only did I go “dark” on Jasper for quite awhile, I finally rolled out its 1.0 release during the worst global pandemic in a century. So all told, Jasper is by no means an exemplary project model for anyone to follow who’s trying to succeed with an OSS project.

This sample application is also explained and demonstrated in the documentation page Jasper as a Mediator.

Jasper is a new open source tool that can be used as an in process “command bus” inside of .Net Core 3 applications. Used locally, Jasper can provide a superset of the “mediator” functionality popularized by MediatR that many folks like using within ASP.Net MVC Core applications to simplify controller code by offloading most of the processing to separate command handlers. Jasper certainly supports that functionality, but also adds rich options for asynchronously processing commands with built in resiliency mechanisms.

Part of the reason why Jasper went cold was waiting for .Net Core 3.0 to be released. With the advent of .Net Core 3.0, Jasper was somewhat re-wired to support the new generic HostBuilder for bootstrapping and configuration. With this model of bootstrapping, Jasper can easily be integrated into any kind of .Net Core application (MVC Core application, web api, windows service, console app, “worker” app) that uses the HostBuilder.

Let’s jump into seeing how Jasper could be integrated into a .Net Core Web API system. All the sample code I’m showing here is on GitHub in the “InMemoryMediator” project. InMemoryMediator uses EF Core with Sql Server as its backing persistence. Additionally, this sample shows off Jasper’s support for the “Outbox” pattern for reliable messaging without having to resort to distributed transactions.

To get started, I generated a project with the dotnet new webapi template. From there, I added some extra Nuget dependencies:

  1. Microsoft.EntityFrameworkCore.SqlServer — because we’re going to use EF Core with Sql Server as the backing persistence for this service
  2. Jasper — this is the core library, and all that you would need to use Jasper as an in process command bus
  3. Jasper.Persistence.EntityFrameworkCore — extension library to add Jasper’s “Outbox” and transactional support to EF Core
  4. Jasper.Persistence.SqlServer — extension library to add persistence for the “Outbox” support
  5. Swashbuckle.AspNetCore — just to add Swagger support

Your First Jasper Handler

Before we get into bootstrapping, let’s just start with how to build a Jasper command handler and how that would integrate with an MVC Core Controller. Keeping to a very simple problem domain, let’s say that we’re capturing, creating, and tracking new Item entities like this:

public class Item
{
    public string Name { get; set; }
    public Guid Id { get; set; }
}

So let’s build a simple Jasper command handler that would process a CreateItemCommand message, persist a new Item entity, and then raise an ItemCreated event message that would be handled by Jasper as well, but asynchronously somewhere off to the side on a different thread. Lastly, we want things to be reliable, so we’re going to introduce Jasper’s integration of Entity Framework Core for “Outbox” support for the event messages being raised at the same time we create new Item entities.

First though, to put things in context, we’re trying to get to the point where our controller classes mostly just delegate to Jasper through its ICommandBus interface and look like this:

public class UseJasperAsMediatorController : ControllerBase
{
    private readonly ICommandBus _bus;

    public UseJasperAsMediatorController(ICommandBus bus)
    {
        _bus = bus;
    }

    [HttpPost("/items/create")]
    public Task Create([FromBody] CreateItemCommand command)
    {
        // Using Jasper as a Mediator
        return _bus.Invoke(command);
    }
}

You can find a lot more information about what Jasper can do as a local command bus in the project documentation.

When using Jasper as a mediator, the controller methods become strictly about the mechanics of reading and writing data to and from the HTTP protocol. The real functionality is now in the Jasper command handler for the CreateItemCommand message, as coded with this Jasper Handler class:

public class ItemHandler
{
    // This attribute applies Jasper's transactional
    // middleware
    [Transactional]
    public static ItemCreated Handle(
        // This would be the message
        CreateItemCommand command,

        // Any other arguments are assumed
        // to be service dependencies
        ItemsDbContext db)
    {
        // Create a new Item entity
        var item = new Item
        {
            Name = command.Name
        };

        // Add the item to the current
        // DbContext unit of work
        db.Items.Add(item);

        // This event being returned
        // by the handler will be automatically sent
        // out as a "cascading" message
        return new ItemCreated
        {
            Id = item.Id
        };
    }
}

You’ll probably notice that there’s no interface or mandatory base class usage in the code up above. Similar to MVC Core, Jasper will auto-discover the handler classes and message handling methods from your code through type scanning. Unlike MVC Core and every other service bus kind of tool in .Net I’m aware of, Jasper only depends on naming conventions rather than base classes or interfaces.

The only bit of framework “stuff” at all in the code above is the [Transactional] attribute that decorates the handler class. That adds Jasper’s own middleware for transaction and outbox support around the message handling to just that message. At runtime, when Jasper handles the CreateItemCommand in that handler code up above, it:

  • Sets up an “outbox” transaction with the EF Core ItemsDbContext service being passed into the Handle() method as a parameter
  • Takes the ItemCreated message that “cascades” from the handler method and persists that message with ItemsDbContext so that both the outgoing message and the new Item entity are persisted in the same Sql Server transaction
  • Commits the EF Core unit of work by calling ItemsDbContext.SaveChangesAsync()
  • Assuming that the transaction succeeds, Jasper kicks the new ItemCreated message into its internal sending loop to speed it on its way. That outgoing event message could be handled locally in in-memory queues or sent out via external transports like Rabbit MQ or Azure Service Bus

If you’re interested in what the code above would look like without any of Jasper’s middleware or cascading message conventions, see the section near the bottom of this post called “Do it All Explicitly Controller”.

So that’s the MVC Controller and Jasper command handler, now let’s move on to integrating Jasper into the application.

Bootstrapping and Configuration

This is just an ASP.Net Core application, so you’ll probably be familiar with the generated Program.Main() entry point. To completely utilize Jasper’s extended command line support (really Oakton.AspNetCore), I’ll make some small edits to the out of the box generated file:

public class Program
{
    // Change the return type to Task to communicate
    // success/failure codes
    public static Task Main(string[] args)
    {
        return CreateHostBuilder(args)

            // This replaces Build().Start() from the default
            // dotnet new templates
            .RunJasper(args);
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)

            // You can do the Jasper configuration inline with a 
            // Lambda, but here I've centralized the Jasper
            // configuration into a separate class
            .UseJasper<JasperConfig>()

            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}

This isn’t mandatory, but there’s just enough Jasper configuration for this project with the outbox support that I opted to put the Jasper configuration in a new file called JasperConfig that inherits from JasperOptions:

public class JasperConfig : JasperOptions
{
    public override void Configure(IHostEnvironment hosting, IConfiguration config)
    {
        if (hosting.IsDevelopment())
        {
            // In development mode, we're just going to have the message persistence
            // schema objects dropped and rebuilt on app startup so you're
            // always starting from a clean slate
            Advanced.StorageProvisioning = StorageProvisioning.Rebuild;
        }

        // Just the normal work to get the connection string out of
        // application configuration
        var connectionString = config.GetConnectionString("sqlserver");

        // Setting up Sql Server-backed message persistence
        // This requires a reference to Jasper.Persistence.SqlServer
        Extensions.PersistMessagesWithSqlServer(connectionString);

        // Set up Entity Framework Core as the support
        // for Jasper's transactional middleware
        Extensions.UseEntityFrameworkCorePersistence();

        // Register the EF Core DbContext
        // You can register IoC services in this file in addition
        // to any kind of Startup.ConfigureServices() method,
        // but you probably only want to do it in one place or the 
        // other and not both.
        Services.AddDbContext<ItemsDbContext>(
            x => x.UseSqlServer(connectionString),

            // This is important! Using Singleton scoping
            // of the options allows Jasper + Lamar to significantly
            // optimize the runtime pipeline of the handlers that
            // use this DbContext type
            optionsLifetime:ServiceLifetime.Singleton);
    }
}

Returning a Response to the HTTP Request

In the UseJasperAsMediatorController controller, we just passed the command into Jasper and let MVC return an HTTP status code 200 with no other context. If instead, we wanted to send down the ItemCreated message as a response to the HTTP caller, we could change the controller code to this:

public class WithResponseController : ControllerBase
{
    private readonly ICommandBus _bus;

    public WithResponseController(ICommandBus bus)
    {
        _bus = bus;
    }

    [HttpPost("/items/create2")]
    public Task<ItemCreated> Create([FromBody] CreateItemCommand command)
    {
        // Using Jasper as a Mediator, and receive the
        // expected response from Jasper
        return _bus.Invoke<ItemCreated>(command);
    }
}

“Do it All Explicitly Controller”

Just for a comparison, here’s the CreateItemCommand workflow implemented inline in a controller action with explicit code to handle the Jasper “Outbox” support:

// This controller does all the transactional work and business
// logic all by itself
public class DoItAllMyselfItemController : ControllerBase
{
    private readonly IMessageContext _messaging;
    private readonly ItemsDbContext _db;

    public DoItAllMyselfItemController(IMessageContext messaging, ItemsDbContext db)
    {
        _messaging = messaging;
        _db = db;
    }

    [HttpPost("/items/create3")]
    public async Task Create([FromBody] CreateItemCommand command)
    {
        // Start the "Outbox" transaction
        await _messaging.EnlistInTransaction(_db);

        // Create a new Item entity
        var item = new Item
        {
            Name = command.Name
        };

        // Add the item to the current
        // DbContext unit of work
        _db.Items.Add(item);

        // Publish an event to anyone
        // who cares that a new Item has
        // been created
        var @event = new ItemCreated
        {
            Id = item.Id
        };

        // Because the message context is enlisted in an
        // "outbox" transaction, these outgoing messages are
        // held until the ongoing transaction completes
        await _messaging.Send(@event);

        // Commit the unit of work. This will persist
        // both the Item entity we created above, and
        // also a Jasper Envelope for the outgoing
        // ItemCreated message
        await _db.SaveChangesAsync();

        // After the DbContext transaction succeeds, kick out
        // the persisted messages in the context "outbox"
        await _messaging.SendAllQueuedOutgoingMessages();
    }
}

As a huge lesson learned from Jasper’s predecessor project, it’s always possible to easily bypass any kind of Jasper conventional “magic” and write explicit code as necessary.

There’s a lot more to say about Jasper and you can find a *lot* more information on its documentation website. I’ll be back sometime soon with more examples of Jasper, with probably some focus on functionality that goes beyond other mediator tools.

In the next post, I’ll talk about Jasper’s runtime execution pipeline and how it’s very different than other .Net tools with similar functionality (hint, it involves a boatload less generics magic than anything else).


Using Alba for Integration Testing ASP.Net Core Web Services

There’s a video of David Giard and I talking about Alba at CodeMash a couple years ago on YouTube if you’re interested.

Alba is a helper library for doing automated testing against ASP.Net Core web services. Alba started out as a subsystem of the defunct FubuMVC project for HTTP contract testing, but was later extracted into its own project distributed via Nuget, ported to ASP.Net Core, and finally re-wired to incorporate TestServer internally.

There are certainly other advantages, but I think the biggest selling point of adopting ASP.Net Core is the ability to run HTTP requests through your system completely in memory without any other kind of external web server setup. In my experience, this has made automated integration testing of ASP.Net Core applications far simpler compared to older versions of ASP.Net.

Years ago we could use FubuMVC’s OWIN support, with or without Katana, to perform the same kind of automated testing that I’m demonstrating here, but hardly anyone ever used that, and what I’m showing here is significantly easier to use.

Why not just use TestServer you ask? You certainly can, but Alba provides a lot of helper functionality around exercising HTTP web services that will make your tests much more readable and remove a lot of the repetitive coding you’d have to do with just using TestServer.
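
For a rough comparison, here’s about what the “shorthand” Alba specification later in this post would take with TestServer and HttpClient directly (a sketch that assumes System.Text.Json and the Startup class from the sample application):

// Spinning up the application with TestServer by hand
var server = new TestServer(new WebHostBuilder().UseStartup<Startup>());
var client = server.CreateClient();

// Make the request and deal with the raw JSON yourself
var json = await client.GetStringAsync("/math/3/4");
var answers = JsonSerializer.Deserialize<Answers>(
    json,
    new JsonSerializerOptions { PropertyNameCaseInsensitive = true });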

To demonstrate Alba, I published a small solution called AlbaIntegrationTesting to GitHub this morning.

Starting with a small web services application generated with the dotnet new webapi template, I added a small endpoint that we’ll later write specifications against:

public class Answers
{
    public int Product { get; set; }
    public int Sum { get; set; }
}

public class AdditionController : ControllerBase
{
    [HttpGet("math/{one}/{two}")]
    public Answers DoMath(int one, int two)
    {
        return new Answers
        {
            Product = one * two,
            Sum = one + two
        };
    }
}

Next, I added a new test project using xUnit.Net with the dotnet new xunit template. With the new skeleton testing project I:

  • Added a reference to the latest Alba Nuget
  • My preference for testing assertions in .Net is Shouldly, so I added a Nuget reference for that as well

Since bootstrapping an ASP.Net Core system is non-trivial in terms of performance, I utilized xUnit.Net’s support for sharing context between tests so that the application only has to be spun up once in the test suite. The first step of that is to create a new “fixture” class that holds the ASP.Net Core system in memory like so:

public class AppFixture : IDisposable
{
    public AppFixture()
    {
        // Use the application configuration the way that it is in the real application
        // project
        var builder = Program.CreateHostBuilder(new string[0])
            
            // You may need to do this for any static 
            // content or files in the main application including
            // appsettings.json files
            
            // DirectoryFinder is an Alba helper
            .UseContentRoot(DirectoryFinder.FindParallelFolder("WebApplication")) 
            
            // Override the hosting environment to "Testing"
            .UseEnvironment("Testing"); 

        // This is the Alba scenario wrapper around
        // TestServer and an active .Net Core IHost
        System = new SystemUnderTest(builder);

        // There's also a BeforeEachAsync() signature
        System.BeforeEach(httpContext =>
        {
            // Take any kind of setup action before
            // each simulated HTTP request
            
            // In this case, I'm setting a fake JWT token on each request
            // as a demonstration
            httpContext.Request.Headers["Authorization"] = $"Bearer {generateToken()}";
        });

        System.AfterEach(httpContext =>
        {
            // Take any kind of teardown action after
            // each simulated HTTP request
        });

    }

    private string generateToken()
    {
        // In a current project, we implement this method
        // to create a valid JWT token with the claims that
        // the web services require
        return "Fake";
    }

    public SystemUnderTest System { get; }

    public void Dispose()
    {
        System?.Dispose();
    }
}

This new AppFixture class will only be built once by xUnit.Net and shared between test unit classes using the IClassFixture<AppFixture> interface.

Do note that you can express some actions in Alba to take immediately before or after executing an HTTP request through your system for typical setup or teardown operations. In one of my current projects, we exploit this capability to add a pre-canned JWT to the request headers that’s required by our system. In our case, that’s allowing us to test the security integration and the multi-tenancy support that depends on the JWT claims at the same time we’re exercising controller actions and the associated database access underneath it. If you’re familiar with the test pyramid idea, these tests are the middle layers of our testing pyramid.

To simplify the xUnit.Net usage throughout the testing suite, I like to introduce a base class that inevitably accumulates utility methods for running Alba or common database setup and teardown functions. I tend to call this class IntegrationContext and here’s a sample:

public abstract class IntegrationContext : IClassFixture<AppFixture>
{
    protected IntegrationContext(AppFixture fixture)
    {
        Fixture = fixture;
    }

    public AppFixture Fixture { get; }

    /// <summary>
    /// Runs Alba HTTP scenarios through your ASP.Net Core system
    /// </summary>
    /// <param name="configure"></param>
    /// <returns></returns>
    protected Task<IScenarioResult> Scenario(Action<Scenario> configure)
    {
        return Fixture.System.Scenario(configure);
    }

    // The Alba system
    protected SystemUnderTest System => Fixture.System;

    // Just a convenience because you use it pretty often
    // in tests to get at application services
    protected IServiceProvider Services => Fixture.System.Services;

}

Now with the AppFixture and IntegrationContext in place, let’s write some specifications against the AdditionController endpoint shown earlier in this post:

public class DoMathSpecs : IntegrationContext
{
    public DoMathSpecs(AppFixture fixture) : base(fixture)
    {
    }

    // This specification uses the shorthand helpers in Alba
    // that's useful when you really only care about the data
    // going in or out of the HTTP endpoint
    [Fact]
    public async Task do_some_math_adds_and_multiples_shorthand()
    {
        var answers = await System.GetAsJson<Answers>("/math/3/4");
        
        answers.Sum.ShouldBe(7);
        answers.Product.ShouldBe(12);
    }
    
    // This specification shows the longhand way of executing an
    // Alba scenario and using some of its declarative assertions
    // about the expected HTTP response
    [Fact]
    public async Task do_some_math_adds_and_multiples_longhand()
    {
        var result = await Scenario(x =>
        {
            x.Get.Url("/math/3/4");
            x.ContentTypeShouldBe("application/json; charset=utf-8");
        });

        var answers = result.ResponseBody.ReadAsJson<Answers>();
        
        
        answers.Sum.ShouldBe(7);
        answers.Product.ShouldBe(12);
    }
}

Alba can do a lot more to work with the HTTP requests and responses, but I hope this gives you a quick introduction to using Alba for integration testing.

Using Oakton for Development-Time Commands in .Net Core Applications

All of the sample code in this blog post is on GitHub in the OaktonDevelopmentCommands project.

Last year I released the Oakton.AspNetCore library that provides an expanded command line experience for ASP.Net Core applications (and environment tests too!) that adds additional command line flags for the basic dotnet run behavior. Oakton.AspNetCore also gives you the ability to embed completely different named commands directly into your application — either commands from your own application or commands that Oakton.AspNetCore pulls in from Nuget libraries.

To make this concrete, I started a new sample project on GitHub with the dotnet new webapi template. To add the new command line experience, I added a Nuget reference to Oakton.AspNetCore and modified the Program.Main() method to this:

// I changed the return type to Task
public static Task Main(string[] args)
{
    return CreateHostBuilder(args)
        // This extension method makes Oakton the active
        // command line parser and executor
        .RunOaktonCommands(args);
}

To show the ability to add commands from an external library, I also swapped out the built in DI container with Lamar, and added a reference to the Lamar.Diagnostics Nuget to expose Lamar’s diagnostics reports via the command line.

Just to show the Lamar integration, I added just the one line of code to the host builder configuration you can see below:

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .UseLamar() // Overriding the IoC container to use Lamar
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });

Now, switching to the actual command line to see the results of all of that, you can see Oakton’s built in command line help by typing the command dotnet run --?, which gives this output:

  ------------------------------------------------------------------------------
    Available commands:
  ------------------------------------------------------------------------------
         check-env -> Execute all environment checks against the application
          describe -> Writes out a description of your running application to either the console or a file
    lamar-scanning -> Runs Lamar's type scanning diagnostics
    lamar-services -> List all the registered Lamar services
    lamar-validate -> Runs all the Lamar container validations
               run -> Runs the configured AspNetCore application
          sayhello -> I'm a simple command that just starts up the app and says hello
  ------------------------------------------------------------------------------

A couple notes about the output above:

  • With the dotnet cli mechanics, the basic usage is dotnet [command] [optional flags to dotnet itself] -- [arguments to your application]. The double dashes marks the boundaries between arguments and flags meant for dotnet itself and arguments or flags for your application. So as an example, to run your application compiled as “Release” and using the “Testing” environment name for your .Net Core application, the command would be dotnet run --configuration Release -- --environment Testing.
  • The check-env, describe, and run commands come in from the base Oakton.AspNetCore library. The run command is the default, so dotnet run actually delegates to the run command.
  • All the lamar-******* commands come from Lamar.Diagnostics because Oakton.AspNetCore can happily find and apply commands from other assemblies in the project.
  • We’ll build out sayhello later in this post

Assuming that you are a Lamar (or a StructureMap) user, the first step to unravel most IoC configuration problems is the old WhatDoIHave() method to see what the service registrations are. To get at that data faster, you can use the dotnet run -- lamar-services command just to dump out the WhatDoIHave() output to the console or a text file.

However, adding Lamar.Diagnostics to your application adds some dependencies you may not want to be deployed. With a little help from the .Net Core SDK project system, we can just tell .Net to only use Lamar.Diagnostics at development time by using the <PrivateAssets> tag in the csproj file like this:

<PackageReference Include="Lamar.Diagnostics" Version="1.1.5">
    <PrivateAssets>all</PrivateAssets>
</PackageReference>


I don’t know how to do that without breaking into the csproj file, but once you do, the Lamar.Diagnostics assembly will not be deployed if you’re using dotnet publish to bundle up your application for deployment.

The “AspNetCore” naming is unfortunately a misnomer, because as of .Net Core 3.* the Oakton extension works for any project configured and bootstrapped with the new generic HostBuilder (purely console applications, worker applications, or any kind of web application).

Now, to add a custom command directly into the application without polluting the deployed application, we’re just using some conditional compilation as shown in this super simple example:

// This is necessary to tell Oakton
// to search this assembly for Oakton commands
[assembly:Oakton.OaktonCommandAssembly]

namespace OaktonDevelopmentCommands
{
    // The conditional compilation here just keeps this command from
    // being present in the Release build of the application
#if DEBUG
    // This is also an OaktonAsyncCommand if you need to 
    // call async APIs
    [Description("I'm a simple command that just starts up the app and says hello")]
    public class SayHelloCommand : OaktonCommand<NetCoreInput>
    {
        public override bool Execute(NetCoreInput input)
        {
            // Super cheesy, just starting up the application
            // and shutting it right down
            using (var host = input.BuildHost())
            {
                // You do have access to the host's underlying
                // IoC provider, and hence to any application service
                // including the compiled IConfiguration as well
                
                Console.WriteLine("Hey, I can start up the application!");
            }

            // Gotta return true to let Oakton know that the command succeeded
            // This is important if you're using commands that need to report
            // success or failure to the command line.
            return true;
        }
    }
#endif
}

It’s hokey, but the command up above will only be compiled into your application if you are compiling as “Debug” — as you generally do when you’re working locally or running tests. When you deploy as a “Release” build, this command won’t be part of the compiled executable. As silly as it looks, this is proving very useful in one of my client projects right now.

If you want your OSS project to be successful…

Don’t take any of this too seriously because I wrote this really fast as I was procrastinating instead of working with some ugly legacy code today.

First off, I don’t know why the hell you’d pay attention to me because I’m only middling successful at OSS in terms of the effort I’ve put in over the years to whatever the benefits are. That being said, I do know quite a bit about what not to do, and that’s valuable.

First off, have the right idea and build something that solves some kind of common problem. It helps tremendously if you’re going into some kind of problem area without a lot of existing solutions. My aging StructureMap project was the very first production ready IoC tool for .Net. I seriously doubt it would have been terribly successful if it hadn’t been for that fact. Likewise, Marten has been successful because the idea of Postgresql as a Document Database just makes sense because of Postgresql’s unusually strong JSON support. I can say with some authority that the project concept was appealing because, a couple years ago, there were 3-4 other nascent OSS projects trying to do exactly what Marten became.

If you’re trying to go build a better mousetrap and walk into a problem domain with existing solutions, you just need to have some kind of compelling reason for folks to switch over to your tool. My example there would be Serilog. NLog or Log4Net are fine for what they are, but Serilog’s structured logging approach is different and provided some value beyond what the older logging alternatives did.

Try not to compete against some kind of tool from Microsoft itself. That’s just a losing proposition 95% of the time. And maybe realize upfront that the price of a significant success in .Net OSS means that Microsoft will eventually write their own version of whatever it is you’ve done.

Oh, and don’t do OSS at least in the .Net world if you’re trying to increase your professional visibility. I think it’s the other way around, you increase your visibility through talks, blog posts, and articles first. Then your OSS work will likely be more successful with more community visibility and credibility. Other folks might disagree with that, but that’s how I see it and what I’ve experienced myself both in good and bad ways.

If at all possible, dog-food your OSS project on a real work project. I can’t overstate how valuable that is to see how your tool really functions and to ensure it actually solves problems. The feedback cycle between finding a problem and getting it fixed is substantially faster when you’re dog-fooding your OSS project in a codebase where you have control versus getting GitHub issues from other people in code you’re not privy to. Moreover, it’s incredibly useful to see your colleagues using your OSS tool to find out how other folks reason about your tool’s API, try to apply it, and find out where you have usability issues. Lastly, you’re much more likely to have a good understanding of how to solve a technical problem that you actually have in your own projects.

All that being said about dog-fooding however, I’ve frequently used OSS work to teach myself new techniques, design ideas, and technologies.

Be as openly transparent about the project as you can early on and try to attract other contributors. Don’t go dark on a project or strive for perfection before releasing your project or starting to talk about it in public. I partially blame “going dark” prior to the v1.0 release for FubuMVC being such a colossal OSS failure for me. In 2011 I had a pretty packed room at CodeMash for a half day workshop on the very early FubuMVC, but I quit blogging or talking about it much for a couple years after that. When the FubuMVC team got a chance to present again at CodeMash 2013 for the big v1.0 rollout, there were about a dozen people in the room and I was absolutely crushed.

Or just as valuable, try to get yourself some early adopters to get feedback and more ideas about the direction of the project. Not in terms of downloads and usage per se, but absolutely in terms of the technical achievement and community, Marten is by far and away the most successful OSS project I’ve had anything to do with over the years. A lot of this I would attribute to just having the right concept for a project, but much of that I believe is in how much effort I put into Marten very early on in publicizing it and blogging the project progress early. You know that saying that “with enough eyeballs, all bugs are shallow?” I have no idea if that’s actually true, but I can absolutely state that plenty of early user feedback and involvement has a magical ability to improve the usability of your OSS project.

Contributors come in all different types, and you want as many as you can attract. An eventual core team of contributors and project leaders is invaluable. Pull requests to implement functionality or fix bugs are certainly valuable. Someone who merely watches your project and occasionally points out usability problems or unclear documentation is absolutely helpful. People who take enough time to write useful GitHub issues help projects get better. Heck, folks who make itty bitty pull requests to improve the wording of the documentation help quite a bit, especially when there are a bunch of those.

This deserves its own full-fledged conversation, but make sure there’s enough README documentation and build automation to get new contributors up and running fast in the codebase. If you’re living in the .Net space, make sure that folks can just open up Visual Studio.Net (or better IDE tools like JetBrains Rider;) and go. If there is any other kind of environment setup, get that scripted out so that a simple “build” or “./build.sh” command of some sort stands up whatever they need to run your code and especially the tests. Docker and docker-compose are turning out to be hugely helpful for this. This might force you to give up your favorite build automation tooling in favor of something mainstream (sorry Rake, I still love you). And don’t be like MS teams used to be and use all kinds of weird Visual Studio.Net project extensions that aren’t on most folks’ boxes.

Documentation is unfortunately a big deal. It’s not just a matter of having documentation though, it’s also vital to make that documentation easy to edit and keep up to date because your project will change over time. Years ago I was frequently criticized online for StructureMap having essentially no documentation. As part of the huge (at the time) StructureMap 2.5 release I went and wrote up a ton of documentation with code samples in a big static HTML website. The 2.5 release had some annoying fluent interface APIs that nobody liked (including me), so I started introducing some streamlined API usage in 2.6 that still exists in the latest StructureMap (and Lamar) — which was great, except now the documentation was all out of date and people really laid into me online about that for years.

That horrendous experience with the StructureMap documentation led to me now having some personal rules for how I approach the technical documentation on ongoing OSS projects:

  1. Make the project documentation website quick to re-publish, because it’s going to change over time. Your best hope for keeping documentation up to date is to make it as painless as possible to update and publish documentation changes.
  2. Just give up and author your documentation in Markdown one way or another, because most developers understand it at this point.
  3. Embed code samples in the documentation in some way that lets them be kept up to date (there’s a sketch of the idea after the next paragraph).
  4. As silly as it may sound, use some kind of “Edit me on GitHub” link on each page of your documentation website that lets random people quickly whip up little pull requests to improve the documentation. You have no idea how helpful that’s been to me over the past 3-4 years.
  5. Make it easy to update and preview the documentation website locally. That helps tremendously for other folks making little contributions.

There are other solutions for the living documentation idea, but all of the projects I’m involved with use the Storyteller “stdocs” tooling (which I of course wrote over years of dog-fooding:)) to manage “living documentation” websites.
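To make the embedded code sample idea concrete: the general notion (the marker names below are purely illustrative, not the actual stdocs syntax) is to tag real, compiled code inside the test suite and have the documentation tooling pull the tagged region into the published page, so a sample that stops compiling breaks the build instead of quietly rotting in the docs:

// A stand-in type just for this illustration
public class Widget
{
    public Widget(string color) { Color = color; }
    public string Color { get; }
    public void Spin() { }
}

public class WidgetDocsSamples
{
    public void configuring_a_widget()
    {
        // SAMPLE: configuring-a-widget   <-- a marker comment the docs tooling scans for
        var widget = new Widget("blue");
        widget.Spin();
        // ENDSAMPLE
    }
}

The documentation page then references the sample by name rather than copying the code, so re-publishing the site always picks up whatever that sample looks like right now.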

Try to be responsive to user problems and questions. This is a “do as I say, not as I do” kind of thing, because I can frequently be slow to acknowledge GitHub issues or questions. Just letting folks know that “I saw this” can buy you some goodwill. Try to be cool in the face of folks being assholes to you online. Remember that interactions generally come off worse online than they would in real life with body language cues, and also remember that you’re usually dealing with people when they’re already frustrated.

Assuming that your project is on GitHub, I highly recommend Gitter as a way to communicate with users. I find the two way communication a lot faster to help unwind user problems compared to working asynchronously in GitHub issues.


Fast Build, Slow Build, and the Testing Pyramid

At Calavista we’ve been helping a couple of our clients use Selenium for automated testing of web applications. For one client we’re slowly introducing a slightly different, but still .Net-focused technical stack that allows for much more effective test automation without having to resort to quite so many Selenium tests. For another client we’re trying to help them optimize the execution time of their large Selenium test suite.

At this point, they’re only running the Selenium test suite in a scheduled run overnight, with their testers and developers needing to deal with any test failures the next day. Ideally, they want to get to the point where developers could optionally execute either the whole suite or a targeted subset of the Selenium tests on their own development branches whenever they want.

I think it’s unlikely that we’ll get the full Selenium test suite to where it executes fast enough that a developer would be willing to run those tests as part of their normal “check in dance” routine. To thread the needle between giving developers quick feedback from their local builds and the main continuous integration builds on one hand, and the desire to run the Selenium suite much more often for faster feedback on the other, we’re suggesting they split the build activity up using what I’ve frequently seen called the “fast build, slow build” pattern (I couldn’t find anybody to attribute this to as I wrote this tonight, but I can’t take credit for it).

First off, let’s assume your project is following the idea of the “testing pyramid” one way or another such that your automated tests probably fall into one of three broad categories:

  1. Unit tests that don’t touch the database or other external services so they generally run pretty quickly. This would probably include things like business logic rules or validation rules.
  2. Integration tests that test a subset of the system and frequently use databases or other external services. HTTP contract tests are another example.
  3. End to end tests that almost inevitably run slowly compared to other types of tests. Selenium tests are notoriously slow and are the obvious example here.

The general idea is to segment the automated build something like this:

  1. Local developer’s build — You might only choose to compile the code and run the fast unit tests as a check before you try to push commits to a branch on GitHub/BitBucket/Azure DevOps/whatever you happen to be using. If the integration tests in category #2 above are fast enough, you might include them in this step. At times, I’ve divided a local build script into “full” and “fast” modes so I can easily choose how much to run for local commits versus any kind of push (see the sketch after this list). I’m obviously assuming that everybody uses Git by this point, so I apologize if the Git-centric terminology isn’t helpful here.
  2. The CI “fast build” — You’d run a superset of the local developer’s build, but add the integration tests that run reasonably quickly and maybe a small smattering of the end to end tests. This is the “fast build” that gives the developer reasonable assurance that their push built successfully and didn’t break anything.
  3. The CI “slow build” of the rest of the end to end tests. This build would be triggered as a cascading build by the success of the “fast build” on the build server. The “slow build” wouldn’t necessarily be executed for every single push to source control, but there would at least be much more granularity in the tracking from build results to the commits picked up by the “slow build” execution, and the feedback from these tests would be much more timely than running overnight. The segregation into the “fast build / slow build” split lets developers avoid being stuck waiting on long test runs before they can check in or continue working, while still getting a reasonable feedback cycle from those bigger, slower, end to end tests.
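To make the local “fast” versus “full” split from item #1 concrete, here’s a minimal sketch using the Bullseye and SimpleExec libraries that come up later in this post. The target names, solution file, and test project paths are purely illustrative, and the API names reflect the Bullseye version current as I write this:

using static Bullseye.Targets;
using static SimpleExec.Command;

internal class Program
{
    private static void Main(string[] args)
    {
        // "fast" = compile plus the quick unit tests, suitable as a pre-commit check
        Target("fast", () =>
        {
            Run("dotnet", "build MySolution.sln");
            Run("dotnet", "test src/MyApp.UnitTests");
        });

        // "full" = everything in "fast" plus the slower integration tests,
        // worth running before pushing a branch
        Target("full", DependsOn("fast"), () =>
        {
            Run("dotnet", "test src/MyApp.IntegrationTests");
        });

        // "dotnet run -- fast" or "dotnet run -- full" picks the mode
        RunTargetsAndExit(args);
    }
}

The CI “fast build” would then run the same “full” target plus a small slice of the end to end tests, with the cascading “slow build” picking up the rest of the Selenium suite.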


Standing up a local Sql Server development DB w/ Bullseye, Docker, and Roundhouse

EDIT 3/26: I added the code that delegates to the Sql Server CLI tools in Docker

For one of our Calavista engagements we’re working with a client who has a deep technical investment in Sql Server with their database migrations authored in RoundHousE. The existing project automation depended on Sql Express for standing up local development and testing databases, with some manual set up steps in a Wiki page before you could successfully clone and run the application locally.

As we’ve started to introduce some newer technologies to this client’s web development ecosystem, there was an opportunity to improve what my former colleague Chad Myers used to call the “time to login screen” metric — how long does it take a new developer to go from their initial clone of a codebase to being able to run the system locally on their development box? Being somewhat selfish because I prefer to develop on OS X these days, I opted for running the local development database in Docker instead of Sql Express.

Fortunately, you can stand up Sql Server quickly in a Linux container now. Here’s a sample docker-compose.yaml file we’re using:

version: '3'
services:
  sqlserver:
    image: "microsoft/mssql-server-linux:2017-latest"
    container_name: "descriptive_container_name"
    ports:
     - "1433:1433"
    environment:
     - "ACCEPT_EULA=Y"
     - "SA_PASSWORD=P@55w0rd"
     - "MSSQL_PID=Developer"

That’s step 1, but there’s a little bit more we needed to do to stand up a local database (actually two databases):

  1. Provision a new database server
  2. Create two named databases
  3. Run the RoundHousE database migrations to bring the database up to the current version

So now let’s step into the realm of project automation scripting. I unilaterally chose to use Bullseye for build scripting because of the positive experience the Marten team had when we migrated the Marten build from Rake to Bullseye. Since a Bullseye script is just C#, we have this task:

Target("init-db", () =>
{
    // This verifies that the docker instance
    // defined in docker-compose.yaml is up
    // and running
    Run("docker-compose", "up -d");

    // The command above is asynchronous, so wait
    // until Sql Server is responsive
    WaitForSqlServerToBeReady();

    // Create the two databases
    CreateDatabase("Database Name #1");
    CreateDatabase("Database Name #2");

    // Run RoundHousE to apply the latest database migrations
    Run("dotnet", "tool update -g dotnet-roundhouse");
});

To flesh this out a little more, the WaitForSqlServerToBeReady() method just polls until it can successfully open a connection to the new Sql Server container:

        // No points for style!!!
        // SqlConnection comes from System.Data.SqlClient; DockerConnectionString
        // is a constant defined elsewhere in the build script
        private static void WaitForSqlServerToBeReady()
        {
            var attempt = 0;
            while (attempt < 10)
            {
                try
                {
                    using (var conn = new SqlConnection(DockerConnectionString))
                    {
                        conn.Open();
                        Console.WriteLine("Sql Server is up and ready!");
                        return;
                    }
                }
                catch (Exception)
                {
                    // Not ready yet, so pause briefly and try again
                    Thread.Sleep(250);
                    attempt++;
                }
            }

            throw new Exception("Sql Server never became responsive");
        }
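As an aside, the DockerConnectionString constant isn’t shown in the snippet above. Based on the docker-compose.yaml file earlier in this post, it would be something along these lines (targeting the default master database, since the named databases don’t exist yet at this point in the script):

        // Illustrative only -- matches the port and sa password from the
        // docker-compose.yaml file above; adjust to your own compose file
        private const string DockerConnectionString =
            "Server=localhost,1433;Database=master;User Id=sa;Password=P@55w0rd";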

The Sql Server Docker image embeds some of the Sql Server command line tools, so the CreateDatabase() method can create the new named databases just by delegating to the sqlcmd tool within the Docker container like this (the Run() method comes from SimpleExec):

        private static void CreateDatabase(string databaseName)
        {
            try
            {
                // The container name ("SurveySqlServer" here) has to match the
                // container_name in docker-compose.yaml
                Run("docker",
                    $"exec -it SurveySqlServer /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P \"{SqlServerPassword}\" -Q \"CREATE DATABASE {databaseName}\"");
            }
            catch (Exception e)
            {
                // Swallow the failure (e.g. if the database already exists) and just log it
                Console.WriteLine($"Could not create database '{databaseName}': {e.Message}");
            }
        }

It was a lot of Googling for very few lines of code, but once it was done, voilà, you’ve got a completely functional Sql Server database for local development and testing. Even better, it’s super easy to turn the development database on and off when I switch between projects just by stopping and starting the Docker containers.

It’s an OSS Nuget Release Party! (Jasper v1.0, Lamar, Alba, Oakton)

My bandwidth for OSS work has been about zero for the past couple months. But with the COVID-19 measures drastically reducing my commute and the time spent driving kids to school, getting to actually use some of my projects at work, and a threat from an early adopter to give up on Jasper if I didn’t get something out soon, I suddenly went active again and pushed out quite a bit of the backlog: pending items, pull requests, and bug fixes.

My main focus in terms of OSS development for the past 3-4 years has been a big project called “Jasper” that was originally going to be a modernized .Net Core successor to FubuMVC. Just by way of explanation, Jasper, MO is my ancestral hometown and all the other projects I’m mentioning here are named after either other small towns or landmarks around the titular “Jasper.”

Alba

Alba is a library for HTTP contract testing against ASP.Net Core endpoints. It does quite a bit to reduce the repetitive code of using TestServer by itself in tests and makes your tests much more declarative and intention revealing. Alba v3.1.1 was released a couple weeks ago to address a problem exposing the application’s IServiceProvider from the Alba SystemUnderTest. Fortunately for once, I caught this one myself while dogfooding Alba on a project at work.
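To give a flavor of what that looks like in practice, an Alba scenario test reads roughly like this (the Startup class and URL here are placeholders, and I’m writing the method names from memory of the Alba v3 API, so check the Alba documentation for the authoritative version):

using System.Threading.Tasks;
using Alba;
using Xunit;

public class HomeEndpointTests
{
    [Fact]
    public async Task home_page_returns_200()
    {
        // "Startup" is the application's own ASP.Net Core Startup class
        using (var system = SystemUnderTest.ForStartup<Startup>())
        {
            await system.Scenario(_ =>
            {
                _.Get.Url("/");
                _.StatusCodeShouldBeOk();
            });
        }
    }
}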

Alba originated with the code we used to test FubuMVC HTTP behavior back in the day, but was modernized to work with ASP.Net Core instead of OWIN, and later retrofitted to just use TestServer under the covers.

Baseline

Baseline is a grab bag of utility code and extension methods on common .Net types that you can’t believe are missing from the BCL, most of it originally descended from FubuCore.

As an ancillary project, I ripped out the type scanning and assembly finding code from Lamar into a separate BaselineTypeDiscovery Nuget that’s used by most of the other libraries in this post. There was a pretty significant pull request in the latest BaselineTypeDiscovery v1.1.0 release that should improve the application start up time for folks that use Lamar to discover assemblies in their application.

Oakton

Oakton is a command line parsing library for .Net that was originally lifted from FubuCore and is used by Jasper. Oakton v2.0.4 and Oakton.AspNetCore v2.1.3 just upgrade the assembly discovery features to use the newer BaselineTypeDiscovery release above.

Lamar

Lamar is a modern, fast, ASP.Net Core compliant IoC container and the successor to StructureMap. I let a pretty good backlog of issues and pull requests amass, so I took some time yesterday to burn that down and the result is Lamar v4.2. This release upgrades the type scanning, fixes some bugs, and added quite a few fixes to the documentation website.

Jasper

Jasper at this point is a command executor ala MediatR (but much more ambitious) and a lightweight messaging framework — but the external messaging will mature much more in subsequent releases.

This feels remarkably anti-climactic seeing as how it’s been my main focus for years, but I pushed Jasper v1.0 today, specifically for some early adopters. The documentation is updated here. There’s also an up to date repository of samples that should grow. I’ll make a much bigger deal out of Jasper when I make the v1.1 or v2.0 release sometime after the Coronavirus has receded and my bandwidth slash ambition level is higher. For right now I’m just wanting to get some feedback from early users and let them beat it up.

Marten

There’s nothing new to say from me about Marten here except that my focus on Jasper has kept me from contributing too much to Marten. With Jasper v1.0 out, I’ll shortly (and finally) turn my attention to helping with the long planned Marten v4 release. For a little bit of synergy, part of my plans there is to use Jasper for some of the advanced Marten event store functionality we’re planning.