Build Automation on a Database Backed .Net System

Hey, I blog a lot about the OSS tools I work on, so this week I’m going in a different direction and blogging about other OSS tools I use in daily development. In no small part, this blog post is a demonstration to some of my colleagues to get them to weigh in on the approach I took here.

I’ve been dragging my feet for way, way too long at work on what’s going to be our new centralized identity provider service based on IdentityServer 5 from Duende Software. It is the real world, so for the first phase of things, the actual user credentials are stored in an existing Sql Server database, with a roughly database-per-client strategy of multi-tenancy. For this new server, I’m introducing a small lookup database to store the locations of the client-specific databases. So the new server has this constellation of databases:

After some initial spiking, the first serious thing I did was to set up the automated developer build for the codebase. For local development, I need a script that:

  • Sets up multiple Sql Server databases for local development and testing
  • Restores Nuget dependencies
  • Builds the actual C# code (and might later delegate to NPM if there’s any JS/TS code in the user interface)
  • Runs all the tests in the codebase

For very simple projects I’ll just use the dotnet command line to run tests in CI builds or at Git commit time. Likewise, in Node.js projects, npm by itself is frequently good enough. If the C# code were all there was to this Identity Server project, dotnet test would be enough of a build script, but the database requirements are enough to justify a more complex build automation approach.

Build Scripting with Bullseye

Until very recently, I still used the Ruby-based Rake tooling for build scripting, but Ruby as a scripting language has definitely fallen out of favor in .Net circles. After Babu Annamalai introduced Bullseye/SimpleExec into Marten, Bullseye has become my go-to build scripting tool.

At least in my development circles, make-like, task-oriented build automation tools have definitely lost popularity in recent years. But in this identity server project, that’s exactly what I want for build automation. My task-oriented build scripting tool of choice for .Net work is the combination of Bullseye with SimpleExec. Bullseye itself is very easy to use because you’re using C# in a small .Net project. Because it’s just a .Net console application, you also have complete access to Nuget libraries — as we’ll exploit in just a bit.

To get started with Bullseye, I created a folder called build off of the repository root of my identity server codebase, and created a small .Net console application that I also call build. You can see an example of this in the Lamar codebase.

Because we’ll need this in a minute, I’ll also place some wrapper scripts at the root directory of the repository to call the build project, named build.cmd, build.ps1, and build.sh for Windows, PowerShell, and *nix development. The build.cmd file just delegates to the new build project and passes along all the command line arguments like so:

@echo off

dotnet run --project build/build.csproj -c Release -- %*

Back to the new build project, I added Nuget references to Bullseye and SimpleExec. In the Program.Main() function (this could be a little simpler with the new streamlined .Net 6 entry point), I’ll add a couple static namespace declarations:

using static Bullseye.Targets;
using static SimpleExec.Command;

Now we’re ready to write our first couple tasks directly into the Program code file. I still prefer to have separately executable tasks for restoring Nugets, compiling, and running all the tests so you can run partial builds at will. In this case, using some sample code from the Oakton build script:

// Just delegating to the dotnet cli to restore nugets
Target("restore", () =>
{
    Run("dotnet", "restore src/Oakton.sln");
});

// compile the whole solution, but after running
// the restore task
Target("compile", DependsOn("restore"), () =>
{
    Run("dotnet",
        "build src/Oakton.sln --no-restore");
});

Target("test", DependsOn("compile"), () =>
{
    RunTests("Tests");
});

// Little helper function to execute tests by test project name
// still just delegating to the command line
private static void RunTests(string projectName, string directoryName = "src")
{
    Run("dotnet", $"test --no-build {directoryName}/{projectName}/{projectName}.csproj");
}

We’ve got a couple more steps to make this a full build script. We also need this code at the very bottom of our Program.Main() function to actually run tasks:

RunTargetsAndExit(args);
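Putting those fragments together, the skeleton of the whole build/Program.cs file ends up looking roughly like this. This is just a sketch: the solution name MySolution.sln and the test project name are placeholders, not the real project's names.

```csharp
using System;
using static Bullseye.Targets;
using static SimpleExec.Command;

internal static class Program
{
    private static void Main(string[] args)
    {
        // Each Target() call registers a named, individually runnable task
        Target("restore", () => Run("dotnet", "restore src/MySolution.sln"));

        Target("compile", DependsOn("restore"),
            () => Run("dotnet", "build src/MySolution.sln --no-restore"));

        Target("test", DependsOn("compile"), () => RunTests("Tests"));

        // The task executed when you just type "build" with no arguments
        Target("default", DependsOn("test"));

        // This has to be the last line -- it parses the command line
        // arguments and actually executes the requested targets
        RunTargetsAndExit(args);
    }

    private static void RunTests(string projectName, string directoryName = "src")
    {
        Run("dotnet", $"test --no-build {directoryName}/{projectName}/{projectName}.csproj");
    }
}
```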

I typically have an explicit “default” task that gets executed when you just type build / ./build.sh that usually just includes other named tasks. In the case of Oakton, it runs the unit test task plus another task called “commands” that smoke tests several command line calls:

Target("default", DependsOn("test", "commands"));

Usually, I’ll also have a “ci” task intended for continuous integration builds that is a superset of the default build task, with extra integration tests or Nuget publishing (this isn’t as common now that we tend to use separate GitHub Actions for Nuget publishing). In Oakton’s case the “ci” task is exactly the same:

Target("ci", DependsOn("default"));

With all that in place, and working on Windows at the moment, I like to make git commits with the old “check-in dance” like this:

build && git commit -a -m "some commit message"

Less commonly, but still valuable, let’s say that Microsoft has just released a new version of .Net that causes a cascade of Nuget updates and other havoc in your projects. While working through that, I’ll frequently do something like this to work out Nuget resolution issues:

git clean -xfd && build restore

Or once in a while the IDE build error window can be misleading, so I’ll build from the command line with:

build compile

So yeah, most of the build “script” I’m showing here is just delegating to the dotnet CLI, and it’s not very sophisticated. I still like having this file so I can jump between my projects and just type “build” or “build compile” without having to worry about what the solution file name is, or telling dotnet test which projects to run. That said, let’s jump into something quite a bit more complicated.

Adding Sql Server and EF Core into the Build Script

For the sake of testing my new little identity server, I need at least a couple different client databases plus the lookup database. Going back to first principles of Agile Development practices, it should be possible for a brand new developer to do a clean clone of the new identity server codebase and very quickly be running the entire service and all its tests. I’m going to pull that off by adding new tasks to the Bullseye script to set up databases and automate all the testing.

First up, I don’t need very much data for testing, so running Sql Server in Docker is more than good enough, and I’ll add this docker-compose.yml file to my repository:

version: '3'
services:
  sqlserver:
    image: "microsoft/mssql-server-linux"
    ports:
      - "1435:1433"
    environment:
      - "ACCEPT_EULA=Y"
      - "SA_PASSWORD=P@55w0rd"
      - "MSSQL_PID=Developer"

The only thing interesting to note is that I mapped a non-default port number (1435) to this container solely so it can run in parallel with the regular Sql Server installation I have to have for other projects at work. Back to Bullseye, and I’ll add a new task to delegate to docker compose to start up Sql Server:

Target("docker-up", async () =>
{
    await RunAsync("docker", "compose up -d");
});

And trust me on this one: starting the container is asynchronous, and docker compose returns before Sql Server is actually accepting connections, so you actually need to make your build script wait a little bit until the new Sql Server database is accessible before doing anything else. For that purpose, I use this little function:

public static async Task WaitForDatabaseToBeReady()
{
    Console.WriteLine("Waiting for Sql Server to be available...");
    var stopwatch = new Stopwatch();
    stopwatch.Start();
    while (stopwatch.Elapsed.TotalSeconds < 30)
    {
        try
        {
            // ConnectionSource is really just exposing a constant
            // with the known connection string to the Dockerized
            // Sql Server
            await using var conn = new SqlConnection(ConnectionSource.ConnectionString);
            await conn.OpenAsync();

            var cmd = conn.CreateCommand();
            cmd.CommandText = "select 1";
            await cmd.ExecuteReaderAsync();

            Console.WriteLine("Sql Server is up and ready!");
            return;
        }
        catch (Exception)
        {
            await Task.Delay(250);
            Console.WriteLine("Database not ready yet, trying again.");
        }
    }

    // Fail loudly instead of silently returning if the database never came up
    throw new TimeoutException("Sql Server did not become available within 30 seconds");
}
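For reference, the ConnectionSource helper isn’t shown in this post. A minimal sketch of what it might look like, assuming the port mapping and sa password from the docker-compose file above, is:

```csharp
// Hypothetical sketch of the ConnectionSource helper referenced above.
// The port and credentials are assumptions based on the docker-compose
// file shown earlier, not the real project's values.
public static class ConnectionSource
{
    // Connection string for the Dockerized Sql Server mapped to port 1435
    public const string ConnectionString =
        "Server=localhost,1435;User Id=sa;Password=P@55w0rd;TrustServerCertificate=true";

    // Points the same connection at a specific named database
    public static string ConnectionStringForDatabase(string databaseName)
        => $"{ConnectionString};Database={databaseName}";
}
```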

Next, I need some code to create additional databases (I’m sure you can do this somehow in the docker compose file itself, but I didn’t know how at the time and this was easy). I’m going to omit the actual CREATE DATABASE calls, but just know there’s a method with this signature on a static class in my build project called Database:

public static async Task BuildDatabases()

I’m using EF Core for data access in this project, and also using EF Core migrations to do database schema building, so we’ll want the dotnet ef tooling available. I added a task for just that:

Target("install-ef", IgnoreIfFailed(() =>
    Run("dotnet", "tool install --global dotnet-ef")
));
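IgnoreIfFailed isn’t a Bullseye built-in; it’s a little wrapper function that swallows failures, which matters here because dotnet tool install --global errors out when the tool is already installed on the machine. A minimal version looks like this:

```csharp
using System;

internal static class BuildHelpers
{
    // Wraps a task's action so that failures are logged but not rethrown.
    // Handy for "dotnet tool install --global", which fails when the
    // tool is already installed.
    public static Action IgnoreIfFailed(Action action)
    {
        return () =>
        {
            try
            {
                action();
            }
            catch (Exception exception)
            {
                Console.WriteLine(exception.Message);
            }
        };
    }
}
```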

The dotnet ef command line has a less than memorable usage pattern, so I made a little helper function that gets called with different combinations of EF Core context name and database connection string:

public static async Task RunEfUpdate(string contextName, string databaseName)
{
    Console.WriteLine($"Running EF Migration for context {contextName} on database '{databaseName}'");

    // ConnectionSource is a little helper specific to my 
    // identity server project
    var connection = ConnectionSource.ConnectionStringForDatabase(databaseName);
    await Command.RunAsync("dotnet",
        $"ef database update --project src/ProjectName/ProjectName.csproj --context {contextName} --connection \"{connection}\"");
}

For a little more context, I have two separate EF Core DbContext classes (obfuscated from the real code):

  1. LookupDbContext — the “master” registry of client databases by client id
  2. IdentityDbContext — addresses a single client database holding user credentials

And now, after all that work, here’s a Bullseye script that can stand up a new Sql Server database in Docker, build the required databases if necessary, establish baseline data, and run the correct EF Core migrations as needed:

Target("database", DependsOn("docker-up"), async () =>
{
    // "Database" is a static class in my build project where
    // I've dumped database helper code
    await Database.BuildDatabases();

    // RunEfUpdate is delegating to dotnet ef
    await Database.RunEfUpdate("LookupDbContext", "identity");
    
    // Not shown, but fleshing out some static lookup data
    // with straight up SQL calls
    
    // Running migrations on all three test databases for client
    // credential databases
    await Database.RunEfUpdate("IdentityDbContext", "environment1");
    await Database.RunEfUpdate("IdentityDbContext", "environment2");
    await Database.RunEfUpdate("IdentityDbContext", "environment3");
});

Now, the tests for this identity server are almost all going to be integration tests, so I won’t even bother separating out integration tests from unit tests. That being said, our main test library is going to require the Sql Server database built above to be available before the tests are executed, so I’m going to add a dependency to the test task like so:

// The database is required
Target("test", DependsOn("compile", "database"), () =>
{
    RunTests("Test Project Name");
});

Now, when someone does a quick clone of this codebase, assuming they already have the correct version of .Net and Docker Desktop installed, they should be able to just run the build.cmd/ps1/sh script and:

  1. Have all the nuget dependencies restored
  2. Compile the entire solution
  3. Start a new Sql Server instance in Docker with all testing databases built out with the correct database structure and lookup data
  4. Have executed all the automated tests

Bonus Section: Integration with GitHub Actions

I’m a little bit old school with CI. I grew up in the age when you tried to keep your CI setup as crude as possible and mostly just delegated to a build script that did all the actual work. To that end, if I’m using Bullseye as my build scripting tool and GitHub Actions for CI, I delegate to Bullseye like this from the Oakton project:

name: .NET

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
    
env:
  config: Release
  disable_test_parallelization: true
  DOTNET_CLI_TELEMETRY_OPTOUT: 1
  DOTNET_SKIP_FIRST_TIME_EXPERIENCE: 1

jobs:
  build:

    runs-on: ubuntu-latest
    timeout-minutes: 20

    steps:
    - uses: actions/checkout@v2

    - name: Setup .NET 5
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 5.0.x
    - name: Setup .NET 6
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 6.0.x
    - name: Build Script
      run: dotnet run --project build/build.csproj -- ci

The very bottom line of code is the pertinent part that delegates to our Bullseye script and runs the “ci” target that’s my own idiom. Part of the point here is to have the build script steps committed and versioned to source control — which these days is also done with the YAML GitHub action definition files, so that’s not as important as it used to be. What is still important today is that coding in YAML sucks, so I try to keep most of the actual functionality in nice, clean C#.

Bonus: Why didn’t you…????

  • Why didn’t you just use MSBuild? It’s possible to use MSBuild as a task runner, but no thank you. I was absolutely sick to death of coding via XML in NAnt by the time MSBuild was announced, and I’ll admit that I never gave MSBuild the time of day. I’ll pass on more coding in XML.
  • Why didn’t you just use Nuke or Cake? I’ve never used Nuke and can’t speak to it. I’m not a huge Cake fan, and Bullseye’s model is simpler to me.
  • Why didn’t you just use PowerShell? You end up making PowerShell scripts call other scripts, and it clutters up the file system.
