
If you want your OSS project to be successful…

Don’t take any of this too seriously because I wrote this really fast as I was procrastinating instead of working with some ugly legacy code today.

I don’t know why the hell you’d pay attention to me, because I’m only middling successful at OSS in terms of the return on the effort I’ve put in over the years. That being said, I do know quite a bit about what not to do, and that’s valuable.

First off, have the right idea and build something that solves some kind of common problem. It helps tremendously if you’re going into some kind of problem area without a lot of existing solutions. My aging StructureMap project was the very first production ready IoC tool for .Net. I seriously doubt it would have been terribly successful if it hadn’t been for that fact. Likewise, Marten has been successful because the idea of Postgresql as a Document Database just makes sense because of Postgresql’s unusually strong JSON support. I can say with some authority that the project concept was appealing, because a couple of years ago there were 3-4 other nascent OSS projects trying to do exactly what Marten became.

If you’re trying to go build a better mousetrap and walk into a problem domain with existing solutions, you just need to have some kind of compelling reason for folks to switch over to your tool. My example there would be Serilog. NLog and Log4Net are fine for what they are, but Serilog’s structured logging approach is different and provides value beyond what the older logging alternatives do.

Try not to compete against some kind of tool from Microsoft itself. That’s just a losing proposition 95% of the time. And maybe realize upfront that the price of a significant success in .Net OSS means that Microsoft will eventually write their own version of whatever it is you’ve done.

Oh, and don’t do OSS at least in the .Net world if you’re trying to increase your professional visibility. I think it’s the other way around, you increase your visibility through talks, blog posts, and articles first. Then your OSS work will likely be more successful with more community visibility and credibility. Other folks might disagree with that, but that’s how I see it and what I’ve experienced myself both in good and bad ways.

If at all possible, dog-food your OSS project on a real work project. I can’t overstate how valuable that is to see how your tool really functions and to ensure it actually solves problems. The feedback cycle between finding a problem and getting it fixed is substantially faster when you’re dog-fooding your OSS project in a codebase where you have control versus getting GitHub issues from other people in code you’re not privy to. Moreover, it’s incredibly useful to see your colleagues using your OSS tool to find out how other folks reason about your tool’s API, try to apply it, and find out where you have usability issues. Lastly, you’re much more likely to have a good understanding of how to solve a technical problem that you actually have in your own projects.

All that being said about dog-fooding however, I’ve frequently used OSS work to teach myself new techniques, design ideas, and technologies.

Be as openly transparent about the project as you can early on and try to attract other contributors. Don’t go dark on a project or strive for perfection before releasing your project or starting to talk about it in public. I partially blame “going dark” prior to the v1.0 release for FubuMVC being such a colossal OSS failure for me. In 2011 I had a pretty packed room at CodeMash for a half day workshop on the very early FubuMVC, but I quit blogging or talking about it much for a couple years after that. When the FubuMVC team got a chance to present again at CodeMash 2013 for the big v1.0 rollout, there were about a dozen people in the room and I was absolutely crushed.

Or just as valuable, try to get yourself some early adopters to get feedback and more ideas about the direction of the project. Not in terms of downloads and usage per se, but absolutely in terms of the technical achievement and community, Marten is by far and away the most successful OSS project I’ve had anything to do with over the years. A lot of this I would attribute to just having the right concept for a project, but much of it I believe comes from how much effort I put into publicizing Marten and blogging the project’s progress very early on. You know that saying that “with enough eyeballs, all bugs are shallow?” I have no idea if that’s actually true, but I can absolutely state that plenty of early user feedback and involvement has a magical ability to improve the usability of your OSS project.

Contributors come in all different types, and you want as many as you can attract. An eventual core team of contributors and project leaders is invaluable. Pull requests to implement functionality or fix bugs are certainly valuable. Someone who merely watches your project and occasionally points out usability problems or unclear documentation is absolutely helpful. People who take enough time to write useful GitHub issues contribute to projects getting better. Heck, folks who make itty bitty pull requests to improve the wording of the documentation help quite a bit, especially when there’s a bunch of those.

This deserves its own full fledged conversation, but make sure there’s enough README documentation and build automation to get new contributors up and running fast in the codebase. If you’re living in the .Net space, make sure that folks can just open up Visual Studio.Net (or better IDE tools like JetBrains Rider;) and go. If there is any other kind of environment setup, get that scripted out so that a simple “build” or “./build.sh” command of some sort builds out whatever they need to run your code and especially the tests. Docker and docker-compose are turning out to be hugely helpful for this. This might force you to give up your favorite build automation tooling in favor of something mainstream (sorry Rake, I still love you). And don’t be like MS teams used to be and use all kinds of weird Visual Studio.Net project extensions that aren’t on most folks’ machines.
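To make that a little more concrete, here’s a rough sketch of the kind of contributor-facing build script I have in mind, using Bullseye and SimpleExec (which is what I’ve been migrating my own builds to). Every solution and project path below is a made up placeholder, so treat this as an illustration of the shape rather than a copy/paste recipe. A one line build.sh or build.cmd wrapper then just calls dotnet run against this little console project and passes along any arguments:

using static Bullseye.Targets;
using static SimpleExec.Command;

internal class Program
{
    private static void Main(string[] args)
    {
        // stand up any external dependencies (Sql Server, queues, etc.) in Docker
        Target("docker-up", () => Run("docker-compose", "up -d"));

        Target("compile", () => Run("dotnet", "build src/MySolution.sln"));

        // the one command a brand new contributor needs to know about
        Target("test", DependsOn("compile", "docker-up"),
            () => Run("dotnet", "test src/MySolution.sln"));

        Target("default", DependsOn("test"));

        RunTargetsAndExit(args);
    }
}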

Documentation is unfortunately a big deal. It’s not just a matter of having documentation though, it’s also vital to make that documentation easy to edit and keep up to date because your project will change over time. Years ago I was frequently criticized online for StructureMap having essentially no documentation. As part of the huge (at the time) StructureMap 2.5 release I went and wrote up a ton of documentation with code samples in a big static HTML website. The 2.5 release had some annoying fluent interface APIs that nobody liked (including me), so I started introducing some streamlined API usage in 2.6 that still exists in the latest StructureMap (and Lamar) — which was great, except now the documentation was all out of date and people really laid into me online about that for years.

That horrendous experience with the StructureMap documentation led to me now having some personal rules for how I approach the technical documentation on ongoing OSS projects:

  1. Make the project documentation website quick to re-publish because it’s going to change over time. Your best hope of keeping documentation up to date is to make it as painless as possible to update and publish it.
  2. Just give up and author your documentation in markdown one way or another, because most developers at this point understand it.
  3. Embed code samples in the documentation in some sort of way where they can be kept up to date (see the sketch after this list).
  4. As silly as it may sound, use some kind of “Edit me on GitHub” link in each page in your documentation website that lets random people quickly whip up little pull requests to improve the documentation. You have no idea how helpful that’s been to me over the past 3-4 years.
  5. Make it easy to update and preview the documentation website locally. That helps tremendously for other folks making little contributions.
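For the third point, the approach I like (shown here only as a rough sketch, with made up marker names and a hypothetical IWidget registration) is to keep the samples inside real, compiled, and ideally tested code, wrap them in marker comments, and have the documentation build copy the text between the markers into the published pages. If a sample stops compiling, the build tells you long before a reader does:

// SAMPLE: getting-started
// This block lives in a real unit test project, so it has to keep compiling
// as the API changes. The documentation build copies everything between the
// markers into the "getting started" page.
var container = new Container(services =>
{
    services.AddTransient<IWidget, Widget>();
});

var widget = container.GetInstance<IWidget>();
// ENDSAMPLE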

There are other solutions for the living documentation idea, but all of the projects I’m involved with use the Storyteller “stdocs” tooling (which I of course wrote over years of dog-fooding:)) to manage “living documentation” websites.

Try to be responsive to user problems and questions. This is a “do as I say, not as I do” kind of thing because I can frequently be slow to acknowledge GitHub issues or questions at times. Just letting them know that “I saw this” can buy you some good will. Try to be cool in the face of folks being assholes to you online. Remember that interactions and folks generally come off worse online than they would in real life with body language cues, and also remember that you’re generally dealing with people when they’re already frustrated.

Assuming that your project is on GitHub, I highly recommend Gitter as a way to communicate with users. I find the two way communication a lot faster to help unwind user problems compared to working asynchronously in GitHub issues.


Fast Build, Slow Build, and the Testing Pyramid

At Calavista we’ve been helping a couple of our clients use Selenium for automated testing of web applications. For one client we’re slowly introducing a slightly different, but still .Net-focused technical stack that allows for much more effective test automation without having to resort to quite so many Selenium tests. For another client we’re trying to help them optimize the execution time of their large Selenium test suite.

At this point, they’re only running the Selenium test suite in a scheduled run overnight, with their testers and developers needing to deal with any test failures the next day. Ideally, they want to get to the point where developers could optionally execute either the whole suite or a targeted subset of the Selenium tests on their own development branches whenever they want.

I think it’s unlikely that we’ll get the full Selenium test suite to where it executes fast enough that a developer would be willing to run those tests as part of their normal “check in dance” routine. To thread the needle a bit between quick feedback from a developer’s own local builds and the main continuous integration builds on one hand, and the desire to run the Selenium suite much more often for faster feedback on the other, we’re suggesting they split the build activity up with what I’ve frequently seen called the “fast build, slow build” pattern (I couldn’t find anybody to attribute this to tonight as I wrote this, but I can’t take credit for it).

First off, let’s assume your project is following the idea of the “testing pyramid” one way or another such that your automated tests probably fall into one of three broad categories:

  1. Unit tests that don’t touch the database or other external services so they generally run pretty quickly. This would probably include things like business logic rules or validation rules.
  2. Integration tests that test a subset of the system and frequently use databases or other external services. HTTP contract tests are another example.
  3. End to end tests that almost inevitably run slowly compared to other types of tests. Selenium tests are notoriously slow and are the obvious example here.

The general idea is to segment the automated build something like this (there’s a rough sketch of the build targets after the list):

  1. Local developer’s build — You might only choose to compile the code and run fast unit tests as a check before you try to push commits to a branch on GitHub/BitBucket/Azure DevOps/whatever you happen to be using. If the integration tests in item #2 are fast enough, you might include them in this step. At times, I’ve divided a local build script into “full” and “fast” modes so I can easily choose how much to run at one time for local commits versus any kind of push (I’m obviously assuming that everybody uses Git by this point, so I apologize if the Git-centric terminology isn’t helpful here).
  2. The CI “fast build” — You’d run a superset of the local developer’s build, but add the integration tests that run reasonably quickly and maybe a small smattering of the end to end tests. This is the “fast build” that gives the developer reasonable assurance that their push built successfully and didn’t break anything.
  3. The CI “slow build” of the rest of the end to end tests. This build would be triggered as a cascading build by the success of the “fast build” on the build server. The “slow build” wouldn’t necessarily be executed for every single push to source control, but there would at least be much more granularity in the tracking from build results to the commits picked up by the “slow build” execution. The feedback from these tests would also be much more timely than running overnight. The segregation into the “fast build / slow build” split allows developers not to be stuck waiting for long test runs before they can check in or continue working, but still get some reasonable feedback cycle from those bigger, slower, end to end tests.
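If you happen to be scripting the build with something like Bullseye (which is what I’ve been using lately), the split might be sketched as a handful of targets like the ones below. Assume the usual using static Bullseye.Targets and using static SimpleExec.Command imports and a surrounding Main() method; all of the target and path names are placeholders:

Target("compile", () => Run("dotnet", "build MySolution.sln"));

Target("unit-tests", DependsOn("compile"), () => Run("dotnet", "test src/UnitTests"));
Target("integration-tests", DependsOn("compile"), () => Run("dotnet", "test src/IntegrationTests"));
Target("selenium-tests", DependsOn("compile"), () => Run("dotnet", "test src/EndToEndTests"));

// what a developer runs locally and what CI runs on every push
Target("fast", DependsOn("unit-tests", "integration-tests"));

// the cascading CI build that only runs after "fast" succeeds
Target("slow", DependsOn("selenium-tests"));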


Standing up a local Sql Server development DB w/ Bullseye, Docker, and Roundhouse

EDIT 3/26: I added the code that delegates to the Sql Server CLI tools in Docker

For one of our Calavista engagements we’re working with a client who has a deep technical investment in Sql Server with their database migrations authored in RoundHousE. The existing project automation depended on Sql Express for standing up local development and testing databases, with some manual set up steps in a Wiki page before you could successfully clone and run the application locally.

As we’ve started to introduce some newer technologies to this client’s web development ecosystem, there was an opportunity to improve what my former colleague Chad Myers used to call the “time to login screen” metric — how long does it take a new developer from making their initial clone of a codebase to being able to run the system locally on their development box? Being somewhat selfish because I prefer to develop on OS X these days, I opted for running the local development database in Docker instead of Sql Express.

Fortunately, you can quickly stand up Sql Server in a Linux container now. Here’s a sample docker-compose.yaml file we’re using:

version: '3'
services:
  sqlserver:
    image: "microsoft/mssql-server-linux:2017-latest"
    container_name: "descriptive-container-name" # Docker container names can't contain spaces
    ports:
     - "1433:1433"
    environment:
     - "ACCEPT_EULA=Y"
     - "SA_PASSWORD=P@55w0rd"
     - "MSSQL_PID=Developer"

That’s step 1, but there’s a little bit more we needed to do to stand up a local database (actually two databases):

  1. Provision a new database server
  2. Create two named databases
  3. Run the RoundHousE database migrations to bring the database up to the current version

So now let’s step into the realm of project automation scripting. I unilaterally chose to use Bullseye for build scripting because of the positive experience the Marten team had when we migrated the Marten build from Rake to Bullseye. Since a Bullseye script is just C#, we have this task:

Target("init-db", () =>
{
    // Make sure the Docker containers defined
    // in docker-compose.yaml are up and running
    Run("docker-compose", "up -d");

    // The command above is asynchronous, so wait
    // until Sql Server is responsive
    WaitForSqlServerToBeReady();

    // Create the two databases
    CreateDatabase("Database Name #1");
    CreateDatabase("Database Name #2");

    // Make sure the RoundHousE dotnet tool is installed/updated.
    // The actual RoundHousE call that applies the migrations isn't shown in this snippet.
    Run("dotnet", "tool update -g dotnet-roundhouse");
});

To flesh this out a little more, the Sql Server Docker image embeds some of the Sql Server command line tools, so we were able to create the new named databases by delegating to sqlcmd inside the running container. First, the build waits until Sql Server is actually responsive:

        // No points for style!!!
        private static void WaitForSqlServerToBeReady()
        {
            var attempt = 0;
            while (attempt < 10)
                try
                {
                    using (var conn = new SqlConnection(DockerConnectionString))
                    {
                        conn.Open();
                        Console.WriteLine("Sql Server is up and ready!");
                        break;
                    }
                }
                catch (Exception)
                {
                    Thread.Sleep(250);
                    attempt++;
                }
        }

The CreateDatabase() method just delegates to the sqlcmd tool within the Docker container like this (the Run() method comes from SimpleExec):

        private static void CreateDatabase(string databaseName)
        {
            try
            {
                Run("docker",
                    $"exec -it SurveySqlServer /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P \"{SqlServerPassword}\" -Q \"CREATE DATABASE {databaseName}\"");
            }
            catch (Exception e)
            {
                Console.WriteLine($"Could not create database '{databaseName}': {e.Message}");
            }
        }
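For completeness, the snippets above reference a couple of members that aren’t shown. My assumption of what they would look like, simply mirroring the docker-compose.yaml file from earlier (same SA password, same 1433 port mapping), is something like:

        private const string SqlServerPassword = "P@55w0rd";

        // standard System.Data.SqlClient connection string pointed at the Docker-hosted instance
        private static readonly string DockerConnectionString =
            $"Server=localhost,1433;User Id=sa;Password={SqlServerPassword};";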

It was a lot of Googling for very few lines of code, but once it was done, voilà, you’ve got a completely functional Sql Server database for local development and testing. Even better, it’s super easy to turn the development database on and off when I switch between different projects by just stopping and starting the Docker containers.

It’s an OSS Nuget Release Party! (Jasper v1.0, Lamar, Alba, Oakton)

My bandwidth for OSS work has been about zero for the past couple months. With the COVID-19 measures drastically reducing my commute and driving kids to school time, getting to actually use some of my projects at work, and a threat from an early adopter to give up on Jasper if I didn’t get something out soon, I suddenly went active again and got quite a bit of backlogged work, pull requests, and bug fixes out.

My main focus in terms of OSS development for the past 3-4 years has been a big project called “Jasper” that was originally going to be a modernized .Net Core successor to FubuMVC. Just by way of explanation, Jasper, MO is my ancestral hometown and all the other projects I’m mentioning here are named after either other small towns or landmarks around the titular “Jasper.”

Alba

Alba is a library for HTTP contract testing with ASP.Net Core endpoints. It does quite a bit to reduce the repetitive code from using TestServer by itself in tests and makes your tests much more declarative and intention revealing. Alba v3.1.1 was released a couple weeks ago to address a problem exposing the application’s IServiceProvider from the Alba SystemUnderTest. Fortunately for once, I caught this one myself while dogfooding Alba on a project at work.

Alba originated with the code we used to test FubuMVC HTTP behavior back in the day, but was modernized to work with ASP.Net Core instead of OWIN, and later retrofitted to just use TestServer under the covers.

Baseline

Baseline is a grab bag of utility code and extension methods on common .Net types that you can’t believe are missing from the BCL, most of which is originally descended from FubuCore.

As an ancillary project, I ripped out the type scanning and assembly finding code from Lamar into a separate BaselineTypeDiscovery Nuget that’s used by most of the other libraries in this post. There was a pretty significant pull request in the latest BaselineTypeDiscovery v1.1.0 release that should improve the application start up time for folks that use Lamar to discover assemblies in their application.

Oakton

Oakton is a command parsing library for .Net that was originally lifted from FubuCore and is used by Jasper. Oakton v2.0.4 and Oakton.AspNetCore v2.1.3 just upgrade the assembly discovery features to use the newer BaselineTypeDiscovery release above.

Lamar

Lamar is a modern, fast, ASP.Net Core compliant IoC container and the successor to StructureMap. I let a pretty good backlog of issues and pull requests amass, so I took some time yesterday to burn that down and the result is Lamar v4.2. This release upgrades the type scanning, fixes some bugs, and added quite a few fixes to the documentation website.

Jasper

Jasper at this point is a command executor a la MediatR (but much more ambitious) and a lightweight messaging framework — but the external messaging will mature much more in subsequent releases.

This feels remarkably anti-climactic seeing as how it’s been my main focus for years, but I pushed Jasper v1.0 today, specifically for some early adopters. The documentation is updated here. There’s also an up to date repository of samples that should grow. I’ll make a much bigger deal out of Jasper when I make the v1.1 or v2.0 release sometime after the Coronavirus has receded and my bandwidth slash ambition level is higher. For right now I’m just wanting to get some feedback from early users and let them beat it up.

Marten

There’s nothing new to say from me about Marten here except that my focus on Jasper has kept me from contributing too much to Marten. With Jasper v1.0 out, I’ll shortly (and finally) turn my attention to helping with the long planned Marten v4 release. For a little bit of synergy, part of my plans there is to use Jasper for some of the advanced Marten event store functionality we’re planning.


.Net Core Backend + React.js Frontend — Optimizing the development time experience

I’ve seen some chatter on Twitter this week that the enforced home time due to the Coronavirus will lead to much more blogging and OSS activity. Hopefully all the social distance measures work out and we’ll have time and energy to be concerned about relatively unimportant things like software development in the weeks to come. Or maybe this kind of thing is just a nice distraction.

As a follow up to my last post a while back titled Choosing a “Modern” React.js + .Net Core Stack, I want to talk about how our team is trying to create a productive development time experience for working with both the React.js frontend and the .Net Core backend. In this case we’re really building out a replacement for a large feature in an existing ASP.Net MVC application, so even though we’ve opted for a brand new ASP.Net Core backend for the new feature, we’ve got to play nice with the existing application. To that end, our new React.js bundle is going to be hosted in production by the existing MVC5 application in a minimalistic Razor page (really just the inevitable <div id="main" /> tag and a <script /> tag that serves up the Javascript bundle). I should also point out that the React.js bundle is created at build time in the CI/CD pipelines by Parcel.js.

All that being said, the production time architecture simply looks like this, with the production database, the existing MVC5 web application, and our new ASP.Net Core service all being hosted on Azure:

[Diagram: ProductionMode]

So that’s in production, but at development time we need to have a little different configuration. After a couple iterations, our development time model for running the application locally looks like this:

[Diagram: DevelopmentMode]

I feel strongly that most development tasks can and should be done with a test driven development approach (but I’m not demanding any kind of test-first purity as long as the tests are actually written concurrently with new code) — especially on the .Net backend side. Even quite a bit of the React.js components and any other Javascript code can be written inside of unit testing cycles using the Jest watch mode.

However, there’s still a lot of times when you want to run the application to see the user interface itself and work on the interactions between the full stack. To that end we did want a configuration that allowed developers to run everything locally on their own development box.

Since I’m a big believer in “per developer development databases,” I set up a little bit of project automation with Bullseye to spin up Sql Server in a Docker container, provision the application databases with the existing RoundHousE migrations, and add some sample data as necessary. I’ll write a separate blog post someday (maybe) about this because I thought it came together pretty smoothly, but you still had to piece things together across a lot of disjointed blog posts and online documentation.

Running the .Net Core backend locally is easy with ASP.Net Core. Using the Oakton.AspNetCore command line extensions, we can spin up the application at the command line or an IDE run configuration with dotnet run -- -e Development to run the application in “development” mode with configuration pointing at local resources (i.e., appsettings.Development.json).

Running the front end React.js code was a little trickier in our case because of its reliance on so much server side state. We very initially used hard-coded JSON data in the React.js code itself to get started, but we quickly outgrew that. We later kicked the tires on stand in API tools like json-server. In the end, we opted to run the real backing API web service when working with the React.js application to avoid having any incompatibilities between a fake web service and the real thing. I don’t know that I’d opt to make that same decision in every case, but I think it’s going to work out in this one.

For UI development, I’m a huge fan of using React.js’s hot module replacement mode for quick change loop / feedback cycles at development time. Using Parcel.js’s development server, you can serve up your React.js bundle in a minimal HTML page where the React.js components are automatically rebuilt and refreshed on the page while maintaining the existing state of the application whenever the backing code files are changed.

As a hot tip though, if you’re going to try to use the hot replacement mode while also talking to a local ASP.Net Core, disable the HTTPS support in development mode on the .Net side of things. I found that it knocks out the websockets communication that Parcel.js uses to reload the React.js components.
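In practice that just means being careful about when the HTTPS redirection middleware gets applied. A minimal sketch of that idea, assuming a typical ASP.Net Core 3.x Startup.Configure() method (your middleware pipeline will certainly have more in it than this):

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // Leave HTTPS redirection off in Development so the Parcel.js dev server's
    // websocket connection to the backend keeps working
    if (!env.IsDevelopment())
    {
        app.UseHttpsRedirection();
    }

    app.UseRouting();
    app.UseEndpoints(endpoints => endpoints.MapControllers());
}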

Using Dotenv for Javascript Environment Variables

One of the first issues we faced was that the API’s URL location is different locally versus in production mode, and the code that made HTTP calls in the React.js bundle needed to be aware of that. Fortunately, we’re using Parcel.js and it has out of the box support for the dotenv library to support the idea of environment variables within the Javascript bundled code for configuration items like the base url of our backing web service.

The environment variables are defined in text files with the naming convention `.env.[profile]`. For the “development” mode, we added a file parallel to the package.json file called .env.development that consists for now of just this line:

BASE_URL=http://localhost:5000

When you run the Parcel.js development server with the hot module replacement, it uses the “development” profile and the value of “BASE_URL” in the file above will be embedded into the bundled Javascript accessible from the global process.env object.

In the Javascript file in our React.js bundle that makes HTTP calls to the backend service, we lookup the base url for the backend service like this:

var baseUrl = '';

// dotenv places the values from the file above
// into the "process.env" object at build time
if (process.env.BASE_URL){
    baseUrl = process.env.BASE_URL;
}


There’s still some ongoing work around security via JWT tokens, but right now we’re able to run the React.js bundle in the hot replacement mode, but still connect to the locally running ASP.Net Core web service and I’m happy with the development experience so far.

Choosing a “Modern” React.js + .Net Core Stack

This is really just preparation for a meeting tomorrow and I haven’t blogged much lately. Also, I had to write this way, way too late at night and the grammar probably shows that:/ I’m happy to take any kind of feedback, mockery, or questions.

One of our Calavista development teams and I are having a kind of kickoff meeting tomorrow morning to discuss our new technical stack that we plan to use for some big new customer-facing features in a large web application this year. The current application we’re building onto was originally designed about a decade ago and uses a lot of the .Net tools and approaches that I myself would have espoused at that time and even a few OSS things that I helped build — but some of which are clearly aluminum wiring today (RIP K. Scott Allen) or at least have been surpassed by later tools.

First off, we’ve got a couple constraints:

  1. .Net for the backend
  2. Sql Server as the database (but with perfect control on a greenfield project I’d obviously opt for Marten + Postgresql 😉)
  3. We can’t rewrite or update the very large application all at one time because of course we can’t
  4. The “new” stack still has to cooperate with the “old” stack
  5. Azure hosting
  6. Bootstrap styling

And, what we need and/or want in a new technical stack:

  • The ability to craft very good user experiences in the new features
  • Better testability all the way through the technical architecture. We have to depend much more on Selenium testing than I prefer, and I’d like to see much more of a balanced test pyramid / test funnel distribution of tests. At the bottom of this post is a brief explanation of why I’m leery of too much Selenium usage.
  • Good feedback cycles for the developers doing the front end work
  • Fairly mainstream tools as much as possible so our client can more easily deal with the system after we’re gone
  • A way to incorporate the new technology into the existing system with minimal disruption to the current state and without the users being aware of the technical sausage making (other than hopefully some better usability and richer features).

The Proposed “New” Stack

For a variety of reasons, we’re moving to using React.js for the client side backed by an ASP.Net Core “backend for frontend” service that will mostly be zapping JSON back and forth as an API.

On the server side, we’re going with:

  • ASP.Net Core 3.*. Probably with MVC Core for API endpoints, but I’m voting to use it in the lightest possible way. Our client wants to do this anyway, but I’ve been pushing it for awhile because of the easier build automation, testability, faster builds and throughput, and honestly because I want to code on OS X and not have to use my Windows VM;-)
  • I’m probably not willing to use any kind of “Mediator tool” because I think it’s unnecessary complexity, but we might pull in Jasper for any kind of asynchronous messaging or just plain asynchronous work through its local command bus
  • Entity Framework Core. Using Sql Server is a constraint, I’m not particularly a fan of Dapper, I don’t mind EF Core, it’s very commonly used now, and it’s what the development team wants to use. If we can get away with it, I want to use EF Core’s migrations support to create development databases on the fly for rapid prototyping and development throughput (there’s a small sketch of that after this list). If you’re old enough to remember that I was one of the authors of the EF Vote of No Confidence, I’d say that EF Core is a perfectly fine heavy ORM if that’s what you need and EF has left behind all the design decisions from EF v1 that we so strenuously disagreed with way back when.
  • We will also not be using any Azure tool that cannot be executed on a local developer box. So, using Azure Service Bus that you can connect to locally is fine, but weird serverless tools that can only be run on Azure are absolutely off the table as long as I have a say in anything.
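As promised in the EF Core bullet above, here’s a minimal sketch of the “development databases on the fly” idea: apply any pending migrations at startup, but only in the Development environment. ApplicationDbContext is a hypothetical DbContext name, and this assumes the usual Microsoft.Extensions.DependencyInjection and Microsoft.EntityFrameworkCore namespaces:

public static void EnsureDevelopmentDatabase(IHost host)
{
    using var scope = host.Services.CreateScope();
    var env = scope.ServiceProvider.GetRequiredService<IWebHostEnvironment>();

    if (env.IsDevelopment())
    {
        var db = scope.ServiceProvider.GetRequiredService<ApplicationDbContext>();

        // creates the database if it doesn't exist yet, then applies any pending migrations
        db.Database.Migrate();
    }
}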

The one place where we’ll deviate from what I guess is mainstream MVC Core is to absolutely ditch any kind of leftover Ruby on Rails Models/Views/Controllers folder and/or namespace layout. I’m very much in favor of using a “feature folder” organization where closely related classes/services/DTOs for a use case live in the same namespace instead of being scattered all over God’s green earth. Moreover, I’m dead set against any kind of “Onion Architecture” code organization, but that’s a rant for another day.

More interestingly, on the client side we’ve rewritten an existing feature in the application with a new “React Stack” that we plan to start with:

  • React.js vLatest. I’ve been mostly out of React.js for a few years, and I’ve been pretty happy with the improvements since I built Storyteller 3.0 with React v11. I really like React Hooks even though I didn’t really understand them well when they were brewing a couple years ago.
  • Parcel.js. ZOMG, Parcel.js is so much easier to get up and going than Webpack.js was a couple years ago. I’m absolutely sold. I think the hot module replacement ability in React.js is a killer feature and a huge advantage over trying to do complex user interfaces in MVC + Razor because of the vastly better feedback cycle, but it used to be a nightmare to get up and going with Webpack.js (IMO). It basically comes out of the box with Parcel.js.
  • React-Bootstrap. The existing application is based around Bootstrap anyway, and using this library instantly gives us a consistently styled application with the rest of the application. Plus it’s a pretty rich out of the box component library.
  • Redux and React-Redux for state management. I had good results with these tools before, and I don’t see any compelling reason to move to something else or to do without.
  • I don’t think we’ll use TypeScript, but I’m not opposed if the team wants to do that. I don’t see much advantage for React components, but maybe for Redux reducer code.
  • I played some with Redux middleware and I’m at least intrigued by react-api-middleware, but we might just stick with simple axios usage.

More on the testing tools in a later section because that’s a crucial part of all of this.


Integrating the New with the Old

I’m going to stop using the word “microservice” with our client because that apparently conjures up visions of hopeless complexity. Instead, we’re starting a new stack off to the side for new features that may also become a strangler application that eventually takes over the existing functionality.

All the same though, there’s much less literature about microservices in a conglomerate user interface application than there is on backend services. We’re initially going down a path of running our new React.js feature bundles inside the existing application’s Razor layout in an architecture something like this:


[Diagram: ReactInExistingMVC5]

For new features, we’ll keep to the existing navigation structure and application look and feel by just adding new Razor pages that do nothing but embed a small React application like so:

@page
@model Application.Feature1.ViewModel
@{
    ViewBag.Title = "Feature Title";
}

<script src="~/assets/feature_bundle.js"></script>
<link rel="stylesheet" type="text/css" href="~/assets/feature_bundle.css">

<div id="main"></div>

There’s some details to work out around security, API gateways, and the like — but the hope is to have the React.js mini-applications completely communicating with a new ASP.Net Core “BFF” API.

I’m hoping there’s not a lot of overlap in the necessary database data between the old and the new worlds, but I’m prepared to be wrong. Our current thinking is that we’ll *gasp* keep using the old database, but keep a new schema to isolate the new tables. Right now my working theory is that we’ll have a background queue to synchronize any “writes” to the new database schema to the old database if that proves to be necessary.


Testing Approach

Alright, the part of this that I’m most passionate about. I’ve written before about my personal philosophy for automated testing in Jeremy’s Only Law of Testing (Test with the finest grained mechanism that tells you something important) and Succeeding with Automated Integration Tests. I think that both React.js + Redux and ASP.Net Core have good testability stories, especially compared to the MVC5 + Razor stack we’re using today. React.js + Redux because there are great tools for fast running unit tests against the client side code that isn’t really feasible with Razor — and especially now that there are effective ways to test React components without needing to use real browsers with Karma (shudder). ASP.Net Core because you can run your application in process, there’s some decent testing support for HTTP contract testing, and it’s far more modular than ASP.Net MVC pre-Core was.

Looking at another low fidelity view of the proposed architecture just to demonstrate how the new testing pyramid should go:

[Diagram: TestApproachDotNetCore]

We’ll probably use xUnit.Net instead of NUnit, but at the time I drew this diagram I thought we’d have less control over things.

With this stack, our testing pyramid approach would be:

  • Unit tests as appropriate and useful for the .Net code in the backend, with a focus on isolating business logic from infrastructure using techniques described in Jim Shore’s Testing Without Mocks article — which is a fantastic paper about designing for testability in my opinion.
  • Use Alba (wraps TestServer) to execute integrated HTTP tests against the entire MVC Core stack (there’s a small example after this list)
  • We’ll probably use Storyteller for acceptance tests when we hit some very data intensive business logic that I know is coming up this year
  • Possibly use Docker to run little isolated Sql Server databases for testing, something like this approach
  • At least smoke tests against the React.js components with Jest and react-testing-library (I think I came down on preferring its approach to Enzyme). I’m going to prefer some real unit tests on the behavior of the components using react-testing-library.
  • Jest unit tests against the state tracking logic in the Redux store and reducers.
  • Use moxios for testing the interaction with the backend from the JS code?
  • I had some success in the past writing tests that combined the Redux store, the react-redux bindings, and the React components in more coarse grained integration tests that flush out problems that little unit tests can’t
  • And just a modicum of new Selenium tests just to feel safe about the end to end integration between the React client and the ASP.Net Core server
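As an example of the Alba bullet above, an HTTP contract test reads something like the sketch below. This is only a rough illustration; the exact bootstrapping varies a little between Alba versions, the “/api/orders” URL is made up, and it assumes xUnit.Net plus the Alba and System.Threading.Tasks namespaces:

public class GetOrdersEndpointTests
{
    [Fact]
    public async Task can_fetch_the_orders()
    {
        // spins up the real application in memory on top of TestServer
        using var system = SystemUnderTest.ForStartup<Startup>();

        await system.Scenario(_ =>
        {
            _.Get.Url("/api/orders");
            _.StatusCodeShouldBeOk();
        });
    }
}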


Why Not….?

  • Angular.js? Ick, no.
  • Vue.js? I think Vue.js sounds perfectly viable, but the team and I have prior experience with React.js and the existing ecosystem of components matters
  • GraphQL? I don’t see it as applicable to this application
  • Alternative web frameworks in .Net Core? Not unless it’s my own;)
  • Dapper? Meh from me.
  • Blazor? This one is a little more serious conversation because the client asked us about it. My feeling is that it’s not quite ready for prime time, doesn’t have much of an ecosystem yet, nobody is familiar with it yet, and we’d lose that awesome hot module replacement feedback cycle in React.js


Why am I down on Selenium?

I spent all my summers in High School and College working on my Dad’s house framing crew. And as any framer knows, the “Sawzall” is the one tool that will always be able to get you out of a jam when you can’t possibly utilize any other kind of saw — but it’s never your first choice of tool and it generally only came out when something was put together wrongly.

[Image: sawzall]


Selenium is the Sawzall of automated software testing (at least in web applications). It’s almost never the first choice of testing tool, but if you didn’t architect for testability, it’s your last resort because it can always work by driving the entire stack through the browser.

It’s also:

  • Slow to execute
  • Laborious to do well in more complex applications — especially when there’s asynchronous behavior in the user interface
  • Frequently flake-y because of the asynchronous behavior
  • Brittle when the user interface is evolving
  • Hard to debug failures because you’re testing the whole damn system

In summary, Selenium is what you use when nothing else can work but never your first choice of approach.

Lamar 4.1: Multithreading improvements, diagnostics, documentation updates, and some thoughts on troubleshooting

As promised in my previous post My (Big) OSS Plans for 2020, the very first thing out the gate for me this year is a bug fix release for Lamar.

Lamar 4.1 and its related libraries were released on Nuget late last week with a variety of bug fixes, a couple new features, and a documentation refresh at https://jasperfx.github.io/lamar — including some new guidance here and there as a direct reaction to GitHub issues. Continuing my personal theme of OSS interactions being more positive than not over the past couple years, I received a lot of help from Lamar users. This release was largely the result of users submitting pull requests with fixes or failing unit tests that reproduced issues — and I cannot stress enough how helpful those reproduction tests are for an OSS maintainer. Another user took the time to investigate how an error message could be greatly improved. Thank you to all the users who helped on this release with pull requests, suggestions, and feedback on GitHub.

All told, the libraries updated are:

  • Lamar 4.1.0 — Multi-threading issues were finally addressed, fixes for Lamar + ASP.Net Core logging, some finer grained control over type scanning registrations
  • Lamar.Microsoft.DependencyInjection v4.1.0 — Adds support back for IWebHostBuilder for .Net Core 3.0 applications
  • Lamar.Diagnostics v1.1.3 — More on this one below
  • LamarCompiler 2.1.1 — Just updated the Roslyn dependencies. Lamar itself doesn’t use this, so you’re very unlikely to be impacted
  • LamarCodeGeneration v1.4.0 — Some small additions to support a couple Jasper use cases
  • LamarCodeGeneration.Commands v1.0.2 — This isn’t documented yet, and is really just to support some diagnostics and pre-generation of code for Jasper


A Note on Troubleshooting

I have a partially written blog post slash treatise on troubleshooting and debugging. One of the things I try to suggest when troubleshooting a technical issue is to come up with a series of theories about why something isn’t working and figuring out the quickest way to prove or disprove that theory.

In the case of Lamar’s multi-threading issues addressed in this release and a very similar issue fixed previously, the “obvious” theory was that somewhere there was some issue with data structures or locking. Several others and I tried to investigate Lamar’s internals down this path, but we came up empty handed.

The actual root cause turned out to be related to the Expression construction and compilation inside of Lamar that allowed variables to bleed through threads in heavily multi-threaded usage.

So, I still think that my idea of “build a theory about why something is failing, then try to knock it down” is a good approach, but it’s not 100% effective. I’m adding a section to that blog post entitled “don’t get tunnel vision” that talks about fixating on one theory and not considering other explanations;-)

Then again, some things are just hard sometimes.

Lamar.Diagnostics

Back in October I blogged about the new Oakton.AspNetCore package that extends the command line capabilities of standard .Net Core / ASP.Net Core applications with additional diagnostics. As a drop in extension to Oakton.AspNetCore, the new Lamar.Diagnostics package can be installed into a .Net Core application to give you ready access to all of Lamar’s built in diagnostics through the command line.

Your newly available commands from the root of your project with Lamar.Diagnostics are:

  1. dotnet run -- lamar-services — prints out the Container.WhatDoIHave() output to the console or a designated file
  2. dotnet run -- lamar-scanning — prints out the Container.WhatDidIScan() output to the console or a designated file
  3. dotnet run -- lamar-validate — runs the Container.AssertConfigurationIsValid() command

You can also opt into Lamar’s built in environment tests being used through Oakton.AspNetCore’s environment check capability.

In all cases, these commands work by calling IHostBuilder.Build() to build up your application — but they don’t call Start(), so none of your IHostedService objects will run — and then call the underlying Container methods. By doing this, you get ready access to the Lamar diagnostics against your application exactly the way that your application is configured, without having to add any additional code to your system to get at this diagnostic information.
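If you wanted to do the same thing by hand, the gist of it looks roughly like the sketch below. This is just an illustration of the mechanism, not the actual Lamar.Diagnostics source, and it assumes the standard template’s Program.CreateHostBuilder() plus the Lamar namespace for IContainer:

// Build the application exactly the way it's configured, but never Start() it
using var host = Program.CreateHostBuilder(args).Build();

// With UseLamar() in place, the root IServiceProvider is the Lamar container itself
var container = (IContainer)host.Services;

Console.WriteLine(container.WhatDoIHave());
Console.WriteLine(container.WhatDidIScan());
container.AssertConfigurationIsValid();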


My (Big) OSS Plans for 2020

It’s now a yearly thing for me to blog about my aspirations and plans for various OSS projects at the beginning of the year. I was mostly on the nose in 2018, and way, way off in 2019. I’m hoping and planning for a much bigger year in 2020 as I’ve gotten much more enthusiastic and energetic about some ongoing efforts recently.

Most of my time and ambition next year is going toward Jasper, Marten, and Storyteller:

Jasper

Jasper is a toolkit for common messaging scenarios between .Net applications with a robust in process command runner that can be used either with or without the messaging. Jasper wholeheartedly embraces the .Net Core 3.0 ecosystem rather than trying to be its own standalone framework.

Jasper has been gestating for years, I almost gave up on it completely early last year, and purposely set it aside until after .Net Core 3.0 was released. However, I came back to it a few months ago with fresh eyes and full of new ideas from doing a lot of messaging scenario work for Calavista clients. I’m very optimistic right now about Jasper from a purely technical perspective. I’m furiously updating documentation, building sample projects, and dealing with last minute API improvements in an effort to kick out the big v1.0 release sometime in January of 2020.

Marten

Marten is a library that allows .Net developers to treat the outstanding Postgresql database as both a document database and an event store. Marten is mostly supported by a core team, but I’m hoping to be much more involved again this year. The big thing is a proposed v4 release that looks like it’s mostly going to be focused on the event sourcing functionality. There’s an initial GitHub issue for the overall work here, and I want to write a bigger post soon on some ideas about the approach. There’s going to be some new functionality, but the general theme is to make the event sourcing be much more scalable. Marten is a team effort now, and there’s plenty of room for more contributors or collaborators.

For some synergy, I’m also planning on building out some reference applications that use Jasper to integrate Marten with cloud based queueing and load distribution for the asynchronous projection support. I’m excited about this work just to level up on my cloud computing skills.

Storyteller

Storyteller is a tool I’ve worked on and used for what I prefer to call executable specification in .Net (but other folks would call Behavior Driven Development, which I don’t like only because BDD is overloaded). I did quite a bit of preliminary work last year on a proposed Storyteller v6 that would bring it more squarely into the .Net Core world, I wrote a post last year called Spitballing the Future of Storyteller that laid out all my thoughts. I liked where it was heading, but I got distracted by other things.

For more synergy, Storyteller v6 will use Jasper a little bit for its communication between the user interface and the specification process. It also dovetails nicely with my need to update my Javascript UI skillset.


Smaller Projects

Lamar — the modern successor to StructureMap and ASP.Net Core compliant IoC container. I will be getting a medium sized maintenance release out very soon as I’ve let the issue list back up. I’m only focused on dealing with problems and questions as they come in.

EDIT 1/6/2020 –> There’s a Lamar 4.1 release out!

Alba — a library that wraps TestServer to make integration testing against ASP.Net Core HTTP endpoints easier. The big work late last year was making it support ASP.Net Core v3. I don’t have any plans to do much with it this year, but that could change quickly if I get to use it on a client project this year.

Oakton — yet another command line parser. It’s used by Jasper, Storyteller, and the Marten command line package. I feel like it’s basically done and said so last year, but I added some specific features for ASP.Net Core applications and might add more along those lines this year.

StructureMap — Same as last year. I answer questions here and there, but otherwise it’s finished/abandoned

FubuMVC — A fading memory, and I’m pleasantly surprised when someone mentions it to me about once or twice a year


The Very Last ALT.Net Retrospective I’ll Ever Write

I’m on the latest episode of DotNetRocks (episode 1655) mostly talking about the .Net Core ecosystem and catching them up on the latest on Marten. Richard caught me a little off guard when he asked me if I thought after all this time that ALT.Net had been a success considering how Microsoft has embraced Open Source development as a model for themselves and the community as a whole.

If you’re in .Net and wondering what the hell is this “ALT.Net” thing, check out my article in MSDN from 2008 called What is ALT.NET? (that I was pleasantly surprised was still online to be honest). In short, it was a community in .Net of vaguely like-minded folks who were mostly drawn from the .Net blogging world and nascent OSS ecosystem of that time. I still think that the original couple ALT.Net Open Spaces events in Austin and Seattle were the best technical events and most impressive group of folks I’ve ever gotten to be around. As Richard had to bring up on DotNetRocks, the “movement” crystallized in some part because several of us crashed a session at the MVP summit in 2007 where the very earliest version of Entity Framework was being demonstrated and it, um, wasn’t good. At all.

Quoting from David Laribee’s original “ALT.Net Manifesto”:

  1. You’re the type of developer who uses what works while keeping an eye out for a better way.
  2. You reach outside the mainstream to adopt the best of any community: Open Source, Agile, Java, Ruby, etc.
  3. You’re not content with the status quo. Things can always be better expressed, more elegant and simple, more mutable, higher quality, etc.
  4. You know tools are great, but they only take you so far. It’s the principles and knowledge that really matter. The best tools are those that embed the knowledge and encourage the principles 

If you’d asked me personally in 2007-2008 what I thought ALT.Net was about and could maybe achieve it would be something like:

  • Introducing better software practices like Continuous Integration and TDD/BDD pulled from Agile software development. Heck, honestly I’d say that it was getting the .Net world to adopt Agile techniques — and at that time it was going to take some significant changes to the way the .Net mainstream built applications because the Microsoft technology of the day was a very poor fit for Agile development techniques (*cough* WebForms *cough*).
  • Opening the .Net community as a whole from ideas and tooling from outside the strict Microsoft ecosystem. I remember a lot of commentary about the “Microsoft monoculture”
  • And yes, to make .Net be much more OSS friendly.
  • Have a .Net community of practitioners that wasn’t focused solely on Microsoft tools. I remember being very critical of the Microsoft MVP/Regional Director camp at the time, even though I was a C# MVP at that time.

Like I said earlier, as an active participant in ALT.Net it was a tremendous experience, I learned a lot, and met some great folks that I’m friends with to this day. We unfortunately had some toxic personalities involved and the backlash from the mainstream .Net world more or less did ALT.Net in. I’d argue that ALT.Net as a distinct thing and any hopes of being a vibrant, transformative force within .Net was pretty well destroyed when Scott Hanselman did this:

[Image: whysomean]

I’ll have to admit that even after a decade I still have some hard feelings toward many .Net celebrities of that day.

So, after all this time, do I think ALT.Net was successful? 

Eh, maybe, sort of. I think it’s possible that the positive changes would have happened over time anyway. I think it’s much more likely that folks like Hanselman, Phil Haack, Brad Wilson, Glenn Block, Jon Galloway and many others working within Microsoft made a much bigger difference in the end than we ALT.Net rabble rousers did from the outside.

ALT.Net was very successful in terms of bringing some people together who are still among the technical community leaders, so that’s a positive. I still think the .Net community is a little too focused on Microsoft, but I like the technical stuff and guidance coming out of Redmond much, much better now than I did in ’07-’08 and I do think that ALT.Net had some impact on that.

I think you could claim that ALT.Net had some influence in moving things in better directions, but it required a lot more time than we’d anticipated and it maybe inevitably had to come about by Microsoft itself changing. For those of you who don’t know this, Scott Guthrie did the first public demonstration of what became ASP.Net MVC in Austin at the initial ALT.Net Open Spaces event, partially in reaction to how so many of us at that time were very negatively comparing WebForms of the day to what we were seeing in Ruby on Rails.

One of my big bugaboos at that time was how bad of a fit the mainstream .Net tools from Microsoft were for Agile Software Development practices like TDD and CI (you almost had to have Visual Studio.Net installed on your build server back in those days, and the dominant .Net tools of the day were horrible for testability). Looking at ASP.Net Core today, I think that the approaches they use are very much appropriate for modern Agile development practices.

I have mixed things to say about the state of OSS in .Net. Microsoft no longer tries to actively kill off OSS alternatives the way they did in the early days. Nuget obviously helps quite a bit. Microsoft themselves using GitHub and an OSS model for much of their own development helps tremendously. There’s still the issue of Microsoft frequently destroying OSS alternatives with their own tools instead of embracing things from the community. This is worth a later blog post, but I see a lot of things in .Net Core (the DI abstractions that we all hated at first, the IHostBuilder idea, the vastly better configuration model) that I think act as a very good foundation that makes it easier for OSS authors to integrate their tools into .Net Core applications — so that’s definitely good.

And yes, I think that the latest EF Core is fine as a heavy ORM today, after it basically ditched all the silly stuff we complained about in EF v1 back then and adopted a lot of thinking from OSS NHibernate (pssst, EF Core “code first” looks a lot like Fluent NHibernate that came out of the community many years earlier). But at this point I’d much rather use NoSQL tools like Marten, so I’m not sure you’re getting too much from me there;)

The .Net world and Microsoft itself is much more open to ideas that originate in other development communities, and I think that’s a strength of .Net at this point to be honest. Hell, a lot of what I really like about the dotnet cli and .Net Core is very clearly influenced by Node.js and other platforms.

.Net is still too Microsoft-centric in my opinion, but there’s more community now that isn’t just regurgitating MSDN documentation. I still think the MVP program is hurtful over all, but that might just be me. Compared to the world of 2007, now is much better.

Environment Checks and Better Command Line Abilities for your .Net Core Application

Oakton.AspNetCore is a new package built on top of the Oakton 2.0+ command line parser that adds extra functionality to the command line execution of ASP.Net Core and .Net Core 3.0 codebases. At the bottom of this blog post is a small section showing you how to set up Oakton.AspNetCore to run commands in your .Net Core application.

First though, you need to understand that when you use the dotnet run command to build and execute your ASP.Net Core application, you can pass arguments and flags both to dotnet run itself and to your application through the string[] args argument of Program.Main(). These two types of arguments or flags are separated by a double dash, like this example: dotnet run --framework netcoreapp2.0 -- ?. In this case, “–framework netcoreapp2.0” is used by dotnet run itself, and the values to the right of the “–” are passed into your application as the args array.

With that out of the way, let’s see what Oakton.AspNetCore brings to the table.

Extended “Run” Options

In the default ASP.Net Core templates, your application can be started with all its defaults by using dotnet run.  Oakton.AspNetCore retains that usage, but adds some new abilities with its “Run” command. To check the syntax options, type dotnet run -- ? run:

 Usages for 'run' (Runs the configured AspNetCore application)
  run [-c, --check] [-e, --environment <environment>] [-v, --verbose] [-l, --log-level <loglevel>] [--config:<prop> <value>]

  ---------------------------------------------------------------------------------------------------------------------------------------
    Flags
  ---------------------------------------------------------------------------------------------------------------------------------------
                        [-c, --check] -> Run the environment checks before starting the host
    [-e, --environment <environment>] -> Use to override the ASP.Net Environment name
                      [-v, --verbose] -> Write out much more information at startup and enables console logging
         [-l, --log-level <loglevel>] -> Override the log level
            [--config:<prop> <value>] -> Overwrite individual configuration items
  ---------------------------------------------------------------------------------------------------------------------------------------

To run your application under a different hosting environment name value, use a flag like so:

dotnet run -- --environment Testing

or

dotnet run -- -e Testing

To overwrite configuration key/value pairs, you’ve also got this option:

dotnet run -- --config:key1 value1 --config:key2 value2

which will overwrite the configuration keys for “key1” and “key2” to “value1” and “value2” respectively.
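
Assuming those overrides surface through the standard IConfiguration just like values from appsettings.json, a quick sketch of seeing the override in action might look like this (the “key1” key is purely illustrative):

// Somewhere in your Startup class
public void Configure(IApplicationBuilder app, IConfiguration configuration)
{
    // When started with `dotnet run -- --config:key1 value1`, this writes out "value1"
    // regardless of what appsettings.json has for "key1"
    Console.WriteLine($"key1 = {configuration["key1"]}");

    // the rest of your normal middleware configuration...
}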

Lastly, you can have any configured environment checks for your application run immediately before starting the application by using this flag:

dotnet run -- --check

More on this functionality in the next section.


Environment Checks

I’m a huge fan of building environment tests directly into your application. Environment tests allow your application to self-diagnose, upfront, any issues with deployment, configuration, or environmental dependencies that would impact its ability to run.

As a very real world example, let’s say your ASP.Net Core application needs to access another web service that’s managed independently by other teams and maybe, just maybe your testers have occasionally tried to test your application when:

  • Your application configuration has the wrong Url for the other web service
  • The other web service isn’t running at all
  • There’s some kind of authentication issue between your application and the other web service

In the real world project that spawned the example above, we added a formal environment check that would try to touch the health check endpoint of the external web service and throw an exception if we couldn’t connect to the external system. The next step was to execute our application as it was configured and deployed with this environment check as part of our Continuous Deployment pipeline. If the environment check failed, the deployment itself failed and triggered off the normal set of failure alerts letting us know to go fix the environment rather than letting our testers waste time on a bad deployment.

With all that said, let’s look at what Oakton.AspNetCore does here to help you add environment checks. Let’s say your application uses a single Sql Server database, and the connection string should be configured in the “connectionString” key of your application’s configuration. You would probably want an environment check just to verify, at a minimum, that you can successfully connect to your database as it’s configured.

In your ASP.Net Core Startup class, you could add a new service registration for an environment check like this example:

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    // Other registrations we don't care about...
    
    // This extension method is in Oakton.AspNetCore
    services.CheckEnvironment<IConfiguration>("Can connect to the application database", config =>
    {
        var connectionString = config["connectionString"];
        using (var conn = new SqlConnection(connectionString))
        {
            // Just attempt to open the connection. If there's anything
            // wrong here, it's going to throw an exception
            conn.Open();
        }
    });
}
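
To tie this back to the external web service scenario from earlier, a similar check might look something like the following sketch. The “otherServiceUrl” configuration key and the “/health” route are hypothetical stand-ins for whatever your real system uses:

// Also inside ConfigureServices()
services.CheckEnvironment<IConfiguration>("Can reach the external web service", config =>
{
    var baseUrl = config["otherServiceUrl"];
    using (var client = new HttpClient())
    {
        // Any connection failure or non-success status code throws an exception,
        // which is what marks this environment check as failed
        var response = client.GetAsync($"{baseUrl}/health").GetAwaiter().GetResult();
        response.EnsureSuccessStatusCode();
    }
});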

Now, during deployments or even just pulling down the code to run locally, we can run the environment checks on our application like so:

dotnet run -- check-env

Which in the case of our application above, blows up with output like this because I didn’t add configuration for the database in the first place:

Running Environment Checks
   1.) Failed: Can connect to the application database
System.InvalidOperationException: The ConnectionString property has not been initialized.
   at System.Data.SqlClient.SqlConnection.PermissionDemand()
   at System.Data.SqlClient.SqlConnectionFactory.PermissionDemand(DbConnection outerConnection)
   at System.Data.ProviderBase.DbConnectionInternal.TryOpenConnectionInternal(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
   at System.Data.ProviderBase.DbConnectionClosed.TryOpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
   at System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry)
   at System.Data.SqlClient.SqlConnection.Open()
   at MvcApp.Startup.<>c.<ConfigureServices>b__4_0(IConfiguration config) in /Users/jeremydmiller/code/oakton/src/MvcApp/Startup.cs:line 41
   at Oakton.AspNetCore.Environment.EnvironmentCheckExtensions.<>c__DisplayClass2_0`1.<CheckEnvironment>b__0(IServiceProvider s, CancellationToken c) in /Users/jeremydmiller/code/oakton/src/Oakton.AspNetCore/Environment/EnvironmentCheckExtensions.cs:line 53
   at Oakton.AspNetCore.Environment.LambdaCheck.Assert(IServiceProvider services, CancellationToken cancellation) in /Users/jeremydmiller/code/oakton/src/Oakton.AspNetCore/Environment/LambdaCheck.cs:line 19
   at Oakton.AspNetCore.Environment.EnvironmentChecker.ExecuteAllEnvironmentChecks(IServiceProvider services, CancellationToken token) in /Users/jeremydmiller/code/oakton/src/Oakton.AspNetCore/Environment/EnvironmentChecker.cs:line 31

If you ran this command during continuous deployment scripts, the command should cause your build to fail when it detects environment problems.

In some of Calavista’s current projects, we’ve been adding environment tests to our applications for items like these (there’s a sketch of one such check right after this list):

  • Can our application read certain configured directories?
  • Can our application as it’s configured connect to databases?
  • Can our application reach other web services?
  • Are required configuration items specified? That’s been an issue as we’ve had to build out Continuous Deployment pipelines to many, many different server environments.
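
As an example of the first item, a directory check written with the same CheckEnvironment mechanism might be as simple as the sketch below (the “uploadPath” key is just a hypothetical configuration item):

// Inside ConfigureServices(), using the same extension method shown earlier
services.CheckEnvironment<IConfiguration>("Can read the configured upload directory", config =>
{
    var path = config["uploadPath"];
    if (!Directory.Exists(path))
    {
        throw new DirectoryNotFoundException($"The configured directory '{path}' does not exist");
    }

    // Force an actual read to prove the application has permissions on the directory
    Directory.GetFiles(path);
});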

I don’t see the idea of “Environment Tests” mentioned very often, and it might have other names I’m not aware of. I learned about the idea back in the Extreme Programming days from a blog post from Nat Pryce that I can’t find any longer, but there’s this paper from those days too.


Add Other Commands

I’ve frequently worked in projects where we’ve built parallel console applications that reproduce a lot of the same IoC and configuration setup to perform administrative tasks or add other diagnostics. It could be things like adding users, rebuilding an event store projection, executing database migrations, or loading some kind of data into the application’s database. What if, instead, you could just add these directly to your .Net Core application as additional dotnet run -- [command] options? Fortunately, Oakton.AspNetCore lets you do exactly that, and even allows you to package up reusable commands in other assemblies that could be distributed via Nuget.

If you use Lamar as your IoC container in an ASP.Net Core application (or a .Net Core 3.0 console app using the new unified HostBuilder), we now have an add-on Nuget package called Lamar.Diagnostics that adds new Oakton commands to your application, giving you access to Lamar’s diagnostic tools from the command line. As an example, this library adds a command to write out the “WhatDoIHave()” report for the underlying Lamar IoC container of your application to the command line or a file like this:

dotnet run -- lamar-services

Now, using the command above as an example, to build or add your own commands start by decorating the assembly containing the command classes with this attribute:

[assembly:OaktonCommandAssembly]

Having this attribute tells Oakton.AspNetCore to search that assembly for additional Oakton commands. There is no other setup necessary.

If your command needs to use the application’s services or configuration, have the Oakton input type inherit from the NetCoreInput type in Oakton.AspNetCore like so:

public class LamarServicesInput : NetCoreInput
{
    // Lots of other flags
}

Next, the new command for “lamar-services” is just this:

[Description("List all the registered Lamar services", Name = "lamar-services")]
public class LamarServicesCommand : OaktonCommand<LamarServicesInput>
{
    public override bool Execute(LamarServicesInput input)
    {
        // BuildHost() will return an IHost for your application
        // if you're using .Net Core 3.0, or IWebHost for
        // ASP.Net Core 2.*
        using (var host = input.BuildHost())
        {
            // The actual execution using host.Services
            // to get at the underlying Lamar Container
        }

        return true;
    }
}

Getting Started

In both cases I’m assuming that you’ve bootstrapped your application with one of the standard project templates like dotnet new webapi or dotnet new mvc. First, add a reference to the Oakton.AspNetCore Nuget. Next, break into the Program.Main() entry point method in your project and modify it like the following samples.

If you’re absolutely cutting edge and using ASP.Net Core 3.0:

public class Program
{
    public static Task<int> Main(string[] args)
    {
        return CreateHostBuilder(args)
            
            // This extension method replaces the calls to
            // IWebHost.Build() and Start()
            .RunOaktonCommands(args);
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(x => x.UseStartup<Startup>());
    
}

For what I would guess is most folks, here’s the ASP.Net Core 2.* setup (and this works for ASP.Net Core 3.0 as well):

public class Program
{
    public static Task<int> Main(string[] args)
    {
        return CreateWebHostBuilder(args)
            
            // This extension method replaces the calls to
            // IWebHost.Build() and Start()
            .RunOaktonCommands(args);
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>();
    
}

The two changes from the template defaults are to:

  1. Change the return value to Task<int>
  2. Replace the calls to Build() and Run() with the RunOaktonCommands(args) extension method that hangs off IWebHostBuilder, or the new unified IHostBuilder if you’re targeting netcoreapp3.0.

And that’s it, you’re off to the races.