
The Very Last ALT.Net Retrospective I’ll Ever Write

I’m on the latest episode of DotNetRocks (episode 1655) mostly talking about the .Net Core ecosystem and catching them up on the latest on Marten. Richard caught me a little off guard when he asked me if I thought after all this time that ALT.Net had been a success considering how Microsoft has embraced Open Source development as a model for themselves and the community as a whole.

If you’re in .Net and wondering what the hell this “ALT.Net” thing is, check out my article in MSDN from 2008 called What is ALT.NET? (which, to be honest, I was pleasantly surprised to find still online). In short, it was a community of vaguely like-minded .Net folks who were mostly drawn from the .Net blogging world and the nascent OSS ecosystem of that time. I still think that the original couple of ALT.Net Open Spaces events in Austin and Seattle were the best technical events, and the most impressive group of folks, I’ve ever gotten to be around. As Richard had to bring up on DotNetRocks, the “movement” crystallized in some part because several of us crashed a session at the MVP summit in 2007 where the very earliest version of Entity Framework was being demonstrated and it, um, wasn’t good. At all.

Quoting from David Laribee’s original “ALT.Net Manifesto”:

  1. You’re the type of developer who uses what works while keeping an eye out for a better way.
  2. You reach outside the mainstream to adopt the best of any community: Open Source, Agile, Java, Ruby, etc.
  3. You’re not content with the status quo. Things can always be better expressed, more elegant and simple, more mutable, higher quality, etc.
  4. You know tools are great, but they only take you so far. It’s the principles and knowledge that really matter. The best tools are those that embed the knowledge and encourage the principles.

If you’d asked me personally in 2007-2008 what I thought ALT.Net was about and what it could maybe achieve, it would have been something like:

  • Introducing better software practices like Continuous Integration and TDD/BDD pulled from Agile software development. Heck, honestly I’d say that it was getting the .Net world to adopt Agile techniques — and at that time it was going to take some significant changes to the way the .Net mainstream built applications because the Microsoft technology of the day was a very poor fit for Agile development techniques (*cough* WebForms *cough*).
  • Opening the .Net community as a whole to ideas and tooling from outside the strict Microsoft ecosystem. I remember a lot of commentary about the “Microsoft monoculture.”
  • And yes, to make .Net be much more OSS friendly.
  • Having a .Net community of practitioners that wasn’t focused solely on Microsoft tools. I remember being very critical of the Microsoft MVP/Regional Director camp back then, even though I was a C# MVP myself at the time.

Like I said earlier, being an active participant in ALT.Net was a tremendous experience; I learned a lot and met some great folks that I’m friends with to this day. We unfortunately had some toxic personalities involved, and the backlash from the mainstream .Net world more or less did ALT.Net in. I’d argue that ALT.Net as a distinct thing, and any hope of being a vibrant, transformative force within .Net, was pretty well destroyed when Scott Hanselman did this:

[Image: “whysomean”]

I’ll have to admit that even after a decade I still have some hard feelings toward many .Net celebrities of that day.

So, after all this time, do I think ALT.Net was successful? 

Eh, maybe, sort of. I think it’s possible that the positive changes would have happened over time anyway. I think it’s much more likely that folks like Hanselman, Phil Haack, Brad Wilson, Glenn Block, Jon Galloway and many others working within Microsoft made a much bigger difference in the end than we ALT.Net rabble rousers did from the outside.

ALT.Net was very successful in terms of bringing some people together who are still among the technical community leaders, so that’s a positive. I still think the .Net community is a little too focused on Microsoft, but I like the technical stuff and guidance coming out of Redmond much, much better now than I did in ’07-’08 and I do think that ALT.Net had some impact on that.

I think you could claim that ALT.Net had some influence in moving things in better directions, but it required a lot more time than we’d anticipated, and it perhaps inevitably had to come about through Microsoft itself changing. For those of you who don’t know this, Scott Guthrie did the first public demonstration of what became ASP.Net MVC in Austin at the initial ALT.Net Open Spaces event, partially in reaction to how many of us at that time were very negatively comparing the WebForms of the day to what we were seeing in Ruby on Rails.

One of my big bugaboos at that time was how bad a fit the mainstream .Net tools from Microsoft were for Agile software development practices like TDD and CI (you almost had to have Visual Studio.Net installed on your build server back in those days, and the dominant .Net tools of the day were horrible for testability). Looking at ASP.Net Core today, I think that the approaches it uses are very much appropriate for modern Agile development practices.

I have mixed things to say about the state of OSS in .Net. Microsoft no longer tries to actively kill off OSS alternatives the way they did in the early days. Nuget obviously helps quite a bit. Microsoft themselves using GitHub and an OSS model for much of their own development helps tremendously. There’s still the issue of Microsoft frequently destroying OSS alternatives with their own tools instead of embracing things from the community. This is worth a later blog post, but I see a lot of things in .Net Core (the DI abstractions that we all hated at first, the IHostBuilder idea, the vastly better configuration model) that act as a very good foundation and make it easier for OSS authors to integrate their tools into .Net Core applications — so that’s definitely good.
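To illustrate what I mean about that foundation, here’s a minimal sketch of how an OSS library can plug into any .Net Core application through the shared DI abstractions (the “MyTool” names here are hypothetical):

using Microsoft.Extensions.DependencyInjection;

// Hypothetical OSS tool types, purely for illustration
public interface IMyTool { }
public class MyTool : IMyTool { }

public static class MyToolServiceCollectionExtensions
{
    // Because every .Net Core application bootstraps through the same
    // IServiceCollection abstraction, an OSS author only needs a single
    // extension method like this to integrate with any application
    public static IServiceCollection AddMyTool(this IServiceCollection services)
    {
        services.AddSingleton<IMyTool, MyTool>();
        return services;
    }
}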

And yes, I think that the latest EF Core is fine as a heavy ORM today, after it basically ditched all the silly stuff we complained about in EF v1 back then and adopted a lot of thinking from OSS NHibernate (pssst, EF Core “code first” looks a lot like Fluent NHibernate that came out of the community many years earlier). But at this point I’d much rather use NoSQL tools like Marten, so I’m not sure you’re getting too much from me there;)

The .Net world and Microsoft itself is much more open to ideas that originate in other development communities, and I think that’s a strength of .Net at this point to be honest. Hell, a lot of what I really like about the dotnet cli and .Net Core is very clearly influenced by Node.js and other platforms.

.Net is still too Microsoft-centric in my opinion, but there’s more community now that isn’t just regurgitating MSDN documentation. I still think the MVP program is hurtful overall, but that might just be me. Compared to the world of 2007, now is much better.

Environment Checks and Better Command Line Abilities for your .Net Core Application

Oakton.AspNetCore is a new package built on top of the Oakton 2.0+ command line parser that adds extra functionality to the command line execution of ASP.Net Core and .Net Core 3.0 codebases. At the bottom of this blog post is a small section showing you how to set up Oakton.AspNetCore to run commands in your .Net Core application.

First though, you need to understand that when you use the dotnet run command to build and execute your ASP.Net Core application, you can pass arguments and flags both to dotnet run itself and to your application through the string[] args argument of Program.Main(). These two types of arguments or flags are separated by a double dash, like this example: dotnet run --framework netcoreapp2.0 -- ?. In this case, “--framework netcoreapp2.0” is used by dotnet run itself, and the values to the right of the “--” are passed into your application as the args array.

With that out of the way, let’s see what Oakton.AspNetCore brings to the table.

Extended “Run” Options

In the default ASP.Net Core templates, your application can be started with all its defaults by using dotnet run.  Oakton.AspNetCore retains that usage, but adds some new abilities with its “Run” command. To check the syntax options, type dotnet run -- ? run:

 Usages for 'run' (Runs the configured AspNetCore application)
  run [-c, --check] [-e, --environment <environment>] [-v, --verbose] [-l, --log-level <loglevel>] [--config:<prop> <value>]

  ---------------------------------------------------------------------------------------------------------------------------------------
    Flags
  ---------------------------------------------------------------------------------------------------------------------------------------
                        [-c, --check] -> Run the environment checks before starting the host
    [-e, --environment <environment>] -> Use to override the ASP.Net Environment name
                      [-v, --verbose] -> Write out much more information at startup and enables console logging
         [-l, --log-level <loglevel>] -> Override the log level
            [--config:<prop> <value>] -> Overwrite individual configuration items
  ---------------------------------------------------------------------------------------------------------------------------------------

To run your application under a different hosting environment name value, use a flag like so:

dotnet run -- --environment Testing

or

dotnet run -- -e Testing

To overwrite configuration key/value pairs, you’ve also got this option:

dotnet run -- --config:key1 value1 --config:key2 value2

which will overwrite the configuration keys for “key1” and “key2” to “value1” and “value2” respectively.

Lastly, you can run any configured environment checks for your application immediately before starting it by using this flag:

dotnet run -- --check

More on this functionality in the next section.

 

Environment Checks

I’m a huge fan of building environment tests directly into your application. Environment tests allow your application to self-diagnose, up front, any issues with deployment, configuration, or environmental dependencies that would impact its ability to run.

As a very real world example, let’s say your ASP.Net Core application needs to access another web service that’s managed independently by other teams, and maybe, just maybe, your testers have occasionally tried to test your application when:

  • Your application configuration has the wrong Url for the other web service
  • The other web service isn’t running at all
  • There’s some kind of authentication issue between your application and the other web service

In the real world project that spawned the example above, we added a formal environment check that would try to touch the health check endpoint of the external web service and throw an exception if we couldn’t connect to the external system. The next step was to execute our application as it was configured and deployed with this environment check as part of our Continuous Deployment pipeline. If the environment check failed, the deployment itself failed and triggered off the normal set of failure alerts letting us know to go fix the environment rather than letting our testers waste time on a bad deployment.

With all that said, let’s look at what Oakton.AspNetCore does here to help you add environment checks. Let’s say your application uses a single Sql Server database, and the connection string should be configured in the “connectionString” key of your application’s configuration. You would probably want an environment check just to verify, at a minimum, that you can successfully connect to your database as it’s configured.

In your ASP.Net Core Startup class, you could add a new service registration for an environment check like this example:

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    // Other registrations we don't care about...
    
    // This extension method is in Oakton.AspNetCore
    services.CheckEnvironment<IConfiguration>("Can connect to the application database", config =>
    {
        var connectionString = config["connectionString"];
        using (var conn = new SqlConnection(connectionString))
        {
            // Just attempt to open the connection. If there's anything
            // wrong here, it's going to throw an exception
            conn.Open();
        }
    });
}
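Going back to the external web service scenario from earlier, a similar check inside the same ConfigureServices() method might look like this rough sketch (the “otherServiceUrl” configuration key and “/health” endpoint path are hypothetical stand-ins for your own settings):

// A sketch of the external web service check described earlier
services.CheckEnvironment<IConfiguration>("Can reach the external web service", config =>
{
    // "otherServiceUrl" is a hypothetical configuration key
    using (var client = new HttpClient { BaseAddress = new Uri(config["otherServiceUrl"]) })
    {
        // GetAsync() throws on network failures, and EnsureSuccessStatusCode()
        // turns any non-2xx response into an exception that fails the check
        client.GetAsync("/health").GetAwaiter().GetResult()
            .EnsureSuccessStatusCode();
    }
});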

Now, during deployments or even just pulling down the code to run locally, we can run the environment checks on our application like so:

dotnet run -- check-env

In the case of our application above, that blows up with output like this because I didn’t add any configuration for the database in the first place:

Running Environment Checks
   1.) Failed: Can connect to the application database
System.InvalidOperationException: The ConnectionString property has not been initialized.
   at System.Data.SqlClient.SqlConnection.PermissionDemand()
   at System.Data.SqlClient.SqlConnectionFactory.PermissionDemand(DbConnection outerConnection)
   at System.Data.ProviderBase.DbConnectionInternal.TryOpenConnectionInternal(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
   at System.Data.ProviderBase.DbConnectionClosed.TryOpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
   at System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry)
   at System.Data.SqlClient.SqlConnection.Open()
   at MvcApp.Startup.<>c.<ConfigureServices>b__4_0(IConfiguration config) in /Users/jeremydmiller/code/oakton/src/MvcApp/Startup.cs:line 41
   at Oakton.AspNetCore.Environment.EnvironmentCheckExtensions.<>c__DisplayClass2_0`1.<CheckEnvironment>b__0(IServiceProvider s, CancellationToken c) in /Users/jeremydmiller/code/oakton/src/Oakton.AspNetCore/Environment/EnvironmentCheckExtensions.cs:line 53
   at Oakton.AspNetCore.Environment.LambdaCheck.Assert(IServiceProvider services, CancellationToken cancellation) in /Users/jeremydmiller/code/oakton/src/Oakton.AspNetCore/Environment/LambdaCheck.cs:line 19
   at Oakton.AspNetCore.Environment.EnvironmentChecker.ExecuteAllEnvironmentChecks(IServiceProvider services, CancellationToken token) in /Users/jeremydmiller/code/oakton/src/Oakton.AspNetCore/Environment/EnvironmentChecker.cs:line 31

If you run this command in your continuous deployment scripts, it should cause your build to fail when it detects environment problems.

In some of Calavista’s current projects, we’ve been adding environment tests to our applications for items like:

  • Can our application read certain configured directories?
  • Can our application as it’s configured connect to databases?
  • Can our application reach other web services?
  • Are required configuration items specified? That’s been an issue as we’ve had to build out Continuous Deployment pipelines to many, many different server environments. (A sketch of a couple of these checks is shown below.)
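Here’s a minimal sketch of what a couple of those checks might look like, again inside ConfigureServices() (the “uploadPath” and “apiKey” configuration keys are hypothetical):

// Hypothetical examples of the checks listed above, again using
// the CheckEnvironment() extension method from Oakton.AspNetCore
services.CheckEnvironment<IConfiguration>("Can read the configured upload directory", config =>
{
    // Directory.GetFiles() throws if the directory is missing or unreadable
    Directory.GetFiles(config["uploadPath"]);
});

services.CheckEnvironment<IConfiguration>("Required configuration items are specified", config =>
{
    if (string.IsNullOrEmpty(config["apiKey"]))
    {
        throw new Exception("Required configuration item 'apiKey' is missing");
    }
});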

I don’t see the idea of “Environment Tests” mentioned very often, and it might have other names I’m not aware of. I learned about the idea back in the Extreme Programming days from a blog post from Nat Pryce that I can’t find any longer, but there’s this paper from those days too.

 

Add Other Commands

I’ve frequently worked in projects where we’ve built parallel console applications that reproduce a lot of the same IoC and configuration setup to perform administrative tasks or add other diagnostics. It could be things like adding users, rebuilding an event store projection, executing database migrations, or loading some kind of data into the application’s database. What if, instead, you could just add these directly to your .Net Core application as additional dotnet run -- [command] options? Fortunately, Oakton.AspNetCore lets you do exactly that, and even allows you to package up reusable commands in other assemblies that could be distributed by Nuget.

If you use Lamar as your IoC container in an ASP.Net Core application (or a .Net Core 3.0 console app using the new unified HostBuilder), we now have an add-on Nuget called Lamar.Diagnostics that will add new Oakton commands to your application, giving you access to Lamar’s diagnostic tools from the command line. As an example, this library adds a command to write out the “WhatDoIHave()” report for the underlying Lamar IoC container of your application to the command line or a file, like this:

dotnet run -- lamar-services

Now, using the command above as an example, to build or add your own commands, start by decorating the assembly containing the command classes with this attribute:

[assembly:OaktonCommandAssembly]

Having this attribute tells Oakton.AspNetCore to search that assembly for additional Oakton commands. There is no other setup necessary.

If your command needs to use the application’s services or configuration, have the Oakton input type inherit from the NetCoreInput type in Oakton.AspNetCore like so:

public class LamarServicesInput : NetCoreInput
{
    // Lots of other flags
}

Next, the new command for “lamar-services” is just this:

[Description("List all the registered Lamar services", Name = "lamar-services")]
public class LamarServicesCommand : OaktonCommand<LamarServicesInput>
{
    public override bool Execute(LamarServicesInput input)
    {
        // BuildHost() will return an IHost for your application
        // if you're using .Net Core 3.0, or IWebHost for
        // ASP.Net Core 2.*
        using (var host = input.BuildHost())
        {
            // The actual execution using host.Services
            // to get at the underlying Lamar Container
        }

        return true;
    }
}

Getting Started

In both cases I’m assuming that you’ve bootstrapped your application with one of the standard project templates like dotnet new webapi or dotnet new mvc. First, add a reference to the Oakton.AspNetCore Nuget. Next, break into the Program.Main() entry point method in your project and modify it like the following samples.

If you’re absolutely cutting edge and using ASP.Net Core 3.0:

public class Program
{
    public static Task<int> Main(string[] args)
    {
        return CreateHostBuilder(args)
            
            // This extension method replaces the calls to
            // IWebHost.Build() and Start()
            .RunOaktonCommands(args);
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(x => x.UseStartup<Startup>());
    
}

For what I would guess is most folks, here’s the ASP.Net Core 2.* setup (and this works for ASP.Net Core 3.0 as well):

public class Program
{
    public static Task<int> Main(string[] args)
    {
        return CreateWebHostBuilder(args)
            
            // This extension method replaces the calls to
            // IWebHost.Build() and Start()
            .RunOaktonCommands(args);
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>();
    
}

The two changes from the template defaults are to:

  1. Change the return value to Task<int>
  2. Replace the calls to IWebHost.Build() and Start() with the RunOaktonCommands(args) extension method, which hangs off IWebHostBuilder, or off the new unified IHostBuilder if you’re targeting netcoreapp3.0.

And that’s it, you’re off to the races.

 

Alba 3.1 supercharges your ASP.Net Core HTTP Contract Testing

I was just able to push a Nuget for Alba 3.1 that adds support for ASP.Net Core 3.0 and updated the documentation website to reflect the mild additions. Big thanks are in order to Lauri Kotilainen and Jonathan Mezach for making this release happen.

If you’re not familiar with Alba, in its current incarnation it’s a wrapper around the ASP.Net Core TestServer that adds a whole lot of useful helpers to make testing ASP.Net Core HTTP services much more declarative and easier than it is with TestServer alone.
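If you haven’t seen Alba before, a specification looks something like this minimal sketch (assuming a hypothetical GET endpoint at “/api/values” in your application’s Startup):

// A minimal Alba specification against a hypothetical "/api/values" endpoint
using (var system = SystemUnderTest.ForStartup<Startup>())
{
    await system.Scenario(_ =>
    {
        // Exercise the application in memory with an HTTP GET
        _.Get.Url("/api/values");

        // Declarative assertions against the HTTP response
        _.StatusCodeShouldBeOk();
        _.ContentTypeShouldBe("application/json");
    });
}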

Alba is descended from some built-in support for HTTP contract testing in FubuMVC that I salvaged and adapted for usage with ASP.Net Core a few years back. Alba has used TestServer, rather than its own homegrown HTTP runner, since the big 3.0 release this January.

Lamar and Oakton join the .Net Core 3.0 Party

Like many other .Net OSS authors, I’ve been putting in some time this week making sure that various .Net tools support the brand spanking new .Net Core and ASP.Net Core 3.0 bits that were just released. First up are Oakton and Lamar, with the rest of the SW Missouri projects to follow soon.

Oakton

Oakton is yet another command line parser tool for .Net. The main Oakton 2.0.1 library adds support for the netstandard2.1 target, but does not change in any other way. The Oakton.AspNetCore library got a full 2.0.0 release. If you’re using ASP.Net Core v2.*, your usage is unchanged. However, if you are targeting netcoreapp3.0, the extension methods now depend on the newly unified IHostBuilder rather than the older IWebHostBuilder, and the bootstrapping looks like this now:

public class Program
{
    public static Task<int> Main(string[] args)
    {
        return CreateHostBuilder(args)
            
            // This extension method replaces the calls to
            // IWebHost.Build() and Start()
            .RunOaktonCommands(args);
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(x => x.UseStartup<Startup>());
    
}

 

Oakton.AspNetCore provides an improved and extensible command line experience for ASP.Net Core. I’ve been meaning to write a bigger blog post about it, but that’s gonna wait for another day.

Lamar

The 3.0 support in Lamar was unpleasant because the targeting covers the spread from .Net 4.6.1 to netstandard2.0 to netstandard2.1, and the test library covers several .Net runtimes. After this, I’d really like to never have to type “#if/#else/#endif” again (I do religiously use ReSharper/Rider’s ALT-CTRL-J “surround with” feature, which helps, but you get my point).

The new bits and releases are:

  • Lamar v3.2.0 adds a netstandard2.1 target
  • LamarCompiler v2.1.0 adds a netstandard2.1 target (probably only used by Jasper, but who knows who’s picked it up, so I updated it)
  • LamarCodeGeneration v1.1.0 adds a netstandard2.1 target but is otherwise unchanged
  • Lamar.Microsoft.DependencyInjection v4.0.0 — this is the adapter library to replace the built-in DI container in ASP.Net Core with Lamar. This is a full point release because some of the method signatures changed. I deleted the special Lamar handling for IOptions<T> and ILogger<T> because it no longer added any value. If your application targets netcoreapp3.0, the UseLamar() method and its overloads hang off of IHostBuilder instead of IWebHostBuilder (see the sketch after this list). If you are remaining on ASP.Net Core v2.*, UseLamar() still hangs off of IWebHostBuilder.
  • Lamar.Diagnostics v1.1.0 — assuming that you use the Oakton.AspNetCore command line adapter for your ASP.Net Core application, adding a reference to this library adds new commands to expose the Lamar diagnostic capabilities from the command line of your application. This version targets both netstandard2.0 for ASP.Net Core v2.* and netstandard2.1 for ASP.Net Core 3.*.
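For reference, here’s roughly what the netcoreapp3.0 bootstrapping looks like with Lamar swapped in (a minimal sketch based on the standard template):

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)

            // Replace the built-in DI container with Lamar. In the 3.0 world,
            // this extension method hangs off IHostBuilder
            .UseLamar()
            .ConfigureWebHostDefaults(x => x.UseStartup<Startup>());
}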

 

The challenges

The biggest problem was that in both of these projects, I wanted to retain support for .Net 4.6+ and netstandard2.0 or netcoreapp2.* runtime targets. That unfortunately meant a helluva lot of conditional compilation and conditional Nuget references per target framework. In some cases, the move from the old ASP.Net Core <=2.* IWebHostBuilder to the newer, unified IHostBuilder took some doing, in both the real code and the unit tests. Adding to the fun is that there are sometimes real differences between the old, full .Net framework, netcoreapp2.0, netcoreapp2.1/2.2, and certainly the brand new netcoreapp3.0.

Another little hiccup was that the dotnet pack output pathing was changed back to what it was originally in the project.json early days, which broke all our build scripts and had to be adjusted (the artifacts path is now relative to the current directory rather than to the binary target path like it was).

At least on AppVeyor, we had to force the existing image to install the latest .Net SDK as part of the build, because the image we were using didn’t yet include it. I’d assume that is very temporary, and I can’t speak to other hosted CI providers. If you’re wondering how to do that, see this example from Lamar that I got from Jimmy Byrd.

 

Other Projects I’m Involved With

  • StructureMap — If someone feels like doing it, I’d take a pull request to StructureMap.Microsoft.DependencyInjection to put out a .Net Core 3.0 compatible version (really just means supporting the new IHostBuilder instead of or in addition to the old IWebHostBuilder).
  • Alba — Some other folks are working on that one, and I’m anticipating an Alba on ASP.Net Core 3.0 release in the next couple of days. I’ll write a follow-up blog post when that’s ready.
  • Marten — I’m not anticipating much effort here, but we should at least have our testing libraries target netcoreapp3.0 and see what happens. We did have some issues with netcoreapp2.1, so we’ll see, I guess.
  • Jasper — I’ve admittedly had Jasper mostly on ice until .Net Core 3.0 was released. I’ll start work on Jasper next week, but that is going to be a true conversion up to netcoreapp3.0 plus some additional structural changes to finally get to a 1.0 release of that thing.
  • Storyteller — I’m not sure yet, but I’ve started doing a big overhaul of Storyteller that’s gotten derailed by the .Net Core 3.0 stuff.

 

Lamar 3.1 – New Diagnostics and more

Lamar 3.1.0 was published today with a few fixes, new diagnostics, and a newly improved equivalent to StructureMap’s old “Forward” registration. The full details are here.

There’s one big bug fix for multi-threaded usage that was impacting folks using Lamar with MediatR (this one is worth a later blog post about debugging problems, because it took several folks and at least two attempts from me to finally diagnose and solve).

“Inverse” Registration

For the not uncommon need to have one single object be the implementation for multiple role interfaces within the container, there’s the new “inverse” registration mechanism as shown below:

[Fact]
public void when_singleton_both_interfaces_give_same_instance()
{
    var container = new Container(services =>
    {
        services.Use<Implementation>()
            .Singleton()
            .For<IServiceA>()
            .For<IServiceB>();
    });

    var instanceA = container.GetInstance<IServiceA>();
    var instanceB = container.GetInstance<IServiceB>();

    instanceA.ShouldBeTheSameAs(instanceB);
}

What you see above is registering the concrete type Implementation as the shared, underlying service for both the IServiceA and IServiceB registrations.

HowDoIBuild

Lamar has been able to show you the “build plan” for registrations since the beginning, but now there’s a helper Container.HowDoIBuild() method that gives you easier access to this information, which can be helpful for troubleshooting IoC configuration issues.
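A minimal sketch of using it, reusing the IServiceA and Implementation types from the test above:

var container = new Container(services =>
{
    // A sample registration, just so there's something to explain
    services.For<IServiceA>().Use<Implementation>();
});

// HowDoIBuild() returns a textual report of the "build plan"
// Lamar will use to resolve each registration
Console.WriteLine(container.HowDoIBuild());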

Lamar.Diagnostics

More on this one later, but there’s also a brand new package called Lamar.Diagnostics that, in conjunction with Oakton.AspNetCore, adds command line access to all the Lamar diagnostic capabilities directly to your ASP.Net Core application.

 

 

Oakton 2.0 — Improved command line parsing for ASP.Net Core

I published an update to Oakton last week that pushed it to 2.0. As a reminder, Oakton is yet another command line parsing tool for .Net, but differs a bit from other alternatives by separating the mechanics of command line parsing away from processing commands.


At a high level, the feature set is:

  • Support for one or more distinct commands in a single command line tool ala the dotnet CLI (“run”, “new”, “test”, etc.) or the git command line (“add”, “checkout”, “push”, “pull”, etc.)
  • *nix style optional flags, including the single dash shorthand (think “git clean -xfd”)
  • Synchronous or asynchronous commands inside the same tool
  • Integrated command syntax help


The “one model in” basic approach and most of the core code of Oakton were rescued and updated from the leftover command line support originally written for FubuMVC back in about 2011, as I recall, so I think I can claim that Oakton is pretty battle tested.
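For illustration, here’s roughly what that “one model in” approach looks like with a hypothetical “hello” command (Oakton’s convention is that a “Flag” suffix on an input property makes it an optional flag):

// A hypothetical command showing Oakton's "one model in" approach
public class HelloInput
{
    // A positional argument
    public string Name { get; set; }

    // The "Flag" suffix makes this an optional flag, used like: hello Jeremy --loud
    public bool LoudFlag { get; set; }
}

[Description("Greet somebody by name")]
public class HelloCommand : OaktonCommand<HelloInput>
{
    public override bool Execute(HelloInput input)
    {
        var greeting = $"Hello, {input.Name}!";
        Console.WriteLine(input.LoudFlag ? greeting.ToUpperInvariant() : greeting);

        // Returning true denotes success
        return true;
    }
}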

 

Context is Important

I’ve been thinking recently about how Agile changed software development in important ways, and getting back to some of the lessons I felt we as an industry learned from Extreme Programming back in the day. Depending on my time and ambition level, I might try to spit out some essays on team organization, feedback, engineering practices, and continuous delivery. It should be implied anyway, but I need to be clear that all the contents of this blog are my opinions alone and do not reflect those of my employer (Calavista). I.e., I’m the only one to blame;)

I’m hopefully near the end of what’s been a very frustrating project. There’s more to the story of course, but I ascribe many of the challenges we have faced to the decision to execute the project with a true waterfall lifecycle. Being forced back to waterfall development has naturally led me to reflect on what I think are the crucial differences between that model and Agile development techniques, and to remember just why Agile development was such a revolutionary movement (before it was all watered down by Scrum, but that’s a subject for another day). That reflection includes feedback cycles, team and organization mechanics, project management, and reversibility — but for now, let’s just talk about context.

Picking up the definition of “context” from Dictionary.com:

the set of circumstances or facts that surround a particular event, situation, etc.

In terms of software development, understanding the “context” of your work means knowing how that work fits into the greater tableau of the system and ecosystem. It also means understanding the “why” behind the technical decisions that have been made so far and behind the architectural approach.

It turns out that understanding the context around the software system you are building is very important. Thank you, Captain Obvious, right?


Now, let’s try to use the word “context” in some sentences about software development:

  • As an architect, I make more effective decisions when I understand the context of how our work fits into the greater enterprise ecosystem. One of the things I think I’ll be doing in the next couple weeks is to give a little presentation to our client on how our dev lead and I think the architecture of our project should be streamlined in the future by consolidating responsibilities, simplifying the topology, and reducing integration chattiness. Why didn’t I and others just make better decisions upfront, you ask? Conway’s Law* is a partial explanation, and our future recommendations assume that the organization in question will eliminate that artificial barrier in the future. Mostly though, we made decisions very early on, based on a limited view of a specific initiative in isolation. Now that we know much more about other system interactions and related projects going on at our client, we see a path to a much simpler and more reliable architecture — but we could only get to that point by seeing the bigger picture rather than dealing with “build this console app to move data from point A to point B.”
  • As any kind of technical leader, I can better set up other developers for success by explaining the context around their tasks. I learned a brutal lesson early on as a technical lead: the more specific the instructions I gave to the other developers working on our project, the worse the code came back. When I just had a discussion with developers about what we were trying to achieve and how their tasks tied into the greater workflow of the project, the results were very obviously better. It turns out that developers can make much better decisions when they understand the “why” of the work that they’re doing.
  • As an OSS author and maintainer, I do a much better job helping users with questions when I understand what the users are trying to accomplish rather than just trying to answer specific API questions. It’s time consuming to ask a lot of questions, but the results are usually better in the end. It does lead to a lot of “why would you possibly need to do that?” questions on my part before I understand what their scenario really is. It’s not uncommon to give much easier solutions to their real issues once I know the context behind the original question. That really depends on catching me at a moment when I’m not too busy or stressed out by other things and the other person has enough patience to go back and forth, but hey, it’s an imperfect world.
  • As a developer, I’m much faster at fixing bugs and addressing questions when the context of the problem is fresh in my mind. I distinctly remember making the argument back in the early to mid-00’s that developers would be able to fix bugs much faster in an Agile process, where testing happens simultaneously and the code is always fresh in your mind, versus waterfall development, where you’re frequently needing to address code you haven’t worked with in several months. After going back to waterfall development for the past 12 months, I can attest that this argument is very much true.
  • As a tester, I can more effectively troubleshoot problems and write actionable defect reports when I understand the environmental context of the functionality I’m testing. Things inevitably go wrong, and good testers help solve those problems when they can investigate the failures, find the relevant logs, and give developers the information they need in order to further diagnose or fix the problems.
  • As an architect, or any kind of technical leader giving technical recommendations and guidance to others, I should explain the “why” and the context behind any guidance. This is absolutely vital, because my experience has been that folks will eventually face scenarios where your guidance or advice does not apply very well or might even be flat out harmful. You can’t just say “always do this or that.” You need to explain why you gave the recommendations, to hopefully let folks know when they should break out and look for other solutions rather than force fitting a “standard architecture.” My strong advice is to always include some context about why any particular reference architecture came about so that developers can be more effective rather than blindly following along.
  • As a developer (really as any possible role), I am less effective when I have to deal with a lot of context switching during the work day. I’ve long been in roles where I need to try to answer questions from other folks and I’ve always prided myself on my ability to quickly shift contexts and help folks out, but it definitely wears on me and I’m not as sharp as I am when it’s questions about something I’m actively working on at the time. Personally, I see my productivity lately going down the drain as I’m having to do an unusual amount of context switching between different projects.

Context and the Agile vs Waterfall Argument

I’m both a long time advocate of Agile methods and a long time critic of waterfall methods, and the issue of context and context shifting really drives home the differences to me. I feel like there’s both much more sharing of project context in Agile teams and less context shifting than you frequently see in waterfall organizations.

In an Agile team there’s simply much more face to face interaction (or at least should be) between different disciplines as opposed to waterfall style toss-the-documents-over-the-wall communication. You would hope that would lead to much more shared understanding of why decisions are made and understanding of the context around the team’s work.

In a typical Agile shop, you would try to create a self-contained, multi-disciplinary team that is completely focused on one stream of work. Developers, analysts, scrum masters, testers, and whatnot all working together and bearing down on a single backlog.

In an extreme waterfall shop**, a project team may really be a larger, virtual team of individuals who are all simultaneously working on other unrelated work streams. After all, waterfall theory says that the analysts should be able to hand over the requirements document and then move on to another project. Testers shouldn’t even come into play until all the coding is done, so they’re not even around while the requirements are being formulated and the initial coding is happening. The architect, if you have one, might not be fully engaged after the initial technical specification was handed off (dunno if any organization still tries to do that, but I experienced that in my first role as an architect).

The folks in a waterfall shop are probably doing a lot more harmful context shifting than their counterparts in an Agile shop. The waterfall folks may also be getting much less information about their project because it’s constrained through documentation rather than through higher bandwidth, interactive discussions between disciplines. If you’ve never encountered this before, I highly recommend taking a read through Alistair Cockburn’s paper on Communicating, Cooperative Teams.

 

* Just a reminder that Conway’s Law is neither good, bad, nor an anti-pattern. It simply is. Better shops will be aware of Conway’s Law and intentionally organize themselves to support their desired enterprise architecture.

** My description of a waterfall shop may sound like a straw man argument, but it really exists even today and I’ve run into this model in multiple big shops now.