Putting SOLID into Perspective

This is the 3rd part of a multi-part series where I’m formulating my thoughts about an ongoing initiative at MedeAnalytics. I started with a related post called On Giving Technical Guidance to Others that’s a synopsis of an impromptu lecture I gave our architecture team about all the things I wish I’d known before becoming any kind of technical leader. The first post was What Is Good Code? where I attempted to put a marker down on the inherent qualities we want in code before bothering to talk about ways to arrive at “good code.” As time permits, I’m training a rhetorical double-barreled shotgun at the Onion or Clean architectures next in an attempt to see those completely banned in my shop, followed by some ranting about database abstractions.

I think many people have put the old SOLID Principles on a pedestal where the principles are actually rules. “SOLID as rules” has maybe become the go-to standard for understanding or defining what is good code or architecture for a sizable segment of the software development community. I’ve frequently heard folks say that they “write SOLID code,” but even after having been exposed to these principles for almost 20 years I have no earthly idea what that really means. I even had a functional programming advocate tell me that I could use SOLID with functional programming, and I have even less of an idea about how 30-year-old rules specific to class-oriented programming have any relevance for FP other than maybe as a vague notion toward writing cohesive code.

Dan North recently made a tongue-in-cheek presentation saying that each SOLID principle was wrong and that you should just write simple code — whatever the hell that means.

As the section title says, I think we need to put the SOLID principles into a bit of perspective as neither a set of authoritative rules that can be used in isolation to judge the worthiness of your code nor something that is completely useless or wrong. Rather than throw the baby out with the bathwater, I would describe the SOLID principles as a sometimes helpful heuristic.

Heuristics are methods or strategies which often lead to a problem solution but are not guaranteed to succeed.

https://www.simplypsychology.org/what-is-a-heuristic.html

Rather than a hard and fast rule, the SOLID principles can be used as a mental tool to think through the consequences of a coding design or to detect potential problems seeping into your code. One of the realizations I’ve made over the years is that there’s a wide variance in how developers think about coding problems and the types of techniques that fit these different mental models. I find SOLID to be somewhat useful, while others find it to be a stuffy set of rules that bear no resemblance to anything they themselves think about in terms of quality code.

Let’s run through the principles and I’ll do my best to tell you what I think they mean and how applicable or useful they are:

Single Responsibility Principle (SRP) — “There should never be more than one reason for a class to change.”

It’s really a restatement of the quality of cohesion, which I certainly think is important. As many others have pointed out over the years though, this is vaguely worded, prone to a wide range of interpretation, and frequently leads to idiotic, masturbatory “how many angels can dance on the head of a pin” arguments about how finely sliced the code should be and what a “responsibility” actually is. I think this is really a “by feel” kind of test, and a highly subjective one at that.

Another old rule of thumb is to just ask yourself if every responsibility of a piece of code directly relates to its name. Except that also starts another argument about what exactly is a responsibility. Sigh, let’s move on for now, but in a later section I will talk about Responsibility Driven Design as an actually effective way to decide on what a responsibility actually is.

Open Closed Principle (OCP) — “Software entities … should be open for extension, but closed for modification.”

I wrote an article about this way, way back in 2008 for MSDN that I think is still relevant. Just think on this for a bit: is it easier to go into some existing code and modify it to add new behavior or change the way it works today, or to write all new code that’s relatively decoupled from the existing code, so that your new code has fewer potential unintended side effects? I think this comes up much more in designing software frameworks than in day to day feature code, but it’s still something I use as a consideration when putting together code. In practice it’s just looking for ways to structure your code so that adding new features is mostly done by adding all new code files.

Consider building a web API of some sort. If you use an MVC framework like ASP.NET Core MVC that can auto-discover new Controller methods at startup time, you’re able to add new APIs without changing the code in other controller files. However, if you’re naively using a Sinatra-flavored approach, you may have to continuously break into the same routing definition file to make changes for every single new API route. The first approach is “OCP-compliant”, but the second approach could easily be considered to be simpler, and hence better in many cases.
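
To make that concrete, here’s a minimal sketch of the two approaches with a hypothetical controller and routes (none of this is from a real project):

using System;
using Microsoft.AspNetCore.Mvc;

// "OCP-compliant" style: ASP.NET Core MVC discovers this controller at startup,
// so a brand new API can be added as a brand new file without touching any
// existing controller or routing code.
[ApiController]
[Route("api/reports")]
public class ReportController : ControllerBase
{
    [HttpGet("{id}")]
    public IActionResult GetReport(Guid id) => Ok(new { Id = id });
}

// The Sinatra-flavored alternative concentrates every route in one place, so
// each new feature means editing this same block of code again and again:
//
//   app.MapGet("/api/reports/{id}", (Guid id) => Results.Ok(new { Id = id }));
//   app.MapGet("/api/users/{id}",   (Guid id) => ...);
//   app.MapPost("/api/orders",      (Order order) => ...);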

Once again, OCP is a useful tool to think through possible designs in code, but not really any kind of inviolable rule. Moreover, I’d say that OCP more or less comes out a lot of the time as “pluggability,” which is a double-edged sword that’s both helped and hindered anyone who’s been a developer for any length of time.

Liskov Substitution Principle (LSP) — “Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it.”

A casual reading of that will just lead you to a restatement of polymorphism, which is fine I guess, but doesn’t really help us necessarily write better code. Going a little deeper, I’d say that what is important is that the client code for any interface or published API should not be making any assumptions about the underlying implementation, and is therefore less likely to break when using a new implementation of the same interface. If you want another way to think about this, maybe the leaky abstraction anti-pattern is an easier heuristic.

Interface Segregation Principle (ISP) — “Clients should not be forced to depend upon interfaces that they do not use.”

I mostly interpret this as another way to say Role Interface, which is an exhortation to make interfaces be focused to just the needs of a client and only expose a single role to that client. I do pay attention to this in the course of my work on OSS projects that are meant to be used by other developers.

You could make the case that ISP is somewhat a way to optimize the usage of Intellisense or code completion features for folks consuming your API in an IDE, and I think that’s a perfectly valid goal that improves usability.

As an example from my own work, the Jasper project has an important interface called IExecutionContext that currently contains some members meant to be exposed to Jasper message handler code. And it also currently contains some members that are strictly for the usage of Jasper internals and could cause harm or unintended consequences if used inappropriately by developers using Jasper in their own code. ISP suggests that that interface should be changed or split up based on intended roles, and in this particular case, I would independently agree with ISP and I definitely intend to address that at some point soon.
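
Just to illustrate the flavor of that kind of split, with completely made up names rather than Jasper’s actual API, an ISP-friendly version might look something like this:

using System;
using System.Threading.Tasks;

// The narrow, public-facing role that message handler code should see
public interface IMessageContext
{
    Task SendAsync<T>(T message);
    Task PublishAsync<T>(T message);
}

// The framework-only role that application developers should never touch
internal interface IEnvelopeLifecycle
{
    Task CompleteAsync();
    Task MoveToDeadLetterQueueAsync(Exception ex);
}

// One concrete class can still implement both roles internally, but consumers
// only ever depend on the narrow interface that's appropriate for them
internal class MessagePipelineContext : IMessageContext, IEnvelopeLifecycle
{
    public Task SendAsync<T>(T message) => Task.CompletedTask;
    public Task PublishAsync<T>(T message) => Task.CompletedTask;
    public Task CompleteAsync() => Task.CompletedTask;
    public Task MoveToDeadLetterQueueAsync(Exception ex) => Task.CompletedTask;
}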

I see ISP coming up far more often when building infrastructure code, but occasionally in other code just where it’s valuable to separate the interface for mutating an object and a separate interface for consumers of data. I’ve never understood why this principle made the SOLID canon when more important heuristics did not — other than the authors really needed to say “Pat, I’d like to buy a vowel” to make the acronym work.

Dependency Inversion Principle — “Depend upon abstractions, [not] concretions.”

For some background for those of you who stumble into this and have no idea who I am, I’m the author of StructureMap (and its modern successor Lamar), the original production-capable IoC tool in the .NET ecosystem — the one development environment that most embraced IoC tools in all their glory and folly. By saying all of this, you would expect me to be the one person in the entire world who would go to bat for this principle.

But nope, I’m mostly indifferent to this other than I probably follow it mostly out of inertia. Sometimes it’s absolutely advantageous to build up an interface by developing the client first, then happily generate the concrete stubs for the interface with the IDE of your choice. It’s of course valuable to allow for swapping out implementations when you really do have multiple implementations of a single interface. I’d really urge folks though to avoid building unnecessary abstractions for things like domain model types or message bodies.

To sum up the principles and their usefulness:

  • SRP — Separation of concerns is important in code, but the SRP is too vaguely worded by itself to be hugely helpful
  • OCP — It’s occasionally helpful for thinking through an intended architecture or adjusting an architecture that’s proving hard to change. I don’t think it really comes up too often
  • LSP — Leaky abstractions can be harmful, so no argument from me here, but like all things, the impact is pretty variable and I wouldn’t necessarily make this a hard rule
  • ISP — Important here and there if you’re building APIs for other developers, but probably not applicable on a daily basis
  • DIP — Overblown, and probably causes a little more harm than good for folks who over-apply it

All told, I think SOLID is still somewhat useful as a set of sometimes applicable heuristics, but very lacking as an all encompassing strategy for writing good code all by itself and absurd to use as a set of inviolable rules. So let’s move on to some other heuristic tools that I actually use more often myself.

But what about CUPID?!?

Since it’s the new shiny object, and admittedly one of the reasons I finally got around to writing my own post, let’s talk about Dan North’s new CUPID properties he proposed as a “joyful” replacement or successor to SOLID. To be honest, I at first blew off CUPID as yet another example of celebrity programmers who are highly entertaining, engaging, and personable but don’t really bring a lot of actual intellectual content to the discussion. That’s most likely unfair, so I made myself take CUPID a little more seriously while writing this post and read it much more carefully the second time around.

I will happily recommend reading the CUPID paper. I don’t find it to be specific enough to be actionable, but as a philosophical starting point it’s pretty solid (no pun intended). As an overworked supporter of a heavily used OSS library, I very much appreciate his emphasis on writing code within the idioms of the language, toolset, and codebase you’re in rather than trying to force code to fit your preconceived notions of how it should be. A very large proportion of the nastier problems I help OSS users with comes from stepping outside of the intended idiomatic usage of libraries, the programming language, or the application frameworks they’re using.

Other Heuristics I Personally Like

When I first joined my current company, my boss asked me to do an internal presentation about the SOLID Principles as a way to improve our internal development. I did indeed do that, but only as part of a larger presentation on different software design heuristics that included other models I personally find frankly more useful than SOLID. I’d simply recommend that you give mental tools like these a try to see if they fit with the way you work, but certainly don’t restrict yourself to my arbitrary list or force yourself to use a mental tool that doesn’t work for you.

Responsibility Driven Design

To oversimplify, software design is the act of taking a big, amorphous set of intended functionality and dividing it into achievable chunks of code that somehow make sense when they’re all put together. To that end, the single most useful mental tool in my career has been Responsibility Driven Design (RDD).

I highly recommend Rebecca Wirfs-Brock’s A Brief Tour of Responsibility-Driven Design slide deck. In particular, I find her description of Object Role Stereotypes a very useful way of discovering and assigning responsibilities to code artifacts within a system.

GRASP Patterns

Similar to RDD is the GRASP patterns from Craig Larman that again can be used to help you decide how to split and distribute responsibilities within your code. At least in OOP, I especially use the Information Expert pattern as a guide to assign responsibilities in code.
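
As a tiny, hypothetical illustration of Information Expert: the object that already holds the data gets the behavior that needs it.

using System.Collections.Generic;
using System.Linq;

public class OrderLine
{
    public decimal Price { get; set; }
    public int Quantity { get; set; }
}

public class Order
{
    public List<OrderLine> Lines { get; } = new();

    // Order owns the line items, so Order is the "information expert"
    // for calculating its own total -- no external OrderTotalCalculator
    // reaching into its data
    public decimal CalculateTotal() => Lines.Sum(line => line.Price * line.Quantity);
}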

Command Query Separation

I’m referring to the older coding concept rather than the later, much larger CQRS style of architecture. I’m going to be lazy again and just refer to Fowler’s explanation. I pay attention to this as a way of making sure your code is more predictable, and it falls in line with my concern about being careful about when and where you mutate state within your codebase.
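
Here’s a quick, hypothetical sketch of what that looks like in practice: queries answer questions without side effects, while commands change state and return nothing.

using System.Collections.Generic;

public class ShoppingCart
{
    private readonly List<string> _items = new();

    // Queries: no side effects, safe to call as often as you like
    public int ItemCount() => _items.Count;
    public bool Contains(string sku) => _items.Contains(sku);

    // Commands: mutate state and deliberately return nothing
    public void AddItem(string sku) => _items.Add(sku);
    public void RemoveItem(string sku) => _items.Remove(sku);
}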

Don’t Repeat Yourself (DRY) or Once and Only Once

From the Pragmatic Programmer (still on my bookshelf after 20 years of moves):

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system

https://en.wikipedia.org/wiki/Don%27t_repeat_yourself

It’s an imperfect world folks. Duplication in code can easily be problematic when business rules or technologies need to change. Or when you’ve copy/pasted or implemented the same bug all over the code.

Unfortunately, people have used DRY as the motivation behind doing horrendously harmful things with abstractions, frameworks, and generics (and frequently come to discussion boards wanting my help making their custom framework work with my OSS tools). Somewhat because of that, there’s a nasty backlash against DRY. I’d again urge folks to not throw the DRY baby out with the bathwater. Try to be mindful about duplication in your code, but back off when the effort to remove that duplication adds complexity that feels like it’s more harmful than helpful.

“A-Frame Architecture”

I’ll happily recommend Jim Shore’s Testing Without Mocks: A Pattern Language paper, but I’d like to specifically draw your attention to what he terms the “A-Frame Architecture” as a way to decouple business logic from infrastructure, maximize testability, but also avoid going into unnecessarily complex abstractions.
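
To give a rough idea of the shape with a completely hypothetical example: the pure business logic and the infrastructure never reference each other, and a thin application layer at the top of the “A” coordinates the two.

using System;
using System.Threading.Tasks;

public record Invoice(Guid Id, decimal Amount, DateTime DueDate);

// Logic leg of the "A": pure calculation, no I/O, trivially unit testable
public static class LateFeeCalculator
{
    public static decimal CalculateLateFee(Invoice invoice, DateTime today)
        => invoice.DueDate < today ? invoice.Amount * 0.05m : 0m;
}

// Infrastructure leg of the "A": knows about the database, knows nothing
// about the late fee rules
public interface IInvoiceStore
{
    Task<Invoice> LoadAsync(Guid id);
    Task SaveFeeAsync(Guid id, decimal fee);
}

// Application layer at the top: loads state, delegates to the pure logic,
// then hands the result back to the infrastructure
public class LateFeeService
{
    private readonly IInvoiceStore _store;
    public LateFeeService(IInvoiceStore store) => _store = store;

    public async Task AssessLateFeeAsync(Guid id)
    {
        var invoice = await _store.LoadAsync(id);
        var fee = LateFeeCalculator.CalculateLateFee(invoice, DateTime.UtcNow);
        await _store.SaveFeeAsync(id, fee);
    }
}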

Code Smells

Take the time to read through the chapter on Code Smells in the Refactoring book some time. Code smells are an easy mental tool to notice possible problems in your code. A smell doesn’t necessarily mean that your code is bad, but rather that something might require more of your attention.

More content on Wikipedia.

Anti-Patterns

Similar to code smells, anti-patterns are previously identified ideas, habits, or practices that are known to lead to bad outcomes. I’d spend more time on this, but it’s showing my age because the AntiPatterns website was very clearly written with FrontPage ’97/’98!

Tell, Don’t Ask

I’m running out of steam, so I’m going to refer you to Martin Fowler’s TellDontAsk. This is a shorthand test that can help you improve encapsulation and coupling between elements of code. It’s a complement to “Information Expert” from the GRASP patterns or “feature envy” from code smells. As usual, there are a lot of ways to try to express the same idea in coding, so just use whatever metaphor or heuristic works best for you.
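
Here’s a tiny, hypothetical before and after to show the difference:

using System;

public class Account
{
    public decimal Balance { get; private set; }
    public decimal OverdraftLimit { get; private set; }

    // "Tell" style: the object that owns the data enforces its own rules
    public void Withdraw(decimal amount)
    {
        if (Balance - amount < -OverdraftLimit)
            throw new InvalidOperationException("Insufficient funds");

        Balance -= amount;
    }
}

// "Ask" style for contrast -- the caller pulls the state out and makes the
// decision itself, which spreads the overdraft rule across every call site
// and forces Balance to have a public setter:
//
//   if (account.Balance - amount < -account.OverdraftLimit)
//       throw new InvalidOperationException("Insufficient funds");
//   account.Balance -= amount;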

Next time on “Jeremy expands on his Twitter rants…”

It’s finally time to explain why I think prescriptive architectural styles like Clean or Onion are problematic and why I’m trying to pull rank at work to ban these styles in new development.

Resetting Marten Database State Between Tests

TL;DR: Marten has a new method in V5 called ResetAllData() that’s very handy for rolling back database state to a known point in automated tests.

I’m a big believer in utilizing intermediate level integration tests. By this I mean the middle layer of the typical automated testing pyramid where you’re most definitely testing through your application’s infrastructure, but not necessarily running the system end to end.

Now, any remotely successful test automation strategy means that you have to be able to exert some level of control over the state of the system leading into a test because all automated tests need the combination of known inputs and expected outcomes. To that end, Marten has built in support for completely rolling back the state of a Marten-ized database between tests that I’ll be demonstrating in this post.

When I’m working on a system that uses a relational database, I’m a fan of using Respawn from Jimmy Bogard, which helps you roll back the state of a database to its beginning point as part of integration test setup. Likewise, Marten has the “clean” functionality for the same purpose:

public async Task clean_out_documents(IDocumentStore store)
{
    // Completely remove all the database schema objects related
    // to the User document type
    await store.Advanced.Clean.CompletelyRemoveAsync(typeof(User));

    // Tear down and remove all Marten related database schema objects
    await store.Advanced.Clean.CompletelyRemoveAllAsync();

    // Deletes all the documents stored in a Marten database
    await store.Advanced.Clean.DeleteAllDocumentsAsync();

    // Deletes all of the persisted User documents
    await store.Advanced.Clean.DeleteDocumentsByTypeAsync(typeof(User));

    // For cases where you may want to keep some document types,
    // but eliminate everything else. This is here specifically to support
    // automated testing scenarios where you have some static data that can
    // be safely reused across tests
    await store.Advanced.Clean.DeleteDocumentsExceptAsync(typeof(Company), typeof(User));
    
    // And get at event storage too!
    await store.Advanced.Clean.DeleteAllEventDataAsync();
}

So that’s tearing down data, but many if not most systems will need some baseline reference data to function. We’re still in business though, because Marten has long had a concept of initial data applied to a document store on its start up with the IInitialData interface. To illustrate that interface, here’s a small sample implementation:

    internal class BaselineUsers: IInitialData
    {
        public async Task Populate(IDocumentStore store, CancellationToken cancellation)
        {
            using var session = store.LightweightSession();
            session.Store(new User
            {
                UserName = "magic",
                FirstName = "Earvin",
                LastName = "Johnson"
            });

            session.Store(new User
            {
                UserName = "sircharles",
                FirstName = "Charles",
                LastName = "Barkley"
            });

            await session.SaveChangesAsync(cancellation);
        }
    }

And the BaselineUsers type could be applied like this during initial application configuration:

using var host = await Host.CreateDefaultBuilder()
    .ConfigureServices(services =>
    {
        services.AddMarten(opts =>
        {
            opts.Connection("some connection string");
        }).InitializeWith<BaselineUsers>();
    }).StartAsync();

Or, maybe a little more likely, if you have some reference data that’s only applicable for your automated testing, we can attach our BaselineUsers data set to Marten, but only in our test harness, with usage like this:

// First, delegate to your system under test project's
// Program.CreateHostBuilder() method to get the normal system configuration
var host = await Program.CreateHostBuilder(Array.Empty<string>())

    // But next, apply initial data to Marten that we need just for testing
    .ConfigureServices(services =>
    {
        // This will add the initial data to the DocumentStore
        // on application startup
        services.InitializeMartenWith<BaselineUsers>();
    }).StartAsync();

For some background, as of V5 the mechanics for the initial data set feature moved to executing in an IHostedService, so there’s no more issue of asynchronous code being called from synchronous code with the dreaded “will it deadlock or am I feeling lucky?” GetAwaiter().GetResult() mechanics.

Putting it all together with xUnit

The way I like to do integration testing with xUnit (the NUnit mechanics would involve static members, but the same concepts of lifetime still apply) is to have a “fixture” class that will bootstrap and hold on to a shared IHost instance for the system under test between tests like this one:

    public class MyAppFixture: IAsyncLifetime
    {
        public IHost Host { get; private set; }

        public async Task InitializeAsync()
        {
            // First, delegate to your system under test project's
            // Program.CreateHostBuilder() method to get the normal system configuration
            Host = await Program.CreateHostBuilder(Array.Empty<string>())

                // But next, apply initial data to Marten that we need just for testing
                .ConfigureServices(services =>
                {
                    services.InitializeMartenWith<BaselineUsers>();
                }).StartAsync();
        }

        public async Task DisposeAsync()
        {
            await Host.StopAsync();
        }
    }

Next, I like to have a base class for integration tests that in this case will consume the MyAppFixture above, but also reset the Marten database between tests with the new V5 IDocumentStore.Advanced.ResetAllData() like this one:

    public abstract class IntegrationContext : IAsyncLifetime
    {
        protected IntegrationContext(MyAppFixture fixture)
        {
            Services = fixture.Host.Services;
        }

        public IServiceProvider Services { get; set; }

        public Task InitializeAsync()
        {
            var store = Services.GetRequiredService<IDocumentStore>();

            // This cleans out all existing data, and reapplies
            // the initial data set before all tests
            return store.Advanced.ResetAllData();
        }

        public virtual Task DisposeAsync()
        {
            return Task.CompletedTask;
        }
    }

Do note that I left out some xUnit ICollectionFixture mechanics that you might need in order to make sure that MyAppFixture is really shared between tests. See xUnit’s Shared Context documentation.
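
For completeness, that wiring usually looks something like the following, with an illustrative collection name and test class:

using Xunit;

// The collection definition is just a marker class that tells xUnit to create
// one MyAppFixture and share it across every test class in the collection
[CollectionDefinition("integration")]
public class IntegrationCollection : ICollectionFixture<MyAppFixture>
{
}

// Individual test classes opt in by name and inherit the database reset
// behavior from IntegrationContext
[Collection("integration")]
public class UserEndpointTests : IntegrationContext
{
    public UserEndpointTests(MyAppFixture fixture) : base(fixture)
    {
    }
}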

A way too early discussion of “Jasper”

After determining that I wasn’t going to be able to easily move the old FubuMVC codebase to the CoreCLR, I’ve been furiously working on the long proposed and delayed successor to FubuMVC that’s going to be called “Jasper.” I’m trying to get in front of a team doing CoreCLR development at work with a working MVP feature set in the next couple weeks. I’m needing to bring a couple other folks from my shop on to help out, and a few folks have been asking what I’m up to just because of the sudden flurry of GitHub activity, so here’s a big ol’ braindump of the roadmap and architectural direction so far.

First, why do this at all instead of switching to another existing service bus?

  1. We’re happy with how FubuMVC’s service bus support has worked out
  2. We need to be “wire compatible” with FubuMVC
  3. We want to do CoreCLR development right now, and NSB/MassTransit isn’t there yet
  4. Jasper will be “xcopy deployable,” which we’ve found to be very advantageous for both development and automated testing
  5. Because I want to — but don’t let my boss hear that

The Vision

Jasper is a next generation application development framework for distributed server side development in .Net (think service bus now and HTTP services later). Jasper is being built on the CoreCLR as a replacement for a small subset of the older FubuMVC tooling. Roughly stated, Jasper intends to keep the things that have been successful in FubuMVC, ditch the things that weren’t, and make the runtime pipeline be much more performant. Oh, and make the stack traces from failures within the runtime pipeline be a whole lot simpler to read — and yes, that’s absolutely worth being one of the main goals.

The current thinking is that we’d have these libraries/Nugets:

  1. Jasper – The core assembly that will handle bootstrapping, configuration, and the Roslyn code generation tooling
  2. JasperBus – The service bus features from FubuMVC and an alternative to MediatR
  3. JasperDiagnostics – Runtime diagnostics meant for development and testing
  4. JasperStoryteller – Support for hosting Jasper applications within Storyteller specification projects.
  5. JasperHttp (later) – Build HTTP micro-services on top of ASP.Net Core in a FubuMVC-esque way.
  6. JasperQueues (later) – JasperBus is going to use LightningQueues as its primary transport mechanism, but I’d possibly like to re-architect that code into a new library inside of Jasper. This library will not have any references or coupling to any other Jasper project.
  7. JasperScheduler (proposed for much later) – Scheduled or polling job support on top of JasperBus

The Core Pipeline and Roslyn

The basic goal of Jasper is to provide a much more efficient and improved version of the older FubuMVC architecture for CoreCLR development that is also “wire compatible” with our existing FubuMVC 3 services on .Net 4.6.

The original, core concept of FubuMVC was what we called the Russian Doll Model, which is now mostly referred to as middleware. The Russian Doll Model architecture makes it relatively easy for developers to reuse code for cross cutting concerns like validation or security without having to write nearly so much explicit code. At this point, many other .Net frameworks support some kind of Russian Doll Model architecture, like ASP.Net Core’s middleware or the Behavior model in NServiceBus.

In FubuMVC, that consisted of a couple parts:

  • A runtime abstraction for middleware called IActionBehavior for every step in the runtime pipeline for processing an HTTP request or service bus message. Behaviors were a linked list chain from the outermost behavior to the innermost (see the simplified sketch after this list). This model was also adapted from FubuMVC into NServiceBus.
  • A configuration time model we called the BehaviorGraph that expressed all the routes and service bus message handling chains of behaviors in the system. This configuration time model made it possible to apply conventions and policies that established what exact middleware ran in what order for each message type or HTTP route. This configuration model also allowed FubuMVC to expose diagnostic visualizations about each chain that was valuable for troubleshooting problems or just flat out understanding what was in the system to begin with.
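
For anyone who never used FubuMVC, here’s a drastically simplified sketch of that shape. This is not the real IActionBehavior signature, just the idea of each behavior wrapping an inner behavior:

using System.Threading.Tasks;

public interface IBehavior
{
    Task Invoke();
}

// Each behavior does its cross cutting work, then delegates to the next
// "doll" in the chain, all the way down to the innermost behavior that
// finally invokes the actual handler
public class ValidationBehavior : IBehavior
{
    private readonly IBehavior _inner;
    public ValidationBehavior(IBehavior inner) => _inner = inner;

    public async Task Invoke()
    {
        // validate the incoming message or request here...

        await _inner.Invoke();
    }
}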

Great, lots of flexibility and some unusual diagnostics, but the FubuMVC model gets a lot uglier when you go to an “async by default” execution pipeline. Maybe more importantly, it suffers from too many object allocations because of all the little objects getting created on every message or HTTP request that hurt performance and scalability. Lastly, it makes for some truly awful stack traces when things go wrong because of all the bouncing between behaviors in the nested handler chain.

For Jasper, we’re going to keep the configuration model (but simplified), but this time around we’re doing some code generation at runtime to “bake” the execution pipeline into a much tighter package, then use the new runtime code compilation capabilities in Roslyn to generate assemblies on the fly.

As part of that, we’re trying every possible trick we can think of to reduce object allocations and minimize the work being done at runtime by the underlying IoC container. The NServiceBus team did something very similar with their version of middleware and claimed an absolutely humongous improvement in throughput, so we’re very optimistic about this approach.
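
The actual Jasper code generation is far more involved than this, but the basic Roslyn mechanics of compiling generated source into an in-memory assembly look roughly like the following (you will likely need additional metadata references depending on your target framework):

using System;
using System.IO;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

var source = @"
public class GeneratedHandler
{
    public string Handle() => ""compiled on the fly"";
}";

// Parse the generated source and compile it into an in-memory assembly
var compilation = CSharpCompilation.Create(
    assemblyName: "JasperGenerated",
    syntaxTrees: new[] { CSharpSyntaxTree.ParseText(source) },
    references: new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) },
    options: new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

using var stream = new MemoryStream();
var result = compilation.Emit(stream);
if (!result.Success) throw new InvalidOperationException("Code generation failed");

// Load the freshly compiled assembly and instantiate the generated type
var assembly = Assembly.Load(stream.ToArray());
var handler = Activator.CreateInstance(assembly.GetType("GeneratedHandler"));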

What’s with the name?

I think that FubuMVC turned some people off by its name (“for us, by us”). This time around I was going for an unassuming name that was easy to remember and just named it after my hometown (Jasper, MO).

JasperBus

The initial feature set looks to be:

  • Running decoupled commands ala MediatR
  • In memory transport
  • LightningQueues based transport
  • Publish/Subscribe messaging
  • Request/Reply messaging patterns
  • Dead letter queue mechanics
  • Configurable error handling rules
  • The “cascading messages” feature from FubuMVC
  • Static message routing rules
  • Subscriptions for dynamic routing — this time we’re looking at using Consul (https://www.consul.io/) for the underlying storage
  • Delayed messages
  • Batch message processing
  • Saga support (later) — but this is going to be a complete rewrite from FubuMVC

There is no intention to add the polling or scheduled job functionality that was in FubuMVC to Jasper.

JasperDiagnostics

We haven’t detailed this one out much, but I’m thinking it’s going to be a completely encapsulated ASP.Net Core application using Kestrel to serve some diagnostic views of a running Jasper application. As much as anything, I think this project is going to be a test bed for my shop’s approach to React/Redux and an excuse to experiment with the Apollo client with or without GraphQL. The diagnostics should expose both a static view of the application’s configuration and a live tracing of messages or HTTP requests being handled.

JasperStoryteller

This library won’t do too much, but we’ll at least want a recipe for being able to bootstrap and teardown a Jasper application in Storyteller test harnesses. At a minimum, I’d like to expose a bit of diagnostics on the service bus activity during a Storyteller specification run like we did with FubuMVC in the Storyteller specification results HTML.

JasperHttp

We’re embracing ASP.net Core MVC at work, so this might just be a side project for fun down the road. The goal here is just to provide a mechanism for writing micro-services that expose HTTP endpoints. I think the potential benefits over MVC are:

  • Less ceremony in writing HTTP endpoints (fewer attributes, no required base classes, no marker interfaces, no fluent interfaces)
  • The runtime model will be much leaner. We think that we can make Jasper about as efficient as writing purely explicit, bespoke code directly on top of ASP.Net Core
  • Easier testability

A couple folks have asked me about the timing on this one, but I think mid-summer is the earliest I’d be able to do anything about it.

JasperScheduler

If necessary, we’ll have another “Feature” library that extends JasperBus with the ability to schedule user supplied jobs. The intention this time around is to just use Quartz as the actual scheduler.

JasperQueues

This is a giant TBD

IoC Usage Plans

Right now, it’s going to be StructureMap 4.4+ only. While this will drive some folks away, it makes the tool much easier to build. Besides, Jasper is already using some StructureMap functionality for its own configuration. I think that we’re only positioning Jasper for greenfield projects (and migration from FubuMVC) anyway.

Regardless, the IoC usage in Jasper is going to be simplistic compared to what we did in FubuMVC and certainly less involved than the IoC abstractions in ASP.net MVC Core. We theorize that this should make it possible to slip in the IoC container of your choice later.

Thoughts on Agile Database Development

I’m flying out to our main office next week and one of the big things on my agenda is talking over our practices around databases in our software projects. This blog post is just me getting my thoughts and talking points together beforehand. There are two general themes here: how I’d do things in a perfect world, and how to make things better within the constraints of the organization and software architecture that we have now.

I’ve been a big proponent of Agile development processes and practices going back to the early days of Extreme Programming (before Scrum came along and ruined everything, much the way that Scrappy ruined Scooby Doo cartoons for me as a child). If I’m working in an Agile way, I want:

  1. Strong project and testing automation as feedback cycles that run against all changes to the system
  2. Some kind of easy traceability from a built or deployed system to exactly the version of the code and its dependencies, preferably automated through your source control processes
  3. Technologies, tools, and frameworks that provide high reversibility to ease the cost of doing evolutionary software design.

From the get go, relational databases have been one of the biggest challenges in the usage of Agile software practices. They’re laborious to use in automated testing, often expensive in time or money to install or deploy, the change management is a bit harder because you can’t just replace the existing database objects the way we can with other code, and I absolutely think they reduce reversibility in your system architecture compared to other options. That being said, there are some practices and processes I think you should adopt so that your Agile development process doesn’t crash and burn when a relational database is involved.

Keep Business Logic out of the Database, Period.

I’m strongly against having any business logic tightly coupled to the underlying database, but not everyone feels the same way. For one reason, stored procedure languages (tSQL, PL/SQL, etc.) are very limited in their constructs and tooling compared to the languages we use in our application code (basically anything else). Mostly though, I avoid coupling business logic to the database because having to test through the database is almost inevitably more expensive both in developer effort and test run times than it would be otherwise.

Some folks will suggest that you might want to change out your database later, but to be honest, the only time I’ve ever done that in real life is when we moved from RavenDb to Marten where it had little impact on the existing structure of the code.

In practice this means that I try to:

  1. Eschew usage of stored procedures. Yes, I think there are still some valid reasons to use sprocs, but I think that they are a “guilty until proven innocent” choice in almost any scenario
  2. Pull business logic away from the database persistence altogether whenever possible. I think I’ll be going back over some of my old designing for testability blog posts from the Codebetter/ALT.Net days to try to explain to our teams that “wrap the database in an interface and mock it” isn’t always the best solution in every case for testability
  3. Favor persistence tools that invert the control between the business logic and the database over tooling like Active Record that creates a tight coupling to the database. What this means is that instead of having business logic code directly reading and writing to the database, something else (Dapper if we can, EF if we absolutely have to) is responsible for loading and persisting application state back and forth between the domain in code and the underlying database. The point is to be able to completely test your business logic in complete isolation from the database.

I would make exceptions for use cases where using the database engine to do set based logic in a stored procedure is a more efficient way to solve the problem, but I haven’t been involved in systems like that for a long time.

 

Database per Developer/Tester/Environment

My very strong preference and recommendation is to have each developer, tester, and automated testing environment using a completely separate database. The key reason is to isolate each thread of team activity to avoid simultaneous operations or database changes from interfering with each other. Sharing the database makes automated testing much less effective because you often get false negatives or false positives from database activity going on somewhere else at the same time — and yes, this really does happen and I’ve got the scars to prove it.

Additionally, it’s really important for automated testing to be able to tightly control the inputs to a test. While there are some techniques you can use to do this in a shared database (multi-tenancy usage, randomized data), it’s far easier mechanically to just have an isolated database that you can easily control.

Lastly, I really like being able to look through the state of the database after a failed test. That’s certainly possible with a shared database, but it’s much easier in my opinion to look through an isolated database where it’s much more obvious how your code and tests changed the database state.

I should say that I’m concerned here with logical separation between different threads of activity. If you do that with truly separate databases or separate schemas in the same database, it serves the same goal.

“The” Database vs. Application Persistence

There are two basic development paradigms to how we think about databases as part of a software system:

  1. The database is the system and any other code is just a conduit to get data back and forth between the database and its consumers
  2. The database is merely the state persistence subsystem of the application

I strongly prefer and recommend the 2nd way of looking at that, and act accordingly. That’s admittedly a major shift in thinking from traditional software development or database centric teams.

In practice, this generally means that I very strongly favor the concept of an application database that is only accessed by one application and can be considered to be just part of the application. In this case, I would opt to have all of the database DDL scripts and migrations in the source control repository for the application. This has a lot of benefits for development teams:

  1. It makes it dirt simple to correlate the database schema changes to the rest of the application code because they’re all versioned together
  2. Automated testing within continuous integration builds becomes easier because you know exactly what scripts to apply to the database before running the tests
  3. No need for elaborate cascading builds in your continuous integration setup because it’s just all together

In contrast, a shared database that’s accessed by multiple applications brings a lot more potential friction. The version tracking between the two moving parts is harder to understand and it harms your ability to do effective automated testing. Moreover, it’s wretchedly nasty to allow lots of different applications to float on top of the same database in what I call the “pond scum anti-pattern” because it inevitably causes nasty coupling issues that will almost certainly result in regression bugs, since it’s so much harder to understand how changes in the database will ripple out to the applications sharing the database. A much, much younger version of myself walked into a meeting and asked our “operational data store” folks to add a column to a single view and got screamed at for 30 minutes straight on why that was going to be impossible and do you know how much work it’s going to be to test everything that uses that view, young man?

Assuming that you absolutely have to continue to use a shared database like my shop does, I’d at least try to ameliorate that by:

  • Make damn sure that all changes to that shared database schema are captured in source control somewhere so that you have a chance at effective change tracking
  • Have a continuous integration build for the shared database that runs some level of regression tests and then cascades out, so that all of the applications that touch that database are automatically updated and tested against the latest version of the shared database. I’m expecting some screaming when I recommend that in the office next week;-)
  • At the least, have some mechanism for standing up a local copy of the up to date database schema with any necessary baseline data on demand for isolated testing
  • Some way to know when I’m running or testing the dependent applications exactly what version of the database schema repository I’m currently using. Git submodules? Distribute the DB via Nuget? Finally do something useful with Docker, distribute the DB as a versioned Docker image, and brag about that to any developer we meet?

The key here is that I want automated builds constantly running as feedback mechanisms to know when and what database changes potentially break (or fix too!) one of our applications. Because of some bad experiences in the past, I’m hesitant to use cascading builds between separate repositories, but it’s definitely warranted in this case until we can get the big central database split up.

At the end of the day, I still think that the shared database architecture is a huge anti-pattern that most shops should try to avoid and I’d certainly like to see us start moving away from that model more and more.

 

Document Databases over Relational Databases

I’ve definitely put my money where my mouth is on this (RavenDb early on, and now Marten). In my mind, evolutionary or incremental software design is much easier with document databases for a couple reasons:

  • Far fewer changes in the application code result in database schema changes
  • It’s much less work to keep the application and database in sync because the storage just reflects the application model
  • Less work in the application code to transform the database storage to structures that are more appropriate for the business logic. I.e., relational databases really aren’t great when your domain model is logically hierarchical rather than flat
  • It’s a lot less work to tear down and set up known test input states in document databases. With a relational database you frequently end up having to deal with extraneous data you don’t really care about just to satisfy relational integrity concerns. Likewise, tearing down relational database state takes more care and thought than it does with a document database.

I would still opt to use a relational database for reporting or if there’s a lot of set based logic in your application. For simpler CRUD applications, I think you’re fine with just about any model and I don’t object to relational databases in those cases either.

It sounds trivial, but it does help tremendously if your relational database tables are configured to use cascading deletes when you’re trying to set a database into a known state for tests.

Team Organization

My strong preference is to have a completely self-contained team that has the ability and authority to make any and all changes to their application database, and that’s most definitely been valid in my experience. Having the database managed and owned separately from the development team is a frequent source of friction and definitely a major hit to your reversibility that forces you to do more potentially wrong, upfront design work. It’s much worse when that separate team does not share your priorities or simply works on a very different release schedule. I think it’s far better for a team to own their database — or at the very worst, have someone who is allowed to touch the database in the team room and team standups.

If I had full control over an organization, I would not have a separate database team. Keeping developers and database folks on separate teams forces your team to spend more time on inter-team coordination, takes away from the team’s flexibility in deciding what they can deliver, and almost inevitably causes a bottleneck constraint for projects. Even worse in my mind is when neither the developers nor the database team really understand how their work impacts the other team.

Even if we say that we have a matrix organization, I want the project teams to have primacy over functional teams. To go farther, I’d opt to make functional teams (developers, testers, DBA’s) be virtual teams solely for the purpose of skill acquisition, knowledge sharing, and career growth. My early work experience was being an engineer within large petrochemical project teams, and the project team dominant matrix organization worked a helluva lot better than it did at my next job in enterprise IT that focused more on functional teams.

As an architect now rather than a front line programmer, I constantly worry about not being able to feel the “pain” that my decisions and shared libraries cause developers because that pain is an important feedback mechanism to improve the usability of our shared infrastructure or application architecture. Likewise, I worry that having a separate database team creates a situation where they’re not very aware of the impact of their decisions on developers or vice versa. One of the very important lessons I was taught as an engineer was that it was very important to understand how other engineering disciplines work and what they needed so that we could work better with them.

Now though, I do work in a shop that has historically centralized the control of the database in a central database team. To mitigate the problems that naturally arise from this organizational model, we’re trying to have much more bilateral conversations with that team. If we can get away with this, I’d really like to see members of that team spend more time in the project team rooms. I’d also love it if we could steal a page from my original engineering job (Bechtel) and suggest some temporary rotations between the database and developer teams to better appreciate how the other half of that relationship works and what their needs are.

 

 

 

Succeeding with Automated Integration Tests

tl;dr This post is an attempt to codify my thoughts about how to succeed with end to end integration testing. A toned down version of this post is part of the Storyteller 3 documentation

About six months ago the development teams at my shop came together in kind of a town hall to talk about the current state of our automated integration testing approach. We have a pretty deep investment in test automation and I think we can claim some significant success, but we also have had some problems with test instability, brittleness, performance, and the time it takes to author new tests or debug existing tests that have failed.

Some of the problems have since been ameliorated by tightening up on our practices — but that still left quite a bit of technical friction and that’s where this post comes in. Since that meeting, I’ve been essentially rewriting our old Storyteller testing tool in an attempt to address many of the technical issues in our automated testing. As part of the rollout of the new Storyteller 3 to our ecosystem, I thought it was worth a post on how I think teams can be more successful at automated end to end testing.

Test Stability

I’ve worked in far too many environments and codebases where the automated tests were “flakey” or unreliable:

  • Teams that do all of their development against a single shared, development database such that the data setup is hard to control
  • Web applications with a lot of asynchronous behavior are notoriously hard to test and the tests can be flakey with timing issues — even with all the “wait for this condition on the page to be true” discipline in the world.
  • Distributed architectures can be difficult to test because you may need to control, coordinate, or observe multiple processes at one time.
  • Deployment issues or technologies that tend to hang on to file locks, tie up ports, or generally lock up resources that your automated tests need to use

To be effective, automated tests have to be reliable and repeatable. Otherwise, you’re either going to spend all your time trying to discern if a test failure is “real” or not, or you’re most likely going to completely ignore your automated tests altogether as you lose faith in them.

I think you have several strategies to try to make your automated, end to end tests more reliable:

  1. Favor white box testing over black box testing (more on this below)
  2. Closely related to #1, replace hard-to-control infrastructure dependencies with stub services, even in functional testing (see the sketch after this list). I know some folks absolutely hate this idea, but my shop is having a lot of success in using an IoC tool to swap out dependencies on external databases or web services that are completely out of our control.
  3. Isolate infrastructure to the test harness. For example, if your system accesses a relational database, use an isolated schema for the testing that is only used by the test harness. Shared databases can be one of the worst impediments to successful test automation. It’s both important to be able to set up known state in your tests and to not get “false” failures because some other process happened to alter the state of your system while the test is running. Did I mention that I think shared databases are a bad idea yet?*
  4. Completely control system state setup in your tests or whatever build automation you have to deploy the system in testing.
  5. Collapse a distributed application down to a single process for automated functional testing rather than try to run the test harness in a different process than the application. In our functional tests, we will run the test harness, an embedded web server, and even an embedded database in the same process. For distributed applications, we have been using additional .Net AppDomain’s to load related services and using some infrastructure in our OSS projects to coordinate the setup, teardown, and even activity in these services during testing time.
  6. As a last resort for a test that is vulnerable to timing issues and race conditions, allow the test runner to retry the test
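
As a sketch of what #2 can look like (the service names are invented, and I’m using today’s generic host registration style): define a canned stub for the dependency you can’t control, then override just that one registration when the test harness bootstraps the application.

using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public interface ICreditCheckService
{
    Task<bool> ApproveAsync(decimal amount);
}

// A canned stand-in for an external service that's out of our control
public class StubCreditCheckService : ICreditCheckService
{
    public Task<bool> ApproveAsync(decimal amount) => Task.FromResult(true);
}

public static class TestHost
{
    public static IHost Build() =>
        Host.CreateDefaultBuilder()
            .ConfigureServices(services =>
            {
                // Override only this registration; the rest of the
                // production wiring stays exactly as it is
                services.AddSingleton<ICreditCheckService, StubCreditCheckService>();
            })
            .Build();
}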

Failing all of those things, I definitely think that if a test is so unstable and unreliable that it renders your automated build useless, you should just delete that test. I think a reliable test suite with less coverage is more useful to a team than a more expansive test suite that is not reliable.

You Gotta Have Continuous Integration

This section isn’t meant as the kind of pound-on-the-table, Uncle Bob-style “you must do this or you’re incompetent” rant that causes the Rob Conerys of the world to have conniptions, but large scale automation testing simply does not work if the automated tests are not running regularly as the system continues to evolve.

Automated tests that are never or seldom executed can even be a burden on a development team that still tries to keep that test code up to date with architectural changes. Even worse, automated tests that are not constantly executed are not trustworthy because you no longer know if test failures are real or just a result of the application structure changing.

Assuming that your automated tests are legitimately detecting regression problems, you need to determine what recent change introduced the problem — and it’s far easier to do that if you have a smaller list of possible changes and those changes are still fresh in the developer’s mind. If you are only occasionally running those automated tests, diagnosing failing tests can be a lot like finding the proverbial needle in the haystack.

I strongly prefer to have all of the automated tests running as part of a team’s continuous integration (CI) strategy — even the heavier, slower end to end kind of tests. If the test suite gets too slow (we have a suite that’s currently taking 40+ minutes), I like the “fast tests, slow tests” strategy of keeping one main build that executes the quicker tests (usually just unit tests) to give the team reasonable confidence that things are okay. The slower tests would be executed in a cascading build triggered whenever the main build completes successfully. Ideally, you’d like to have all the automated tests running against every push to source control, but even running the slower tests suites in a nightly or weekly scheduled build is better than nothing.

Make the Tests Easy to Run Locally

I think the section title is self-explanatory, but I’ve gotten this very wrong in the past in my own work. Ideally, you would have a task in your build script (I still prefer Rake, but substitute MSBuild, Fake, Make, Gulp, NAnt, whatever you like) that completely sets up the system under test on your machine and runs whatever test harness you use. In a less perfect world a developer has to jump through hoops to find hidden dependencies and take several poorly described steps in order to run the automated tests. I think this issue is much less problematic than it was earlier in my career as we’ve adopted much more project build automation and moved to technologies that are easier to automate in deployment. I haven’t gotten to use container technologies like Docker myself yet, but I sure hope that those tools will make doing the environment setup for automating tests easier in the future.

Whitebox vs. Blackbox Testing

I strongly believe that teams should generally invest much more time and effort into whitebox tests than blackbox tests. Throughout my career, I have found that whitebox tests are frequently more effective in finding problems in your system – especially for functional testing – because they tend to be much more focused in scope and are usually much faster to execute than the corresponding black box test. White box tests can also be much easier to write because there’s simply far less technical stuff (databases, external web services, service buses, you name it) to configure or set up.

I do believe that there is value in having some blackbox tests, but I think that these blackbox tests should be focused on finding problems in technical integrations and infrastructure whereas the whitebox tests should be used to verify the desired functionality.

Especially at the beginning of my career, I frequently worked with software testers and developers who just did not believe that any test was truly useful unless the testing deployment was exactly the same as production. I think that attitude is inefficient. My philosophy is that you write automated tests to find and remove problems from your system, but not to prove that the system is perfect. Adopting that philosophy, favoring white box over black box testing makes much more sense.

Choose the Quickest, Useful Feedback Mechanism

Automating tests against a user interface has to be one of the most difficult and complex undertakings in all of software development. While teams have been successful with test automation using tools like WebDriver, I very strongly recommend that you do not test business logic and rules through your UI if you don’t have to. For that matter, try hard to test business logic without even involving the database when you can. What does this mean? For example:

  • Test complex logic by calling into a service layer instead of the UI. That’s a big issue for one of the teams I work with who really needs to replace a subsystem behind http json services without necessarily changing the user interface that consumes those services. Today the only integration testing involving that subsystem is done completely end to end against the full stack. We have plenty of unit test coverage on the internals of that subsystem, but I’m pretty certain that those unit tests are too coupled to the implementation to be useful as regression or characterization tests when that team tries to improve or replace that subsystem. I’m strongly recommending that that team write a new suite of tests against the gateway facade service to that subsystem for faster feedback than the end to end tests could ever possibly be.
  • Use Subcutaneous Tests even to test some UI behavior if your application architecture supports that
  • Make HTTP calls directly against the endpoints in a web application instead of trying to automate the browser when that’s a useful way to test out the backend (see the sketch after this list).
  • Consider testing user interface behavior with tightly controlled stub services instead of the real backend
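
Here’s a minimal sketch of that third bullet. The route and assertions are hypothetical, but the point is that a plain HttpClient call is a far quicker and more stable feedback loop than driving a browser:

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Xunit;

public class ReportEndpointTests
{
    private static readonly HttpClient Client = new HttpClient
    {
        BaseAddress = new Uri("http://localhost:5000")
    };

    [Fact]
    public async Task can_fetch_a_report_by_id()
    {
        // Hit the backend endpoint directly, no browser automation involved
        var response = await Client.GetAsync("/api/reports/42");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);

        var json = await response.Content.ReadAsStringAsync();
        Assert.Contains("42", json);
    }
}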

The general rule we encourage in test automation is to use the “quickest feedback cycle that tells you something useful about your code” — and user interface testing can easily be much slower and more brittle than other types of automated testing. Remember too that we’re trying to find problems in our system with our tests instead of trying to prove that the system is perfect.

Setting up State in Automated Tests

I wrote a lot about this topic a couple years ago in My Opinions on Data Setup for Functional Tests, and I don’t have anything new to say since then;) To sum it up:

  • Use self-contained tests that set up all the state that a test needs.
  • Be very cautious using shared test data
  • Use the application services to set up state rather than some kind of “shadow data access” layer
  • Don’t couple test data setup to implementation details. I.e., I’d really rather not see gobs of SQL statements in my automated test code
  • Try to make the test data setup declarative and as terse as possible (see the sketch just after this list)
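For illustration, here's a tiny, self-contained example of the kind of test I mean. The OrderService here is just a hypothetical in-memory stand-in for your real application services, but the shape of the test is the point: it declares exactly the state it needs, builds that state through the application's own services, and never touches SQL:

    using System.Collections.Generic;
    using System.Linq;
    using Xunit;

    // Hypothetical stand-in for a real application service
    public class OrderService
    {
        private readonly Dictionary<int, decimal> _orders = new Dictionary<int, decimal>();
        private int _nextId = 1;

        public int CreateOrder(params decimal[] lineAmounts)
        {
            var id = _nextId++;
            _orders[id] = lineAmounts.Sum();
            return id;
        }

        public decimal DiscountRateFor(int orderId)
        {
            return _orders[orderId] > 100m ? 0.1m : 0m;
        }
    }

    public class OrderPricingTests
    {
        [Fact]
        public void discount_is_applied_to_orders_over_100_dollars()
        {
            // Self-contained, declarative setup: the test states exactly the facts
            // it depends on and builds them through the service itself, not with
            // SQL scripts or shared seed data
            var service = new OrderService();
            var orderId = service.CreateOrder(40m, 40m, 40m);

            Assert.Equal(0.1m, service.DiscountRateFor(orderId));
        }
    }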

Test Automation has to be a factor in Architecture

I once had an interview with a company that makes development tools. I knew going in that their product had some serious deficiencies in its automated testing strategy. When I told my interviewer that I was confident that I could help that company make their automated testing support much better, I was told that testing was just a "process issue." Last I knew, their product still has weak support for automating tests against the systems built with it.

Automated testing is not merely a "process issue," but should be a first class citizen in selecting technologies and shaping your system architecture. I feel like my shop is far above average for our test automation, and that is in no small part because we have purposely architected our applications in such a way as to make functional, automated testing easier. The work I described in the sections above to collapse a distributed system into one process for easier testing, to use a compositional architecture effectively composed by an IoC tool, and to isolate business rules from the database in our systems has been vital to what success we have had with automated testing. In other places we have purposely added logging infrastructure or hooks in our application code for no other reason than to make it easier for test automation infrastructure to observe or control the application.
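As a purely hypothetical example of that kind of hook (none of these type names come from a real library), this is roughly the sort of listener we might register so that production code can record what it's doing and the test harness can observe it without scraping logs or the UI:

    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical hook added purely to support test automation: production
    // code records noteworthy application events here, and the test harness
    // swaps in the recording version so tests can see what actually happened
    public interface IApplicationListener
    {
        void Record(string subject, string message);
    }

    // The version wired up in production does nothing
    public class NulloListener : IApplicationListener
    {
        public void Record(string subject, string message) { }
    }

    // The version wired up by the test harness remembers everything
    public class RecordingListener : IApplicationListener
    {
        private readonly ConcurrentQueue<KeyValuePair<string, string>> _records =
            new ConcurrentQueue<KeyValuePair<string, string>>();

        public void Record(string subject, string message)
        {
            _records.Enqueue(new KeyValuePair<string, string>(subject, message));
        }

        public IEnumerable<string> MessagesFor(string subject)
        {
            return _records.Where(x => x.Key == subject).Select(x => x.Value);
        }
    }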

Other Stuff for later…

I don't think that in 10 years of blogging I've ever finished a blog series, but I might get around to blogging about how we coordinate multiple services in distributed messaging architectures during automated tests or how we're integrating much more diagnostics into our automated functional tests to spot and prevent performance problems from creeping into the application.

* There are some strategies to use in testing if you absolutely have no other choice in using a shared database, but I’m not a fan. The one approach that I want to pursue in the future is utilizing multi-tenancy data access designs to create a fake tenant on each test run to keep the data isolated for the test even if the damn database is shared. I’d still rather smack the DBA types around until they get their project automation act together so we could all get isolated databases.
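Just to make that idea a little more concrete, here's a rough sketch of how the fake-tenant-per-test-run approach could look. The ITenantContext abstraction and the repository below are hypothetical, but the mechanics are simple: each test run owns a brand new tenant id, and every query in the system under test is scoped to that tenant:

    using System;

    // Hypothetical abstraction for "which tenant am I?"
    public interface ITenantContext
    {
        Guid TenantId { get; }
    }

    public class TestRunTenant : ITenantContext
    {
        private readonly Guid _tenantId = Guid.NewGuid();

        // One fresh, fake tenant per test run
        public Guid TenantId { get { return _tenantId; } }
    }

    public class CustomerRepository
    {
        private readonly ITenantContext _tenant;

        public CustomerRepository(ITenantContext tenant)
        {
            _tenant = tenant;
        }

        public string LoadCustomersSql()
        {
            // Every query is filtered by the current tenant, so data created by
            // other test runs in the shared database is invisible to this one
            return "select * from customers where tenant_id = '" + _tenant.TenantId + "'";
        }
    }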

Long Lived Codebases: The Challenges

I did a talk at CodeMash 2015 called "Lessons Learned from a Long Lived Codebase" that I thought went very well and I promised to turn into a series of blog posts. I'm not exactly sure how many posts it's going to be yet, but I'm going to try to get them all out by the end of January. This is the first of maybe 4-5 posts on my experience evolving and supporting the StructureMap codebase over the past 11-12 years.

Some Background

In 2002 the big corporate IT shop I was working in underwent a massive “Dilbert-esque” reorganization that effectively trapped me in a non-coding architect role that I hated. I could claim 3-4 years of development experience and had some significant technical successes under my belt in that short time, but I’d mostly worked with the old COM-based Windows DNA platform (VB6, MTS, ADO, MSXML, ASP) and Oracle technologies right as J2EE and the forthcoming .Net framework seemed certain to dominate enterprise software development for the foreseeable future.

I was afraid that I was in danger of being made obsolete in my new role. I looked for some kind of project I could do out in the open that I could use to both level up on the newer technologies and prove to potential employers that “yes, I can code.” Being a pretty heavy duty relational database kinda guy back then, I decided that I was going to build the greatest ORM tool the world had ever seen on the new .Net platform. I was going to call it “StructureMap” to reflect its purpose of mapping the database to object structures. I read white papers, doodled UML diagrams like crazy, and finally started writing some code — but got bogged down trying to write an over-engineered configuration and modularity layer that would effectively allow you to configure object graphs in Xml. No matter, I managed to land a job with ThoughtWorks (TW) and off I went to be a real developer again.

During the short time that I worked at ThoughtWorks, Martin Fowler published his paper about Dependency Injection and Inversion of Control Containers and other folks at the company built an IoC container in Java called PicoContainer that was getting some buzz on internal message boards. I came to TW in hopes of being one of the cool kids too, so I dusted off the configuration code from my abandoned ORM tool and transformed it into an IoC library for .Net during my weekly flights between Austin and Chicago. StructureMap was put into a production application in early 2004 and publicly released on SourceForge in June of 2004 as the very first production ready IoC tool on the .Net platform (yes, StructureMap is actually older than Windsor or Spring.Net even though they were much better known for many years).

Flash forward to today and there's something like two dozen OSS IoC containers for .Net (all claiming to be a special snowflake that's easier to use than the others while being mostly about the same as the others), at least three (Unity, MEF, and the original ObjectBuilder) from Microsoft itself, with yet another brand new one coming in the vNext platform. I'm still working with and on StructureMap all these years later after the very substantial improvements for 3.0 last year — but at this point very little remains unchanged from the early code. I'm not going to waste your time trying to sell you on StructureMap, especially since I'm going to spend so much time talking about the mistakes I've made during its development. This series is about the journey, not the tool itself.

What’s Changed around Me

Being 11 years old and counting, StructureMap has gone through a lot of churn as the technologies have changed and approaches have gone in and out of favor. If you maintain a big codebase over time, you’re very likely going to have to migrate it to newer versions of your dependencies, use completely different dependencies, or you’ll want to take advantage of newer programming language features. In no particular order:

  • StructureMap was originally written against .Net 1.1, but at the time of this post targets .Net 4.0 with the PCL compliance profile.
    • Newer elements of the .Net runtime like Task and Lazy<T> have simplified the code internals.
    • Lambdas as introduced in .Net 3.5 made a tremendous difference in the coding internals and had a big impact on the usage of the tool itself.
    • As I'll discuss in a later post, the introduction of generics support into StructureMap 2.0 was like the world's brightest spotlight shining on all the structural mistakes I made in the initial code structure of early StructureMap, but I'll still claim that the introduction of generic types has made for huge improvements in StructureMap's usability — and it's also one of the main reasons why I think that the IoC tools in .Net are generally more usable than those in Java or Scala.
  • The build automation was originally done with NAnt, NUnit, and NMock. As my tolerance for Xml and coding ceremony decreased, StructureMap moved to using Rake and RhinoMocks. For various reasons, I’m looking to change the automation tooling yet again to modernize the StructureMap development experience.
  • StructureMap was originally hosted on SourceForge with Subversion source control. Releases were done in the byzantine fashion that SourceForge required way back then. Today, StructureMap is hosted on GitHub and distributed as Nuget packages. Nuget packages are generated as an artifact of each continuous integration build and manually promoted to Nuget.org whenever it’s time to do a public release. Nuget is an obvious improvement in distribution over manually created zip files. It is my opinion that GitHub is the single best thing to ever happen for Open Source Software development. StructureMap has received vastly more community contribution since moving to GitHub. I’m on record as being critical of the .Net community for being too passive and not being participatory in regards to .Net community tooling. I’m pleasantly surprised with how much help I’ve received from StructureMap users since the 3.0 release last year to fix bugs and fill in usability gaps.
  • The usage patterns and the architectures that folks build using StructureMap have changed. In a later post I'll do a deep dive on the evolution of the nested container feature.
  • Developer aesthetics and preferences have changed too; again, more on that in a later post.

 

Other People

Let’s face it, you and I are perfectly fine, but the “other” developers are the problem. In the particular case of a widely used library, you frequently find out that other developers use your tool in ways that you did not expect or anticipate. Frameworks that abstract the IoC container with some sort of adapter library have been some of the worst offenders in this regard.

The feedback I’ve gotten from user problems has led to many changes over the years:

  • Entirely new features. The interception capabilities, for example, were originally added to support AOP scenarios that I don't generally use myself.
  • Changing the API to improve usability when the verbiage turns out to be wrong
  • Lots and lots of work tweaking the internals of StructureMap as users describe architectural strategies that I would never think of, but do turn out to be useful — usually, but not always, involving open generic types in some fashion
  • New conventions and policies to remove repetitive code in the tool usage
  • Additional diagnostics to explain the outcome of the new conventions and policies from above
  • Adding more defensive programming checks to find potential problems faster. My attitude toward defensive programming is much more positive after supporting StructureMap over the years. This probably applies more to configuration-intensive tools like, say, an IoC container.
  • A lot of work to improve exception messages (more on this later maybe)

 

One thing that still needs to happen is for me to publish and maintain best practice recommendations for StructureMap. I have been upset with the developers of a popular .Net OSS tool who did, in my opinion, a wretched job of integrating StructureMap in their adapter library (to the point where I advise users of that framework to adopt a different IoC tool). Until I actually manage to publish the best practice advice that would avoid the very problems they caused in their StructureMap usage, those problems are probably on me. Trying to wean users off of using StructureMap as a static service locator and being a little too extreme in applying a certain hexagonal architecture style have been constant problems on the user group over the years.

I’m not sure why this is so, but I’ve learned over the years that the more vitriolic a user is being toward you online when they’re having trouble with your tool, the more likely it is that they themselves are just doing something very stupid that’s not necessarily a poor reflection on your tool. If you ever publish an OSS tool, keep that in mind before you make the mistake of opening a column in your Twitter client just to spot references to your project or a keyword search in StackOverflow. I’ve also learned that users who have uncovered very real problems in StructureMap can be reasonable and even helpful if you engage them as collaborators in fixing the issue instead of being defensive. As I said earlier about the introduction of GitHub, I have routinely gotten much more assistance from StructureMap users in reproducing, diagnosing, and fixing problems in StructureMap over the past year than I ever had before.

 

Pull, not Push for New Features

In early 2008 I was preparing the grand StructureMap 2.5 release as the purported “Python 3000” release that was going to fix all the usability and performance issues in StructureMap once and for all time (Jimmy Bogard dubbed it the Duke Nukem Forever release too, but the 3.0 release took even longer;)). At the same time, Microsoft was gearing up for not one, but two new IoC tools (Unity from P&P and MEF from a different team). I swore that I wasn’t going down without a fight as Microsoft stomped all over my OSS tool, so I kicked into high gear and started stuffing StructureMap with new features and usability improvements. Those new things roughly fell into two piles:

  • Features or usability improvements I made based on my experience with using StructureMap on real projects that I knew would remove some friction from day to day usage. These features introduced in the 2.5 release have largely survived until today and I’d declare that many of them were successful
  • Things that I just thought would be cool, but which I had no immediate usage in my own work. You’ve already called it, much of this work was unsuccessful and later removed because it was either in the way, confusing to use, easily done in other ways, or most especially, a pain in the neck for me to support online because it wasn’t well thought out in the first place.

You have to understand that any feature you introduce is effectively inventory you have to support, document, and keep from breaking in future work. To reaffirm one of the things that the Lean Programming people have told us for years, it’s better to “pull” new features into your tool based on a demonstrated need and usage than it is to “push” a newly conceived feature in the hope that someone might find useful later.

 

Yet to come…

I tend to struggle to complete these kinds of blog series, but I do have the presentation and all of the code samples, so maybe I'll pull it off this time. I think the candidates for the follow-up posts are something like:

  • A short discussion on backward compatibility
  • My documentation travails and how I’m trying to fix that
  • “Crimes against Computer Science” — the story of the nested container feature, how it went badly at first, and what I learned while fixing it in 3.0
  • “The Great Refactoring of Aught Eight”
  • API Usage Now and Then
  • Diagnostics and Exceptions

How We Do Strong Typed Configuration

TL;DR: I’ve used a form of “strong typed configuration” for the past 5-6 years that I think has some big advantages over the traditional approach to configuration in .Net. I still like this approach and I’m seeing something similar showing up in ASP.Net vNext as well. I’m NOT trying to sell you on any particular tool here, just the technique and concepts.

What’s Wrong with Traditional .Net Configuration?

About a decade ago I came late into a project where we were retrofitting some sort of new security framework onto a very large .Net codebase.* The codebase had no significant automated test coverage, so before we tried to monkey around with its infrastructure, I volunteered to create a set of characterization tests**. My first task was to stand up a big chunk of the application architecture on my own box with a local sql server database. My first and biggest challenge was dealing with all the configuration needs of that particular architecture (I’m wanting to say it was >75 different items in the appSettings collection).

As I recall, the specific problems with configuration were:

  1. It was difficult to easily know what configuration items any subset of the code (assemblies, classes, or subsystems) needed in order to run without painstakingly tracing the code
  2. It was somewhat difficult to understand how items in very large configuration files were consumed by the running code. Better naming conventions probably would have helped.
  3. There was no way to define configuration items on the fly in code. The only option was to change entries in the giant web.config Xml file because the code that used the old System.Configuration namespace “pulled” its configuration items when it needed configuration. What I wanted to do was to “push” configuration to the code to change database connections or file paths in test scenarios or really any time I just needed to repurpose the code. In a more general sense, I called this the Pull, Don’t Push rule in my CodeBetter days.

From that moment on, I've been a big advocate of approaches that make it much easier both to trace configuration items to the code that consumes them and to make application code somehow declare what configuration it needs.
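For anybody who hasn't had the pleasure, this is the old "pull" style I'm talking about. The class quietly reaches into the config file whenever it wants a value, so nothing in its public surface tells you what it needs and there's no way to push different values in from a test (the class and key name below are made up for illustration, but ConfigurationManager is the real System.Configuration API):

    using System.Configuration;

    // The "pull" style: a hidden dependency on web.config / app.config that
    // you can only discover by reading the method bodies, and that you can
    // only change by editing the giant Xml file
    public class SecurityGateway
    {
        public string LookupServiceUrl()
        {
            return ConfigurationManager.AppSettings["Security.LookupServiceUrl"];
        }
    }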

 

Strong Typed Configuration with “Settings” Classes 

For the past 5-6 years I've used an approach where configuration items like file paths, switches, URLs, or connection information are modeled as simple POCO classes suffixed with "Settings." As an example, in the FubuPersistence package we use to integrate RavenDb with FubuMVC, we have a simple class called RavenDbSettings that holds the basic information you would need to connect to a RavenDb database, partially shown below:

    public class RavenDbSettings
    {
        public string DataDirectory { get; set; }
        public bool RunInMemory { get; set; }
        public string Url { get; set; }
        public bool UseEmbeddedHttpServer { get; set; }


        [ConnectionString]
        public string ConnectionString { get; set; }

        // And some methods to create in-memory datastores
        // or connect to the specified external datastore
    }

Setting aside how that object is built up for the moment, the DocumentStoreBuilder class that needs the configuration above just gets this object through simple constructor injection, like so: new DocumentStoreBuilder(new RavenDbSettings{}, ...). (There's a small sketch of the consuming side after the list below.) The advantages of this approach for me are:

  • The consumer of the configuration information is no longer coupled to how that information is resolved or stored in any way. As long as it's given the RavenDbSettings object in its constructor, DocumentStoreBuilder can happily go about its business.
  • I’m a big fan of using constructor injection as a way to create traceability and insight into the dependencies of a class, and injecting the configuration makes the dependency on configuration from a class much more transparent than the older “pull” forms of configuration.
  • I think it’s easier to trace back and forth between the configuration items and the code that depends on that configuration. I also feel like it makes the code “declare” what configuration items it needs through the signature of the Settings classes
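Just to show the consuming side, here's a stripped-down sketch. This is not the real DocumentStoreBuilder code, only an illustration of how the Settings object travels in through the constructor like any other dependency:

    // Not the real DocumentStoreBuilder, just a sketch of the consuming side
    public class DocumentStoreBuilder
    {
        private readonly RavenDbSettings _settings;

        public DocumentStoreBuilder(RavenDbSettings settings)
        {
            _settings = settings;
        }

        public bool UsesEmbeddedDatabase()
        {
            // The builder only knows about the strong typed Settings object, never
            // about web.config, json files, or wherever the values actually came from
            return _settings.RunInMemory || string.IsNullOrEmpty(_settings.Url);
        }
    }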

 

Serving up the Settings with an IoC Container

I’m sure you’ve already guessed that we just use StructureMap to inject the Settings objects into constructor functions. I know that many of you are going to have a visceral reaction to the usage of an IoC container, and while I actually do respect that opinion, it’s worked out very well for us in practice. Using StructureMap (I think most of the other IoC containers could do this as well) we get a couple big benefits in regards to default configuration and the ability to swap out configuration at runtime (mostly for testing).

Since the Settings classes are generally concrete classes with no-argument constructors, StructureMap can happily build them out for you even if StructureMap has no explicit registration for that type. That means that you can forgo any external configuration or StructureMap configuration and your code can still work as long as the default values of the Settings class are useful. To use the example of RavenDbSettings from the previous section, calling new RavenDbSettings() creates a configuration that will connect to a new embedded RavenDb database that stores its data to the file system in a folder called /data parallel to the project directory (you can see the code here).
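In practice that means code like the following just works. This is only a sketch, but both calls are plain StructureMap usage, and there's no registration for RavenDbSettings anywhere:

    using StructureMap;

    public static class Program
    {
        public static void Main()
        {
            var container = new Container();

            // RavenDbSettings is concrete with a no-argument constructor, so
            // StructureMap happily builds it with its default property values
            // even though it was never explicitly registered
            var settings = container.GetInstance<RavenDbSettings>();

            // Anything that takes RavenDbSettings in its constructor resolves
            // the same way and picks up those defaults
            var builder = container.GetInstance<DocumentStoreBuilder>();
        }
    }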

The result of the design above is that a FubuMVC/FubuTransportation application was completely connected to a working RavenDb database by simply installing the FubuMVC.RavenDb nuget with zero additional configuration.

I demoed that several times at conferences last year and the audiences seemed to be very unimpressed and disinterested. Either that's not nearly as impressive as I thought it was, it's too much magic, I'm not a good presenter, or they don't remember what a PITA it used to be just to install and configure everything you needed to get a blank development database going. I still thought it was cool.

The other huge advantage to using an IoC container to deliver all the configuration to consumers is how easy that makes it to swap out configuration at runtime. Again going to the RavenDbSettings example, we can build out our entire application and swap out the RavenDb connection at will without digging into Xml or Json files of any kind. The main usage has been in testing to get a clean database per test when we do end to end testing, but it’s also been helpful in other ways.

 

So where does the Settings data come from?

Alright, so the actual data has to come from somewhere outside the codebase at some point (like every developer of my age I have a couple good war stories from development teams hard coding database connection strings directly into compiled code). We generally put the raw data into the normal appSettings key/value pairs with the naming convention "[SettingsClassName].[PropertyName]." The first time a Settings object is needed within StructureMap, we read the raw key/value data from the configuration file and use FubuCore's model binding support to build the object, doing all the type coercion necessary along the way. An early version of this approach was described by my former colleague Josh Flanagan way back in 2009. The actual mechanics are in the FubuMVC.StructureMap code in the main fubumvc repository.
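This isn't the actual FubuCore model binding code, but stripped down to its essence, the convention works something like this sketch: read the raw key/value pairs, match them to properties by the "[SettingsClassName].[PropertyName]" name, and coerce each string value to the property's type:

    using System.Collections.Generic;
    using System.ComponentModel;

    // A stripped-down illustration of the naming convention, not the real
    // FubuCore model binding
    public static class SettingsBinder
    {
        public static T Bind<T>(IDictionary<string, string> rawData) where T : new()
        {
            var settings = new T();
            var prefix = typeof(T).Name + ".";

            foreach (var property in typeof(T).GetProperties())
            {
                string raw;
                if (rawData.TryGetValue(prefix + property.Name, out raw))
                {
                    // Coerce the raw string to the property type (bool, int, etc.)
                    var converter = TypeDescriptor.GetConverter(property.PropertyType);
                    property.SetValue(settings, converter.ConvertFromInvariantString(raw), null);
                }
            }

            return settings;
        }
    }

So a key named "RavenDbSettings.RunInMemory" with a value of "true" ends up on the RunInMemory property of the RavenDbSettings object.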

Some other points:

  • The data source for the model binding in this configuration setup was pluggable, so you could use external files or anything else that could be exposed as key/value pairs. We generally just use the appSettings support now, but on a previous project we were able to centralize the configuration files to a common location for several related processes
  • The model binding is flexible enough to support deep objects, enumerable properties, and just about anything you would need to use in your Settings objects
  • We also supported a model where you could combine key/value information from multiple sources, but layer the data in precedence to enable overrides to the basic configuration. My goal with that support was to keep customer or environment specific configuration down to small override files instead of having to duplicate all the boilerplate configuration for each environment. In this setup, when the Settings objects were bound to the raw data, profile or environment specific information just got a higher precedence than the default configuration (a small sketch of that layering idea follows this list).
  • As of FubuMVC 2.0, if you’re using the StructureMap integration, you no longer have to do anything to setup this settings provider infrastructure. If StructureMap encounters an unknown dependency that’s a concrete type suffixed by “Settings,” it will try to resolve it from the model binding via a custom StructureMap policy.
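The layering itself doesn't need to be anything fancier than this kind of sketch (again, not the FubuMVC code): apply the key/value sources in precedence order so that the more specific profile or environment source simply overwrites the defaults where keys collide, then feed the combined data into the same model binding shown earlier:

    using System.Collections.Generic;

    // Illustration only: later, more specific sources win over the defaults
    public static class SettingsData
    {
        public static IDictionary<string, string> Layer(params IDictionary<string, string>[] sources)
        {
            var combined = new Dictionary<string, string>();

            // Sources are applied in order, so an environment or profile source
            // listed after the default source overwrites any shared keys
            foreach (var source in sources)
            {
                foreach (var pair in source)
                {
                    combined[pair.Key] = pair.Value;
                }
            }

            return combined;
        }
    }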

 

Programmatic Configuration in FubuMVC 

We also used the "Settings" configuration idea to programmatically specify feature configuration inside the application's own bootstrapping by applying alterations to a Settings object, as in this code from one of our active projects:

// Sets up a custom authorization rule on any diagnostic 
// pages
AlterSettings<DiagnosticsSettings>(x => {
	x.AuthorizationRights.Add(new AuthorizationCheckPolicy<InternalUserPolicy>());
});

// Directs FubuMVC to only look for client assets
// in the /public folder
AlterSettings<AssetSettings>(x =>
{
	x.Mode = SearchMode.PublicFolderOnly;
});

The DiagnosticsSettings and AssetSettings objects used above are injected into the application container as the application bootstraps, but only after the alterations shown above are applied. Behind the scenes, FubuMVC will first resolve the objects using any data found in the appSettings key/value pairs, then apply the alteration overrides in the code above. Using the alteration lambdas instead of just injecting the Settings objects directly allowed us to also embed settings overrides in external plugins, but ensure that the main application's overrides always "win" in the case of conflicts.
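Behind the AlterSettings<T>() syntax, the mechanism doesn't have to be any more complicated than something like this sketch (not the actual FubuMVC internals): queue up the alteration lambdas during bootstrapping and apply them, in order, to the Settings object after it has been bound from the raw configuration data, so the application's alterations, registered last, win over any plugin's:

    using System;
    using System.Collections.Generic;

    // A sketch of the alteration mechanism, not the real FubuMVC internals
    public class SettingsAlterations<T>
    {
        private readonly List<Action<T>> _alterations = new List<Action<T>>();

        public void AlterSettings(Action<T> alteration)
        {
            _alterations.Add(alteration);
        }

        public T ApplyTo(T settings)
        {
            // Applied in registration order, so the last alteration registered
            // (the main application's) overwrites anything a plugin set earlier
            foreach (var alteration in _alterations)
            {
                alteration(settings);
            }

            return settings;
        }
    }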

I’m still happy with how this has turned out in real usage and I’ve since noticed that ASP.Net vNext uses a remarkably similar mechanism to configure options in their IApplicationBuilder scheme (think SetupOptions<MvcOptions>(Action<MvcOptions>)). I’m interested to see if the ASP.Net team will try to exploit that capability to provide much better modularity than the older ASP.Net MVC & Web API frameworks.

 

 

Other Links

 

* That short project probably had one of the strongest teams I've ever been a part of in terms of talented individuals, but it also spawned several of my favorite development horror stories (massive stored procedures, non-coding architects, the centralized architect team anti-pattern, harmful coding standards, you name it). In my career I've strangely seen little correlation between the technical strength of the development team (beyond basic competence anyway) and the resulting success of the project. Environment, the support or lack thereof from the business, and the simple wisdom of doing the project in the first place seem to be much more important.

** As an aside, that effort to create characterization tests as a crude regression test suite did pay off, and that test suite did find some regression errors once we started making the bigger changes. I think Michael Feathers' playbook for legacy systems, where I got the inspiration for those characterization tests, is still very relevant some 10+ years later.

Building an EventStore with User Defined Projections on top of Postgresql and Node.js

EDIT 3/28/2016: Since this blog post still gets plenty of reads, here’s an update. The work described here never got used in a real project (#sadtrombone), but fear not, the projection support is going to live again in a new project called Marten that seeks to turn Postgresql into a very functional Document Db and Event Store with projections.

I did an internal talk on the tooling and concepts in this post at our Salt Lake City office a couple months ago. The recording, for what it’s worth, is here. I’m assuming that you’re at least somewhat familiar with the concepts of Event Sourcing and CQRS, but if you’re not, there are links to descriptive explanations of these concepts in the body of the post.

For most of the 2000’s my goto strategy for application persistence was to use some sort of object relational mapping to persist and read the object structures that I wanted to work with in my code. Sometimes I used hand rolled code to do the mapping, and other times my teams used NHibernate. In the past couple years I’ve been on projects that used the RavenDb document database with mixed success. I’ve also worked on a couple codebases that used an event sourcing strategy to persist meaningful business events, sometimes with RavenDb as the underlying storage engine and another project that uses an older version of NEventStore with Sql Server as the storage mechanism.

For various reasons, we've chosen to use a Node.js based stack to rewrite an old WPF application that is a suitable candidate for event sourcing on the backend (Corey Kaylor explained his take on this decision in a blog post). Since we already wanted to replace Sql Server (and probably RavenDb) with Postgresql in the long run, at Corey's suggestion I have been working on and off at leveraging Postgresql to create a new event store suitable for Node.js development that supports user-defined projections. Lacking all originality, I'm calling this new library "pg-events." You can find pg-events hosted under my GitHub account (my very first foray back into OSS post-FubuMVC).

 

Feature Set

  • Support the basic event sourcing pattern by appending the raw business events as JSON to the event store
  • Track events by a “stream” of related events that probably relates directly to some kind of business concept or workflow
  • Support user-defined projections of the raw event data to create “read side” views for clients
  • Support aggregated views of a stream (really just another projection). Use a basic snapshotting strategy of the aggregate state for efficiency
  • Build time tooling to initialize a postgresql database with the custom schema objects and import javascript libraries to postgresql
  • A crude, partial implementation of CommonJS that runs within postgresql

 

Conceptual Architecture

The first thing to know is that we're making a very large bet on the portability of Javascript code and the ability to run at least a subset of this new event store code hosted in Postgresql, in Node.js, embedded in other programming environments, or even potentially in a browser. The user-defined projections could potentially be executed in any of the pieces below, and we think that flexibility will pay off down the road for both performance and scalability tuning.

So far, I think the end state is going to consist of these four pieces:

 

pg-events

  1. Postgresql Database
    1. Custom schema objects largely based on Greg Young’s Building an Event Store paper to store events and stream metadata.
    2. Tables to persist projected views. Most projection views will be persisted to separate tables instead of one giant “pge_views” table for better query performance
    3. Stored procedures to update and query data in the event storage tables, mostly using postgresql’s Javascript support
    4. Executes and updates aggregate snapshots and synchronous projections. More on that in the following section.
  2. A Node.js Client
    1. Exposes methods to append events to the store
    2. Exposes methods to query for projected views, aggregate snapshots, and raw event stream data
    3. See https://github.com/jeremydmiller/pg-events/wiki/Client for more information
  3. Admin CLI Tool
    1. Builds the necessary schema objects into a Postgresql database
    2. Loads the user defined projections and other pg-events libraries into the database
    3. Can reset the event storage and projected view tables to an empty state for testing
    4. Eventually, this tool will also support “recapitulation” to rebuild projection data from the raw events when the definition of a projection changes
  4. Background Projection Runner
    1. Executes and updates projected views in a background process. This is my very next coding adventure. I’m going to build it out first with Node.js, then try my hand at implementing it again with a standalone Golang executable that uses an embedded V8 engine to execute the projections. Expect my twitter feed to be entertaining when I’m able to start that work. I’ll blog about this later when I know what it’s going to actually look like;)

 

 

User Defined Projections 

We looked at EventStore at first and I definitely liked their first class support for user defined projections. Our implementation of projections is very obviously influenced by EventStore’s.

I think that the event sourcing efforts I’ve been a part of have been successful overall, but “projecting” the raw event stream into a persisted read side or view model has been challenging. For pg-events, we’re expressing the projections with simple transformation functions that will take in the initial state and the raw event data and simply return the new state (it’s a logical fold left operation for the projections that work across multiple events).

For a sample event sourcing domain, I’ve been using the idea of a quest from the way too many fantasy books I’ve read over my lifetime. During a quest, our heroes might record events like “QuestStarted”, “MembersJoined”, “MembersDeparted”, or “TownReached.” To know or understand the exact composition of a quest party at any time, we need to replay some of the change events (Gandalf stayed behind to fight the Balrog, Boromir was killed, Frodo and Sam ran off, Gollum joined up, etc.) for the quest.

Say we write a projection for a new view across the events in a single quest called “Party” just to understand the membership. From the unit tests, that projection looks like:

require("../../lib/projections")
	.projectStream({
		name: 'Party',
		stream: 'Quest', 
		mode: 'sync',

		$init: function(){
			return {
				active: true,
				traveled: 0,
				location: null,
				members: []
			}
		},

		QuestStarted: function(state, evt){
			state.active = true;
			state.location = evt.location;
			state.members = evt.members.slice(0);

			state.members.sort();
		},

		TownReached: function(state, evt){
			state.location = evt.location;
			state.traveled += evt.traveled;
		},

		EndOfDay: function(state, evt){
			state.traveled += evt.traveled;
		},

		QuestEnded: function(state, evt){
			state.active = false;
			state.location = evt.location;
		},

		MembersJoined: function(state, evt){
			state.members = state.members.concat(evt.members);
			state.members.sort();
		},

		MembersDeparted: function(state, evt){
			state.location = evt.location;

			for (var i = 0; i < evt.members.length; i++){
				var index = state.members.indexOf(evt.members[i]);
				state.members.splice(index, 1);
			}

			state.members.sort();
		}


	});

You’ll notice that there’s a field called “mode” with a value of “sync.” Using the portability of Javascript, we’re planning for these modes:

  1. sync – A ‘sync’ projection will be executed synchronously inside postgresql within the same transaction as the event capture
  2. async – In progress. An ‘async’ projection will be calculated in a background process instead of at event capture time (Eventual Consistency).
  3. live – Forthcoming. These projections will only be calculated upon demand. I’m not yet sure if we’ll do the actual projection transformations within the database or the Node.js client. I guess we could allow two different “live” modes if there’s value in doing that.

So, eventual consistency killing you in your current event sourcing efforts because you hit errors by querying off of stale data? Opt for synchronous projections. Have lots of writes, but relatively few reads? Use asynchronous or even live projections that are only calculated on demand. Have lots of reads but very few writes? I think I would again opt for synchronous projections.

I worked on a system a couple years ago in a failed startup that ran projections in a browser to do historical point in time simulations. I don’t see any reason why we couldn’t do something similar in pg-events if that is ever valuable.

Believe it or not, I have a decent start on documenting the projection support at https://github.com/jeremydmiller/pg-events/wiki/Projections.

 

Why Postgresql? 

Postgresql is the people pleaser of database engines. Want all the normal RDBMS capabilities? Would you be more productive using Postgresql as a document database? Want to write stored procedures with a language that closely resembles Oracle’s PL/SQL (no, a thousand times no, never again)?  Would you even want to use Javascript inside the database itself? Regardless of how you answered any of those questions, Postgresql is trying really hard to be what you want. In our case, I like that we can use postgresql as an event store, a document database for things that don’t fit into the event sourcing model, and a classic RDBMS if that’s what we want in some circumstances.

Mostly though, we like that Postgresql has a proven track record and we suspect that the DevOps support tools will be more effective than we’ve experienced with other OSS database tools.

Of course, the only reason why pg-events is viable in the first place is that postgresql has outstanding JSON support and the ability to author stored procedures with Javascript using Google’s V8 engine. With our project timeline, it’s also safe to assume that Postgresql 9.4 with its significant improvements to the JSON storage will be available before we go live to production.

 

Why CQRS isn’t crazy

I'll freely admit that the first time I saw Greg Young talking about the Command Query Responsibility Segregation (CQRS) style of architecture in 2008 I thought it was nuts. Specifically, I was afraid that doing the transformations between the "write side" model and the "read side" model consumed by the clients would lead to far too much repetitive "left hand, right hand" code. The reality, of course, is that I was already doing a lot of work mapping database tables to object graphs, transforming domain model objects to DTO's to send over the wire, and crafting database views to transform our raw data into something more conducive to reporting requirements. In a way, CQRS just explicitly calls out a large part of software development effort that is often overlooked. If we simply accept the idea that different consumers and producers of the persisted state in our system naturally have different needs as far as how the same information is written, structured, and consumed, CQRS isn't really "crazy talk" or extra work. One of the biggest differences is that with event sourcing + CQRS you probably try to pre-build and persist the read side views instead of trying to create views or DTO's on the fly from the "one true database model."

 

Some thoughts on Relational Databases

I’m very much in the camp that says that the database is strictly for persistence and your business logic and/or user interface should never be tightly coupled to whatever the database is, so the idea of just consuming the raw database tabular data in business logic code is a non-starter for me — not to mention that a flat database table structure is very rarely the exact structure that you’d want in your business logic code outside of CRUD-centric applications. I’ve been a part of technical arguments with database-centric folks for so long that I’m simply happy to say “agree to disagree” on these issues and let us all go on our way.

There’s a tremendous amount of inertia and investment in tooling in our industry in regards to the usage of relational databases as the de facto standard for just about all persistence needs. Additionally, most developers, testers, and even the business people seem to naturally understand the relational database model. Even so, as alternative models like document or graph databases build up more tooling, acceptance, and developer familiarity, I think that relational databases will eventually be consigned to reporting applications or pure CRUD applications (but even then I prefer document databases).

That being said, I think that the future really is “polyglot persistence” and that our children are going to laugh at us in decades to come when we explain how we built systems against relational databases.

Continuous Design and Reversibility at Agile Vancouver (video)

In November I got to come out of speaking retirement at Agile Vancouver.  Over a couple days I finally got to meet Mike Stockdale in person, have some fun arguments with Adam Dymitruk, see some beautiful scenery, and generally annoy the crap out of folks who are hoarding way too much relational database cheese in my talk called Continuous Design and Reversibility (video via the link).

I think the quality of reversibility in your architecture is a very big deal, especially if you have the slightest interest in effectively doing continuous design.  Roughly defined, reversibility is your ability to alter or delay elements of your software architecture.  Low reversibility means that you’re more or less forced to get things right upfront and it’s expensive to be wrong — and sorry, but you will be wrong about many things in your architecture on any non-trivial project.  By contrast, using techniques and technologies that have higher reversibility qualities vastly improves my ability to delay technical decisions so that I can focus on one thing at a time like say, building out the user interface for a feature to get vital user feedback quickly without having to first lay down every single bit of my architecture for data access, security or logging first.  In the talk, I gave several concrete examples from my project work including the usage of document databases instead of relational databases.

Last Responsible Moment

I think we can all conceptually agree with the idea of the “Last Responsible Moment,” meaning that the best time to make a decision is as late in the project as possible when you have the most information about your real needs.  How “late” your last responsible moment is for any given architectural decision is largely a matter of reversibility.

For the old timers reading this, consider the move from VB6 with COM to .Net a decade and change ago.  With COM, adding a new public method to an existing class or changing the signature of an existing public method could easily break the binary compatibility, meaning that you’d have to recompile any downstream COM components that used the first COM component.  In that scenario, it behooved you to get the public signatures locked down and stable as fast as possible to avoid the clumsiness and instability with downstream components — and let me tell you youngsters, that’s a brittle situation because you always find reasons to change the API’s when you get deep into your requirements and start stumbling into edge cases that weren’t obvious upfront.  Knowing that you can happily add new public members to .Net classes without breaking downstream compatibility, my last responsible moment for locking down public API’s in upstream components is much later than it was in the VB6 days.

The original abstract:

From a purely technical perspective, you can almost say that Extreme Programming was a rebellion against the traditional concept of “Big Design Upfront.” We spent so much time explaining why BDUF was bad that we might have missed a better conversation on just how to responsibly and reliably design and architect applications and systems in an evolutionary way.

I believe that the key to successful continuous or evolutionary design is architectural “reversibility,” the ability to reverse or change technical decisions in the code. Designing for reversibility helps a team push back the “Last Responsible Moment” to make more informed technical decisions.

I work on a very small technical team building a large system with quite a bit of technical complexity. In this talk I’ll elaborate on how we’ve purposely exploited the concept of reversibility to minimize the complexity we have to deal with at any given time. More importantly, I’ll talk about how reversibility led us to choose technologies like document databases, how we heavily exploit conventions in the user interface, and the testing process that made it all possible. And finally, just to make the talk more interesting, I’ll share the times when delaying technical decisions blew up in our faces.