Software Development Horror Stories and Scar Tissue

My best friend (also a developer) and I meet for lunch almost every Friday. As experienced developers often do, we got to swapping horror stories this past Friday. I had so much fun telling some of my favorite stories that I thought I’d repeat some of them here just to break my blogging drought. As I started doing just that, it occurred to me that there’s a deeper theme here: how we process our bad experiences, what we learn from them, and how we might be a little more nuanced instead of making a knee-jerk “guard rail to guard rail” swing away from whatever we deem to be the cause of those bad experiences.

Just think, how many times have you said or heard another developer say some form of the following two statements:

  1. I’ll never use [IoC containers/NoSql/Design Patterns/mock objects] ever again because this one time at band camp I saw some people on a project (not me, of course) try to use that and it was horrible! Never again!
  2. We threw the baby out with the bathwater on [UML/Pair Programming/Stored Procedures/something that’s valuable sometimes but not always]

UML here is the obvious example of a technique that was horribly abused in the past and is now largely discredited. While I lived through the age of absurd UML overuse and I’ve made fun of UML-only architects, I still think that some knowledge of UML can be very useful to a developer. Specifically, I’ll still sketch class diagrams to understand a subsystem, or a sequence diagram to think through a tricky interaction between classes or subsystems. With multi-core processors and the push for so much more parallelization, I think activity diagrams could come back as a way to think through the timing and coordination between tasks running in different threads.

The Centralized Architecture Team

Not long after I started my current job we had a huge phone call to discuss some potential changes to our development methods and projects. Someone suggested that we form a dedicated, centralized architecture team and I shot it down vehemently and colorfully. Why, you ask? Because…

The most useless I’ve ever felt in my career was the 18 months I spent as the junior-most member of a centralized architecture team in a large enterprise IT organization. My shop at the time had decided to double down on formal waterfall processes with very well-defined divisions of specialized responsibility. Architects made high level designs, technical leads made lower level designs, and the developers coded what they were told. As you’ve probably guessed, we architects were not to be doing any actual coding. As you’ve probably also guessed, it didn’t work at all and most teams faked their compliance with the official process in order to get things done.

I firmly believe that at the core of all successful software development is a healthy set of feedback mechanisms to nudge a team toward better approaches. A non-coding architect who doesn’t stick around for the actual coding doesn’t have any real feedback mechanism to act as a corrective function on his or her grandiose technical visions.

I witnessed this firsthand when I was asked to come in behind one of my architect peers on what should have been a relatively simple ETL project. My peer had decided that the project should consist of a brand new VB6 application with lots of elaborate plugin points and Xml configuration to accommodate later features. The team who actually had to deliver the project wanted to just use the old Sql Server DTS infrastructure to configure the ETL exchanges and be done with it in a matter of days — but my peer had told them that they couldn’t do that because of corporate standards and they’d have to build the new infrastructure he’d designed on paper.

When I took over as the “architect,” the team was struggling to build this new ETL framework from scratch. I was able to clear the usage of DTS with one quick phone call and sheepishly told the project manager that he had my official blessing to do the project in the way that he and his team already knew was the easiest way forward. I felt like an ass. The team had the project well in hand and would have already been done if not for the “guidance” of my architecture team.

I’ve had other bad experiences with centralized architecture teams over the years in several other large companies. In each negative case I think there was a common theme — the centralized architecture team was too far removed from the actual day to day work (and many of the members of those teams knew that full well and worked in constant frustration). There’s some obvious potential goodness in sharing technical knowledge and strategy across projects, but my opinion is that this is best done with a virtual team of technical leaders from the various project teams in an organization sharing solutions and problems, brown bag presentations, and just rotating developers between teams once in a while, rather than a dedicated team off by themselves.

I ended up replacing my non-coding architect peer on a couple later projects and every single time I walked into a situation where the new team hated the very idea of an “architect” because of their scar tissue. Good times.

 

Web Services Everywhere!

The undisputed technical leader of the same centralized architecture team had a grandiose SOA vision of exposing all of our business processes as loosely coupled SOAP web services. I was dubious of some of the technical details at the time (see Lipstick on a Pig), but there was a definite kernel of solid thinking behind the strategy and I didn’t have any choice but to get on board the SOA train anyway.

A couple months in, our SOA visionary gave a presentation to the rest of the team upon completion of the first strategic building-block web service, which we were supposed to use for all new development from then on. The new standard data service exposed an HTTP endpoint that accepted SOAP-compliant Xml with a single body element that would contain a SQL string. The web service would execute the SQL command against the “encapsulated” database and return a SOAP-compliant Xml response representing the data or status returned by the SQL statement in the request Xml.

That’s right: instead of coupling our applications to well understood, relatively efficient technologies like JDBC or ODBC and an application database, we were going to follow the approach of one big damn database that would only be accessed by Xml over HTTP, with each application still tightly coupled to the structure of that database (kind of what I call the “pond scum application” anti-pattern, where little applications hang off a huge shared database).
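To make the horror concrete, here's roughly the shape of that “data service,” sketched from memory in Python with sqlite3 standing in for the shared database (all names are hypothetical; the real thing spoke SOAP over HTTP, but the essential sin — shipping raw SQL strings in the request body — is the same):

```python
import sqlite3
import xml.etree.ElementTree as ET

def handle_request(soap_xml: str, connection: sqlite3.Connection) -> str:
    """Pull the raw SQL string out of the request body and run it verbatim
    against the shared database -- every caller stays coupled to the schema,
    just with XML-over-HTTP overhead added on top."""
    sql_element = ET.fromstring(soap_xml).find(".//Sql")
    cursor = connection.execute(sql_element.text)
    rows = ET.Element("Rows")
    for record in cursor.fetchall():
        row = ET.SubElement(rows, "Row")
        for value in record:
            ET.SubElement(row, "Value").text = str(value)
    return ET.tostring(rows, encoding="unicode")

# The "encapsulated" database
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE parts (id INTEGER, name TEXT)")
db.execute("INSERT INTO parts VALUES (1, 'widget')")

request = "<Envelope><Body><Sql>SELECT name FROM parts</Sql></Body></Envelope>"
print(handle_request(request, db))  # <Rows><Row><Value>widget</Value></Row></Rows>
```

Note that the “encapsulation” encapsulates nothing: every client still has to know the table and column names to write its SQL.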

My older, more experienced colleagues mostly nodded sagaciously, murmuring that this was a good approach (I’m sure now that they all thought the approach sounded nuts but were intimidated by our visionary and didn’t call out the quite naked emperor in the room). I sputtered and tried to explain later to my boss that this was the worst approach we could possibly take, but I was told to change my attitude and get with the program.

My lesson learned from that experience? The most experienced person in the room isn’t automatically right and the new shiny development idea isn’t magically going to make things better until you really understand it. And yes, dear colleagues reading this, I do realize that now *I’m* the most experienced person in the room and I’m not always automatically right when we disagree.

Would I use SOA after I saw it done in the dumbest way possible? Yes, in certain situations like this one — but I still contend that it’s better to focus on building well factored code that’s easy to extend without the overhead of distributed services (easier testing, easier debugging, just flat out less code) when you can get away with it.

 

Database Batch Programming Woes 

Most of my horror stories come from my very first “real” IT job. My original team was hired en masse to be the support team for a new set of supply chain automation applications built by a very large consulting company (I’ll give you one guess who I’m mocking here). Just before the consultants rolled off the project I was enlisted to help their technical lead try to speed up a critical database batch script that was too slow and getting much slower a mere 6 weeks after the project was put into production. So what was wrong that made my database guru friend sputter when I told this story? As best I recall:

  1. The 3rd party ERP system that fed the batch process had some custom Java code with a nasty N+1 problem where they were doing a logical database join in memory
  2. The batch process ran several times a day and started by first inserting data into a table from the result of a cross join query (still the only time I’ve ever seen that used in the wild) of every possible part our factories used, all of our offsite logistics centers, and each factory line where the parts would be needed — regardless of whether or not that permutation made any sense whatsoever.
  3. That giant table built from the cross joins was never archived or cleaned up, and the very expensive full table scan queries in the next stage of the batch process against that table were becoming exponentially slower just a couple months into the life of the system.
  4. Somewhere in this batch process was a big query where two tables had to be joined by calculated values, one created by string concatenation of 3 fields in table #1 and a matching string created by concatenating 5 different fields in table #2 to make the correct inner join (I’m not making this up).
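To put a number on item #2, here's a toy version of that cross-join seeding step (the table sizes are invented for scale, but even modest ones multiply out badly, and remember that this table was never cleaned up between runs):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE parts (part_id INTEGER);
    CREATE TABLE centers (center_id INTEGER);
    CREATE TABLE lines (line_id INTEGER);
""")
db.executemany("INSERT INTO parts VALUES (?)", [(i,) for i in range(200)])
db.executemany("INSERT INTO centers VALUES (?)", [(i,) for i in range(20)])
db.executemany("INSERT INTO lines VALUES (?)", [(i,) for i in range(50)])

# Seed table built from the cross join: every permutation of
# part x center x line, whether or not it makes any sense
db.execute("""
    CREATE TABLE staging AS
    SELECT part_id, center_id, line_id
    FROM parts CROSS JOIN centers CROSS JOIN lines
""")
count = db.execute("SELECT COUNT(*) FROM staging").fetchone()[0]
print(count)  # 200 parts * 20 centers * 50 lines = 200000 rows, per run
```

With real-world counts of parts, the staging table gains millions of rows per run, so full table scans against it degrade with every batch cycle, exactly as described above.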

Long story short, within about 6 months my new team wrote a replacement system from scratch with two orders of magnitude more throughput than the original consultants’ code, and it didn’t suffer from degrading performance over time, simply because we were a bit smarter about our database code. Our team got a bit of acclaim and some internal awards because of how happy the business was with our new system (as a first time technical lead I made my own share of WTFs on that project too, but they were less damaging). Lessons learned?

  • Success is easiest when you start with a low bar
  • Building a quality system is so much easier to do well when you have some sort of prior art to learn from — even if that prior art is poorly done
  • Don’t be egregiously stupid about how you do your database access
  • Being smart isn’t perfectly helpful without a modicum of technical knowledge or experience (the very young consultants responsible for the horrible database designs were all very sharp, but very green).

I think you can interpret that second lesson in a couple useful ways. One, it might help to lighten up on whatever team went before you. Two, don’t be so hard on yourself when you’re having to go back and replace some software design you did previously with something much better. Doing it wrong first might have taught you how to do it better the second time around.

I left that company a couple years later to go be a fancy pants traveling consultant and I very distinctly remember seeing that consulting company’s huge ad with Tiger Woods plastered all over O’Hare airport in Chicago talking about how they’d helped my former employer with their supply chain automation and grumbling how *I’d* built that, not them.

 

FubuMVC Lessons Learned — “fubu new”, Standardization, and Polyglot Programming

EDIT:  Someone on Twitter got upset about my “or an inferior IoC tool” comment.  Most of you probably know that I’m also the author of StructureMap, and I did mean that as a joke.  In a later post about IoC integration I’ll happily admit that making FubuMVC too StructureMap-centric was a problem with adoption (we did support Autofac and had Windsor support all but done as well).  To be blunt, I think that the IoC tools in .Net are largely an interchangeable commodity item and any of the mainstream ones will get the job done just fine.

I’m still up for doing this series of technical lessons learned about FubuMVC to wring some value out of the whole process, but I feel like I’ve said enough mea culpas about our failure on documentation. From now on, instead of dog piling on me in the comments, could you just say “Docs sucked++” and then get on with whatever else it is you want to say?

Today’s blog post is partially about the danger of deviating from .Net orthodoxy, but mostly a lamentation on my part about missed opportunities and unrealized potential.  I’m actually quite proud of and happy with most of what I’m describing in this post, but it was too late to matter much and might not have ever been widely accepted.

Quickstart from the Command Line

I’ve always admired elements of Ruby on Rails, especially the almost magical “rails new” project skeleton creation that sets you up with an entire code tree with a standard build script that exposes common build tasks for testing, database migrations, and even deployment recipes — but the word “standard” in that last sentence is a very key concept.  Much of the value of “rails new” is enabled by standardizing the layout and naming conventions of a Rails codebase to make it cheaper to write reusable command line tooling.

We knew from the very beginning that we’d eventually want our very own “fubu new” analogue.  Our community built a simple one initially that would just clone a pre-canned Git repository and do some simple string replacement for your project name.  Great, and it added value immediately.  However, the FubuMVC ecosystem got bigger as we built:

  • Ripple as a better command line manager for Nuget packages in continuous integration
  • Bottles as our modularization strategy, including a command line bottles tool to package up web content files as a pre-compile step in your build script
  • FubuDocs as a command line tool to author and publish technical documentation
  • fubu run as a Katana-based development server that’s more efficient in development than anything IIS or VS based
  • FubuTransportation as our FubuMVC-based message bus
  • A slew of command line testing tools

To unlock the usefulness of all those tools above to new users, not to mention just getting a simple application codebase up and running fast, we embarked on a new effort to create a vastly improved “fubu new” story that would allow you to mix and match options, different project types, and integrate many of those tools I listed above.

At the time of this post, you can stand up a brand new FubuMVC application using the Spark view engine from scratch to do grown-up development by following these steps at the command line, assuming that you already have Ruby 1.9.3+ installed:

  1. gem install fubu
  2. fubu new MyApp --options spark

If you do this, you’ll see a flurry of activity as it:

  1. Builds a new VS.Net solution file
  2. Generates csproj files for the main application and a matching test library
  3. Creates the necessary classes to bootstrap and run a minimal FubuMVC application
  4. Invokes gems to install Rake, FubuRake, Ripple, FubuDocs, and the bottles command line tool
  5. Invokes Ripple’s equivalent to Nuget Package Restore to bring down all the necessary Nugets
  6. Opens the new Visual Studio solution

The result is a code tree that’s completely ready for grown-up development with a build script containing these tasks:

rake ci                # Target used for the CI server
rake clean             # Prepares the working directory for a new build
rake compile           # Compiles the solution src/MyApp.sln
rake compile:debug     # Compiles the solution in Debug mode
rake compile:release   # Compiles the solution in Release mode
rake default           # **Default**, compiles and runs tests
rake docs:bottle       # 'Bottles' up a single project in the solution with...
rake docs:run          # Runs a documentation project hosted in FubuWorld
rake docs:run_chrome   # Runs the documentation projects in this solution i...
rake docs:run_firefox  # Runs the documentation projects in this solution i...
rake docs:snippets     # Gathers up code snippets from the solution into th...
rake myapp:alias       # Add the alias for src/MyApp
rake myapp:chrome      # run the application with Katana hosting and 'watch...
rake myapp:firefox     # run the application with Katana hosting and 'watch...
rake myapp:restart     # touch the web.config file to force ASP.Net hosting...
rake myapp:run         # run the application with Katana hosting
rake ripple:history    # creates a history file for nuget dependencies
rake ripple:package    # packages the nuget files from the nuspec files in ...
rake ripple:restore    # Restores nuget package files and updates all float...
rake ripple:update     # Updates nuget package files to the latest
rake sln               # Open solution src/MyApp.sln
rake unit_test         # Runs unit tests for MyApp.Testing
rake version           # Update the version information for the build

This entire rake script is (I’ve added some explanatory comments for this blog post):

require 'fuburake'

@solution = FubuRake::Solution.new do |sln|
        # This is unnecessary if there's only one sln file in the code
	sln.compile = {
		:solutionfile => 'src/MyApp.sln'
	}
				 
        # This feeds the CommonAssemblyInfo.cs file we use
        # to embed version information into compiled assemblies
        # on the build server
	sln.assembly_info = {
		:product_name => "MyApp",
		:copyright => 'Copyright 2013. All rights reserved.'
	}
	
        # These are defaults now, but I still left it in the
        # template
	sln.ripple_enabled = true
	sln.fubudocs_enabled = true
end

# This one line of code below creates rake tasks as a convenience
# to run our development server in various modes
FubuRake::MvcApp.new({:directory => 'src/MyApp', :name => 'myapp'})

 

The terseness of the rake script above relies very heavily on a standardized code tree layout and naming conventions.  As long as you could accept the FubuMVC idiomatic code tree layout, I feel like we succeeded in making our ragbag collection of command line tools easy to setup and use.  Great, awesome, it’s done wonders for the Rails community and I’ve now seen and used working analogues with Scala’s Play framework and Mimosa.js’s “mimosa new” command.  A couple problems though:

  • There’s never been a successful effort in .Net to identify and codify idioms for laying out a code repository and we’ve found that many folks are loath to change their own preferences (“src” versus “source”, “lib” vs “bin”, and so on).  It might be easier if TreeSurgeon had succeeded way back when.
  • I think it was just too little, too late.  I think that OSS projects, once announced, have a short window of opportunity to be awesome or else.  I felt good about the improved “fubu new” experience just in time for NDC London this past December — but that was almost 4 years after FubuMVC started.
  • VS.Net templating was a lot of work and dealing with different project types, VS versions, and versions of .Net added overhead
  • Nuget and our own Ripple tool are still problematic
  • While common in other development communities, this is not a common approach for .Net developers
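For what it’s worth, the core trick that makes tools like FubuRake so terse is easy to sketch: if the code tree layout is standardized, everything can be inferred from a single file name. A simplified illustration (the conventions here are abbreviated stand-ins for the real ones):

```python
from pathlib import Path
import tempfile

def derive_tasks(code_tree: Path) -> dict:
    """Given the idiomatic layout (a single src/<Name>.sln plus a
    src/<Name>.Testing project), derive build tasks with zero configuration."""
    solution = next(code_tree.glob("src/*.sln"))
    name = solution.stem
    return {
        "compile": f"compile {solution}",
        "unit_test": f"run tests in src/{name}.Testing",
        "run": f"host src/{name} with the development server",
    }

# A conventional code tree: the .sln file name drives everything else
tree = Path(tempfile.mkdtemp())
(tree / "src").mkdir()
(tree / "src" / "MyApp.sln").touch()

tasks = derive_tasks(tree)
print(tasks["unit_test"])  # run tests in src/MyApp.Testing
```

The flip side, as noted above, is that every team that insists on its own “src” versus “source” layout breaks the inference and the tooling goes back to needing configuration.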

 

Why not Visual Studio Templates or Project Template Nugets?

We did support Visual Studio project templates, but I always felt that those would only be for folks who just want to play with the framework a little bit.  The feedback we got from other OSS projects that did invest heavily in VS templates was uniformly negative and I wasn’t enthusiastic about doing much with them.  It was a tremendous amount of work to build our templating engine (FubuCsProjFile), but at least what we got was something easy to author and possible to cover with automated testing (a never-ending shortcoming of so many tools originating from Redmond).

We supported a project template with Nuget very early on and even finagled an “install nuget, press F5” experience, but I always found it to be very limited and problematic.  Again, testability was an issue.  If I had it all to do over again, I’d still choose our command line approach with the mix and match selection of options (Spark or Razor?  StructureMap or an inferior IoC tool?  Want RavenDb support?  Bootstrap?  Html Conventions?).

 

The Polyglot Thing is a Double Edged Sword

There’s always some sort of meme running around in developer circles that’s meant to make the cool edgy kids feel superior to the unwashed masses.  A couple years ago there was a lot of hype about polyglot programming where you would happily mix and match different programming languages and paradigms in a single solution based on their relative strengths.

For better or worse, the FubuMVC projects were a mild example of polyglot programming and that repeatedly scared people off.  We used Ruby’s Rake tool for our project automation, and we partially replaced Nuget with gems for distributing the binaries that we used in the build scripts.  For 2.0, we’re happily ditching our original asset pipeline in favor of the Node.js based Mimosa.js tool.  At other times we also used the Python-based Sphinx for some early documentation efforts.

While I think that Rake is an outstanding tool and very easy to use, the simple need for a Ruby tool drove many people away — which is a shame because many other .Net OSS tools besides FubuMVC prefer Rake-based build scripts over the other alternatives.

It’s not hard to understand people being hesitant about having to learn non-mainstream .Net tools, but it’s still unfortunate.  One of the big problems with trying to do ambitious OSS work on .Net is that .Net simply isn’t the home of the best tools.  We repeatedly found Nuget to be lacking (a blog post for a later day) and MSBuild is, in my opinion, a complete non-starter for build automation.

As an aside, I’ve frequently seen members of the ASP.Net team complain on Twitter about having to install Ruby just to build some OSS project when they turn right around and require you to install all kinds of Visual Studio.Net add-ons and templates just to build their code.

 

Aborted Plans for FubuMVC 2.0:

For 2.0 I wanted us to push forward with more templating options and introduce a new standalone web application that you could run from some sort of “fubu new --interactive” mode to visually discover and select the options you wanted to use in your own solution.  I also wanted to stop bundling the template library into the fubu.exe tool itself in favor of using a git repository that could be easily auto-updated by fubu.exe as we extended or improved our templates.

I thought we had a strong concept for the bootstrapping, but after getting one hard look at Typesafe Activator in Scala, it’s pretty obvious that we would never have been able to match that level of polish.

I also wanted to upgrade our standard testing tools from whatever old version of NUnit we’ve been using for years to something based on the excellent Fixie tool.

 

Conclusion

In a lot of ways, I think the .Net ecosystem — even though it’s over a decade old — is immature compared to other development platforms.  I feel like we had to do way too much bespoke infrastructure (Bottles for packaging web content, Ripple for a more usable Nuget story, FubuCsProjFile for csproj file manipulation) to pull off the “fubu new” story.  I wonder a bit if what we did might be easier in a couple years when Nuget matures and the new OneGet package manager gains traction.

I feel like Visual Studio.Net was a significant hurdle in everything we tried to do with our bootstrapping story.  I think .Net would be able to innovate much faster if our community would be much more accepting of lighter weight command line tools instead of demanding much more time intensive Visual Studio integration.

My colleagues at work and I are likely moving to the Play/Akka stack on Scala and a very common refrain around the office this week is excitement over being able to use lighter weight tools like Sublime and SBT instead of being forced to work with a heavyweight IDE like VS.

 

StructureMap 3 is gonna tell you what’s wrong and where it hurts

tl;dr:  StructureMap 3 introduces some cool new diagnostics, improves the old diagnostics, and makes the exception messages a lot better.  If nothing else scroll to the very bottom to see the new “Build Plan” visualization that I’m going to claim is unmatched in any other IoC container.

I’ve had several goals in mind with the work on the shortly forthcoming StructureMap 3.0 release.  Make it run FubuMVC/FubuTransportation applications faster, remove some clumsy limitations, make the registration DSL consistent, and make the StructureMap exception messages and diagnostic tools completely awesome so that users will have a much better experience (and so I don’t have to do so much work answering questions in the user list).  To that last end, I’ve invested a lot of energy into improving the diagnostic abilities that StructureMap exposes and adding a lot more explanatory information to exceptions when they do happen.

First, let’s say that we have these simple classes and interfaces that we want to configure in a StructureMap Container:

    public interface IDevice{}
    public class ADevice : IDevice{}
    public class BDevice : IDevice{}
    public class CDevice : IDevice{}
    public class DefaultDevice : IDevice{}

    public class DeviceUser
    {
        // Depends on IDevice
        public DeviceUser(IDevice device)
        {
        }
    }

    public class DeviceUserUser
    {
        // Depends on DeviceUser, which depends
        // on IDevice
        public DeviceUserUser(DeviceUser user)
        {
        }
    }

    public class BadDecorator : IDevice
    {
        public BadDecorator(IDevice inner)
        {
            throw new DivideByZeroException("No can do!");
        }
    }

Contextual Exceptions

Originally, StructureMap used the System.Reflection.Emit classes to create dynamic assemblies on the fly to call constructor functions and setter properties for better performance than reflection alone.  Almost by accident, having those generated classes made for a decently revealing stack trace when things went wrong.  When I switched StructureMap to using dynamically generated Expressions, I got a much easier model to work with inside of the StructureMap code, but the stack trace on runtime exceptions became effectively worthless because it was nothing but a huge series of nonsensical Lambda classes.

As part of the effort for StructureMap 3, we’ve made the Expression building much, much more sophisticated to create a contextual stack trace as part of the StructureMapException message, explaining what the container was trying to do when it blew up and how it got there.  The contextual stack can tell you, from inner to outer, steps like:

  1. The signature of any constructor function running
  2. Setter properties being called
  3. Lambda expressions or Func’s getting called (you have to supply the description yourself for the Func, but SM can use an Expression to generate a description)
  4. Decorators
  5. Activation interceptors
  6. Which Instance is being constructed including the description and any explicit name
  7. The lifecycle (scoping like Singleton/Transient/etc.) being used to retrieve a dependency or the root Instance
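None of that requires StructureMap-specific machinery; the core idea is just to keep a stack of human-readable descriptions as the container recurses through dependencies, then dump that stack into any exception. A toy version in Python (emphatically not the real implementation, just the shape of the technique):

```python
class BuildError(Exception):
    pass

class Container:
    """Toy container that records what it was doing, from the outer request
    down to the failing step, and reports that trail when a build blows up."""
    def __init__(self):
        self._stack = []

    def build(self, description, factory, *args):
        self._stack.append(description)
        try:
            return factory(*args)
        except BuildError:
            raise  # already annotated by an inner frame
        except Exception as cause:
            trail = "\n".join(f"  {i + 1}. {step}" for i, step in enumerate(self._stack))
            raise BuildError(f"Failure while building:\n{trail}\n({cause})") from cause
        finally:
            self._stack.pop()

container = Container()

def make_device():
    raise ZeroDivisionError("No can do!")  # mimics BadDecorator above

def make_device_user():
    return ("DeviceUser", container.build("new ADevice()", make_device))

try:
    container.build("new DeviceUser(IDevice)", make_device_user)
except BuildError as error:
    print(error)
```

The printed error carries both build steps, so you can see that the ZeroDivisionError happened while constructing ADevice on behalf of DeviceUser, instead of staring at an opaque lambda stack trace.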

So now, let’s say that we have this container configuration that experienced StructureMap users know is going to fail when we try to fetch the DeviceUserUser object:

        [Test]
        public void no_configuration_at_all()
        {
            var container = new Container(x => {
                // Notice that there's no default
                // for IDevice
            });

            // Gonna blow up
            container.GetInstance<DeviceUserUser>();
        }

will give you this exception message telling you that there is no configuration at all for IDevice.

One of the things that trips up StructureMap users is that in the case of having multiple registrations for the same plugin type (what you’re asking for), StructureMap has to be explicitly told which one is the default (whereas some other containers will give you the first one registered and others will give you the last one in).  In this case:

        [Test]
        public void multiple_registrations_but_no_default()
        {
            var container = new Container(x => {
                // Two registrations for IDevice,
                // but neither is marked as the default
                x.For<IDevice>().Add<ADevice>();
                x.For<IDevice>().Add<BDevice>();
            });

            // Gonna blow up
            container.GetInstance<DeviceUserUser>();
        }

Running the NUnit test will give you an exception with this exception message (in Gist).

One last example: say you get a runtime exception in the constructor function of a decorating type.  That’s well off the obvious path, so let’s see what StructureMap will tell us now.  Running this test:

        [Test]
        public void decorator_blows_up()
        {
            var container = new Container(x => {
                x.For<IDevice>().DecorateAllWith<BadDecorator>();
                x.For<IDevice>().Use<ADevice>();
            });

            // Gonna blow up because the decorator
            // on IDevice blows up
            container.GetInstance<DeviceUserUser>();
        }

generates this exception.

Container.WhatDoIHave()

StructureMap has had a textual report of its configuration for quite a while, but the WhatDoIHave() feature gets some better formatting and the ability to filter the results by plugin type, assembly, or namespace to get you to the configuration that you want when you want it.

This unit test:

        [Test]
        public void what_do_I_have()
        {
            var container = new Container(x => {
                x.For<IDevice>().AddInstances(o => {
                    o.Type<ADevice>().Named("A");
                    o.Type<BDevice>().Named("B").LifecycleIs<ThreadLocalStorageLifecycle>();
                    o.Type<CDevice>().Named("C").Singleton();
                });

                x.For<IDevice>().UseIfNone<DefaultDevice>();
            });

            Debug.WriteLine(container.WhatDoIHave());

            Debug.WriteLine(container.WhatDoIHave(pluginType:typeof(IDevice)));
        }

will generate this output.

The WhatDoIHave() report will list each PluginType matching the filter, and all the Instances for that PluginType including a description, the lifecycle, and any explicit name.  This report will also tell you about the “on missing named Instance” and the new “fallback” Instance for a PluginType if one exists.

It’s not shown in this blog post, but all of the information that feeds the WhatDoIHave() report is queryable from the Container.Model property.

Container.AssertConfigurationIsValid()

At application startup time, you can verify that your StructureMap container is not missing any required configuration and generally run environment tests with the Container.AssertConfigurationIsValid() method.  If anything is wrong, this method throws an exception with a report of all the problems it found (build exceptions, missing primitive arguments, undeterminable service dependencies, etc.).

For an example, this unit test with a missing IDevice configuration…

        [Test]
        public void assert_container_is_valid()
        {
            var container = new Container(x => {
                x.ForConcreteType<DeviceUserUser>()
                    .Configure.Singleton();
            });

            // Gonna blow up
            container.AssertConfigurationIsValid();
        }

…will blow up with this report.

Show me the Build Plan!

I saved the best for last.  At any time, you can interrogate a StructureMap container to see what the entire “build plan” for an Instance is going to be.  The build plan is going to tell you every single thing that StructureMap is going to do in order to build that particular Instance.  You can generate the build plan as either a shallow representation showing the immediate Instance and any inline dependencies, or a deep representation that recursively shows all of its dependencies and their dependencies.

This unit test:

        [Test]
        public void show_me_the_build_plan()
        {
            var container = new Container(x =>
            {
                x.For<IDevice>().DecorateAllWith<BadDecorator>();
                x.For<IDevice>().Use<ADevice>();

            });

            var shallow = container.Model
                .For<DeviceUserUser>()
                .Default
                .DescribeBuildPlan();

            Debug.WriteLine("This is the shallow representation");
            Debug.WriteLine(shallow);

            var deep = container.Model
                .For<DeviceUserUser>()
                .Default
                .DescribeBuildPlan(3);

            Debug.WriteLine("This is the recursive representation");
            Debug.WriteLine(deep);
        }

generates this representation.

StructureMap 3.0 is very nearly done (no, seriously)

StructureMap 3.0, the next version of the original IoC/DI container for .Net, is almost done, and now is a great time to speak up about any improvements and/or changes you’d like to have in SM3.  You can see a list of previous updates (and a shameful pattern of stopping and starting on my part) here.  To be honest, my primary goal at this moment — and why I’m able to work on this during day job hours this week — is to improve the performance of our FubuMVC and FubuTransportation applications, with a secondary goal of improving StructureMap’s ability to explain what’s happening when things go wrong.

Big Changes and Improvements:

  • The exception messages provide contextual information about what StructureMap was trying to do when things went wrong
  • The nested container implementation is vastly improved, much faster, and doesn’t have the massive singleton behavior bug from 2.6.*
  • All of the old [Obsolete] 2.5 registration syntax has been removed, and there’s been a major effort to enforce consistency throughout the registration APIs
  • The original StructureMap.dll has been broken up into a couple pieces.  The main assembly will be targeting PCL compliance thanks to the diligent efforts of Frank Quednau, and that means that Xml configuration and anything to do with ASP.Net has been devolved into separate assemblies and eventually into different Nuget packages.  This means that StructureMap will theoretically support WP8 and other versions of .Net for the very first time.  God help me.
  • The strong naming has been removed.  My thought is to distribute separate Nuget packages with unsigned versions for sane folks and signed versions for enterprise-y folks
  • Lifecycle (scope) can be set individually on each Instance (removing a stupid limitation left over from the very early days)
  • Constructor selection can be specified per Instance
  • Improved diagnostics, both at runtime and for the container configuration (still in progress)
  • Improved runtime performance, especially for deep object graphs with inline dependencies (i.e., FubuMVC behavior chains)
  • The interception model has been completely redesigned
  • The ancient attribute model for StructureMap configuration has been mostly removed
  • The “Profile” model has been much improved

What’s Next?

You can take the pre-release builds of StructureMap 3.0 out for a spin at any time from the fubu TeamCity Nuget feed at http://build.fubu-project.org/guestAuth/app/nuget/v1/FeedService.svc.  A public push could come as early as February 1st, 2014, but I’m not pushing to the public Nuget feed until the stuff in the next paragraph is done.  My thought is that the initial release will be the core StructureMap package, StructureMap.AutoMocking, and StructureMap.AutoFactory.  The new Xml configuration package and a new ASP.Net support package will come later when and if there’s a demand for that.

The issue list is getting shorter and more specific, so I’m hopeful that development is drawing to a close.  I’m adding a lot of new explanatory acceptance tests as I write the new documentation (with FubuDocs!).  Frank is going to push through the PCL compliance, and that’ll inevitably lead to some new complexity in how we build and create the Nuget packages in our CI builds.

I’m also going to take the new bits out for a spin with a new FubuMVC application and use that to test out what the new exception messages and diagnostics look like.  The forthcoming FubuMVC.StructureMap3 package will embed new diagnostic capabilities.

Early next week, I’m going to try to use StructureMap 3 in a bigger application at Extend Health with an eye toward measuring the new performance versus 2.6.3.

Introducing FubuDocs for “Living Documentation”

TL;DR:  The FubuMVC community is finally getting its technical documentation act together with a new tool called FubuDocs.

The Wrong Way

About 5 years ago I released StructureMap 2.5 with the idea that it would permanently lock the public registration APIs into a new, shiny fluent interface that everyone would love using from then on.  As part of that release, I wrote comprehensive documentation with lots of embedded code samples painstakingly copied into the static html files and published a pure HTML website.  Then I started using StructureMap 2.5 in daily work, found out that I hated the new fluent interface, and immediately changed the public APIs in subsequent releases to smooth out the usability problems.  Unsurprisingly, I never got around to updating the now defunct documentation code samples.

Moreover, the documentation that I did write wasn’t always helpful to users because the organization of content on the site did not make sense to them and they weren’t always able to find the right content.

Fast forward several years and the FubuMVC community has built a tremendous number of potentially useful libraries, features, and frameworks that nobody knows about, mostly because I’m nearly allergic to writing documentation.  To give all our hard work an actual chance to be successful, Josh Arnold and I envisioned and built a new tool for creating and publishing living documentation that we fittingly called FubuDocs (the FubuDocs documentation at this link is created and published with FubuDocs itself).

FubuDocs Highlights

  • The documentation lives side by side with the real code
  • We “slurp” sample code directly out of the real code and automated tests so that the samples cannot get out of sync with the current API, avoiding the headaches I had earlier with the StructureMap documentation.
  • Heavily inspired by readthedocs.org, FubuDocs determines the navigation structure and navigation page elements for you based on the files in your documentation project
  • You can run a FubuDocs project website interactively using the fubudocs tool distributed as a gem.
  • The fubudocs interactive mode exposes a topic manager tool you can use to extend, reorder, and modify the documentation outline.
  • You can use a combination of Markdown syntax and custom html elements to author content
  • Exports the final content to static HTML (we are just publishing to GitHub Pages).
  • It’s “skinnable” — in theory, works on my box, nobody else has tried that yet

In a later post, I’ll talk about how we have automated the publishing and versioning of technical documentation within our continuous integration infrastructure.

Would I use RavenDb again?

EDIT on 2/12/2016: This is almost a 3 year old post, but still gets quite a few reads. For an update, I’m part of a project called Marten that is seeking to use Postgresql as a document database that we intend to use as a replacement for RavenDb in our architecture. While I’m still a fan of most of the RavenDb development experience, the reliability, performance, and resource utilization in production has been lacking. At this point, I would not recommend adopting RavenDb for new projects.

I’m mostly finished with a fairly complicated project that used RavenDb, and all is not quite well.  All too frequently in the past month I’ve had to answer the question “was it a mistake to use RavenDb?” and the more ego-bruising “should we scrap RavenDb and rebuild this on a different architecture?”  Long story short, we made it work and I think we’ve got an architecture that can allow us to scale later, but the past month was miserable, and RavenDb and our usage of it were the main culprits.

Some Context

Our system is a problem resolution system for an automated data exchange between our company and our clients.  The data exchange has long suffered from data quality issues, and hence we were tasked with building an online system to ameliorate the current manual-heavy process for resolving the data issues.  We communicate with the upstream system by receiving and sending flat files dropped into a folder (boo!).  The files can be very large, and the shape of the data is conceptually different from how our application displays and processes events in our system.  As part of processing the data we receive, we have to do a fuzzy comparison to the existing data for each logical document because we don’t have any correlation identifier from the upstream system (this is obviously a severe flaw in the process, but I don’t have much control over this issue).  The challenge for us with RavenDb was that we would have to process large bursts of data that involved both heavy reads and writes.

On the read side to support the web UI, the data was very hierarchical and using a document database was a huge advantage in my opinion.

First, some Good Stuff

  • RavenDb has to be the easiest persistence strategy in all of software development to get up and running on day one.  Granted, you’ll have to change settings for production later, but you can spin up a new project using RavenDb as an embedded database and start writing an application with persistence in nothing flat.  I’ve told some of my ex-.Net/now Rails friends that I think I can spin up a FubuMVC app that uses RavenDb for persistence faster than they can with Rails and ActiveRecord.  The combination of a document database and statically typed document classes is also dramatically lower friction in my opinion than using statically typed domain entities with NHibernate or EF.
  • I love, love, love being able to dump and rebuild a clean database from scratch in automated testing scenarios
  • I’m still very high on document databases, especially on the read side of an application.  RavenDb might have fallen down for us in terms of writes, but there were several places where storing a hierarchical document is just so much easier than dealing with relational database joins across multiple tables
  • No DB migrations necessary
  • Being able to drop down to Lucene queries helped us considerably in the UI
  • I like the paging support in RavenDb
  • RavenDb’s ability to batch up reads was a big advantage when we were optimizing our application.  I really like the lazy request feature and the IDocumentSession.Load() overloads that accept an array of ids.

Memory Utilization

We had several memory usage problems that we ultimately attributed to RavenDb and its out of the box settings.  In the first case, we had to turn off all of the 2nd level caching because it never seemed to release objects, or at least not before our application fell over from OutOfMemoryExceptions.  In our case, the 2nd level cache would not have provided much value anyway except for a handful of little entities, so we just turned it off across the board.  I think I would recommend that you only use caching with a whitelist of documents.

Also be aware that the implementations of IDocumentSession seem to be very much optimized for short transactions with limited activity at any one time.  Unfortunately, we were almost a batch driven system, and our logical transactions became quite large and potentially involved a lot of reads against contextual information.  After examining our application with a memory profiler, we determined that IDocumentSession was hanging on to data we had only read.  We solved that issue by explicitly calling Evict() to remove those objects from the IDocumentSession’s cache.
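
The evict-after-read pattern can be sketched generically.  The ISessionCache interface and BatchProcessor class below are hypothetical stand-ins I’m using for illustration, not RavenDb types (in RavenDb itself the call hangs off the session’s Advanced object):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical stand-in for the one slice of the document session we care
// about here; RavenDb's real Evict() lives on the session's Advanced object.
public interface ISessionCache
{
    void Evict(object entity);
}

public static class BatchProcessor
{
    // Uses each contextual document and then immediately evicts it, so a
    // large logical transaction doesn't pin every document it has ever read.
    public static void Process<T>(ISessionCache session,
        IEnumerable<T> contextualDocuments, Action<T> useDocument)
    {
        foreach (var document in contextualDocuments)
        {
            useDocument(document);
            session.Evict(document); // read-only data -- no reason to track it
        }
    }
}
```

The point is just that the eviction happens as part of the normal processing flow rather than as an afterthought, so the session’s identity map stays small no matter how big the batch gets.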

Don’t Abstract RavenDb Too Much

To be blunt, I really don’t agree with many of Ayende’s opinions about software development, but in regards to abstractions for RavenDb you have to play by his rules.  We have a fubu project named FubuPersistence that adds common persistence capabilities like multi-tenancy and soft deletes on top of RavenDb in an easy to use way.  That’s great and all, but we had to throw a lot of that goodness away because you so frequently have to get down to the metal with RavenDb to either tighten up performance or avoid stale data.  We were able to happily spin up a database on the fly for testing scenarios, so you might look to do that more often than trying to swap out RavenDb for mocks, stubs, or 100% in memory repositories.  Those tests are still slower than what you’d get with mocks or stubs, but you don’t have any choice when you start having to muck with RavenDb’s low level API’s.

Bulk Inserts

I think RavenDb is weak in terms of dealing with large batches of updates or inserts.  We tried using the BulkInsert functionality, and while it was a definite improvement in performance, we found it to be buggy and probably just immature (it is a recent feature).  We first hit problems with map/reduce operations not finishing after processing a batch.  We updated to a later version of RavenDb (2330), then had to retreat to our original version (2230) after hitting problems using Windows authentication in combination with the BulkInsert feature.  We saw the same issues with the edge version of RavenDb as well.  We also noticed that BulkInsert did not seem to honor the batch size settings, which caused several QA bugs under load.  We eventually solved the BulkInsert problems by sending batches of 200 documents for processing through our service bus and putting retry semantics around the BulkInsert to get around occasional hiccups.
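
The batching-plus-retry workaround looks roughly like this.  BatchedInsert is a hypothetical helper, not part of RavenDb; in the real system the insertBatch delegate wrapped RavenDb’s BulkInsert and the batches flowed through our service bus:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class BatchedInsert
{
    // Splits the documents into fixed-size batches (we used 200) and retries
    // each batch a few times to ride out transient failures in the insert.
    public static void Execute<T>(IEnumerable<T> documents,
        Action<IList<T>> insertBatch, int batchSize = 200, int maxAttempts = 3)
    {
        var batch = new List<T>(batchSize);
        foreach (var document in documents)
        {
            batch.Add(document);
            if (batch.Count == batchSize)
            {
                InsertWithRetry(batch, insertBatch, maxAttempts);
                batch = new List<T>(batchSize);
            }
        }

        // Don't forget the final, partially filled batch
        if (batch.Any()) InsertWithRetry(batch, insertBatch, maxAttempts);
    }

    private static void InsertWithRetry<T>(IList<T> batch,
        Action<IList<T>> insertBatch, int maxAttempts)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                insertBatch(batch);
                return;
            }
            catch (Exception)
            {
                // Retry the same batch until the attempts are exhausted
                if (attempt >= maxAttempts) throw;
            }
        }
    }
}
```

Keeping the batches small and idempotent is what makes the blind retry safe; if a batch can be applied twice you need dedup logic on the other side.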

The Eventual Consistency Thing

If you’re not familiar with Eventual Consistency and its implications, you shouldn’t even dream of putting a system based on RavenDb into production.  The key with RavenDb is that query/command separation is pretty well built in.  Writes are transactional, and reads by the document id will always give you the latest information, but other queries execute against indexes that are built in background threads as a result of writes.  What this means for you is a chance of receiving stale results from queries against anything but a document id.  There’s real rationale behind this decision, but it’s still a major complication in your life with RavenDb.

With our lack of correlation identifiers from upstream, we were forced to issue a lot of queries against “natural key” data and we frequently ran into trouble with stale indexes in certain circumstances.  Depending on circumstances, we fixed or prevented these issues by:

  • Introducing a static index instead of relying on dynamic indexes.  I think I’d push you to try to use a static index wherever possible.
  • Judiciously using the WaitForNonStaleResults*() methods.  Be careful with this one though, because it can have negative repercussions as well
  • In a few cases we introduced an in-memory cache for certain documents.  You *might* be able to utilize the 2nd level cache instead
  • In another case or two, we switched from using surrogate keys to using natural keys because you always get the latest results when loading by the document id.  User and login documents are the examples of this that I remember offhand.

The stale index problem is far more common in automated testing scenarios, so don’t panic when it happens.
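
In those testing scenarios, a simple polling helper goes a long way toward taming stale-index flakiness.  This Eventually class is a sketch of the pattern, not an API from RavenDb or our tooling:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

public static class Eventually
{
    // Polls a condition until it becomes true or the timeout expires.  Handy
    // in automated tests where a query might still be hitting a stale index.
    public static void ShouldBecomeTrue(Func<bool> condition,
        int timeoutMilliseconds = 5000, int pollMilliseconds = 100)
    {
        var stopwatch = Stopwatch.StartNew();
        while (!condition())
        {
            if (stopwatch.ElapsedMilliseconds > timeoutMilliseconds)
            {
                throw new TimeoutException("The condition never became true within the timeout");
            }

            Thread.Sleep(pollMilliseconds);
        }
    }
}
```

A test would wrap its stale-prone assertion in Eventually.ShouldBecomeTrue(() => queryFindsTheDocument()) instead of asserting once and failing intermittently.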

Conclusion

I’m still very high on RavenDb’s future potential, but there’s a significant learning curve you need to be aware of.  The most important thing to know about RavenDb, in my opinion, is that you can’t just use it; you’re going to have to spend some energy and time learning how it works and what some of the knobs and levers are, because it doesn’t just work.  On one hand, RavenDb has several features and capabilities that an RDBMS doesn’t, and you’ll want to exploit those abilities.  On the other hand, I do not believe that you can get away with using RavenDb with all of its default settings on a project with larger data sets.

Honestly, I think the single biggest problem on this project was not doing the heavy load testing earlier instead of at the last moment, but everybody involved with the project has already hung their heads in shame over that one and vowed never to do that again.  Doing something challenging and doing something challenging right up against a deadline are two very different things.  It is my opinion that while we did struggle with RavenDb, we would have had at least some struggle to optimize performance if we’d built with an RDBMS, and the user interface would have been much more challenging.

Knowing what I know now, I think it’s 50/50 that I would use RavenDb for a similar project again.  If they get their story fixed for bigger transactions though, I’m all in.

“Code Complete” is a polite fiction, “Done, done, done” is the hard truth

This is a mostly rewritten version of an old blog post of mine from ’06, but the content is still important and I don’t see folks talking about it very often.  Don’t you think for one second that I’ve done this perfectly on every project I’ve worked on for the past decade, but it’s still an ideal I’d like to aim for.

Before you flame me, I’m not talking about the canonical book by Steve McConnell.  What I mean is that this statement is a lie – “we’re code complete on feature xyz.”  That phrase is a lie, or at least misleading, because you really aren’t complete with the feature.  Code complete doesn’t tell you anything definitive about the quality of the code or, most importantly, the readiness of the code for actual deployment.  Code complete just means that the developers have reached a point where they’re ready to turn the code over for testing.  You say the phrase “code complete” to mark off a gate or claim earned credit for coding work done on a project schedule.  Using “code complete” to claim earned value is tempting, yet dangerous, because it doesn’t translate into business value.  The code could have lots of bugs and issues yet to be uncovered by the testers.  If the code hasn’t gone through user acceptance testing, it might not even be the right functionality.

One of my favorite aspects of eXtreme Programming from back in the day was the emphasis on creating working software instead of obsessing over intermediate progress gates and deliverables.  In direct contrast to “Code Complete,”  XP teams used the phrase “Done, done, done” to describe a feature as complete.  “Done, done, done” means the feature is 100% ready to deploy to production.

There’s quite a bit of variance from project to project, but the “story wall” statuses that I prefer to use for a Kanban type approach would go something like:

    1. On deck/not started/make sure it’s ready to be worked on
    2. In development
    3. In testing
    4. Ready for review
    5. Done, done, done

The other columns besides “done, done, done” are just intermediate stages that help the team coordinate activities and handoffs between team members.  The burndown chart informs management of the state of the iteration and helps spot problems and changes in the iteration plan, but the authoritative progress meter is the number of stories crossing the line into the “done” column.

That workflow above is a little bit like playing the game of Sorry! as a kid (or parent of a kid about that age).  If you don’t remember or never played Sorry!, the goal of the game is to get your tokens into the home area (production).  There is also a “safe zone” where your tokens are almost home, but once in a while you manage to draw cards that force you to send your tokens back into the danger area.

Just like the game of Sorry!, you don’t “win” at your software project until you push all your stories into a deployable state.

So, how do I use this “knowledge?”

I can’t claim to be any kind of Kanban expert, but I do know that my teams and I bog down badly when we have too many balls in the air.  We always seem to do best when we’re working a finite number of features and working them to completion serially, rather than having more parallel efforts running simultaneously in various states of not quite done.  By the same token, I also know that I’m much, much quicker at solving problems in the code I just worked on than in code I worked on last month.  That’s a roundabout way of saying that I want the testing and user approval of a new feature or user story to happen as close to my development effort for that feature as possible.  In a perfect world this translates to more or less keeping the developers, testers, and maybe even the UX designers and customers focused on the same user story or feature at any given time.

Digging into another old blog post, I strongly recommend Michael Feathers’ old blog post on Iteration Slop, specifically what he describes as “trailer hitched QA.”  Right now, I don’t think my current team is doing a good enough job preventing the “trailer hitched QA” problem.  We’re trying to cut into this problem by doing more upfront “executable specifications” to bring the testers and developers onto the same page before working too much on a story.  We’re also changing from a formal iterative approach that’s tempted us into just getting to “code complete” for all the stories we guessed, I’m sorry, estimated that we could do in an iteration to a continuous flow Kanban style.  My hope is that we stop worrying so much about artificial deadlines and focus more on delivering production quality features one or two features at a time.  I’m also hoping that this leads to us and the testers working more smoothly together.

Presenting at CodeMash 2013

In my continuing bid to rejoin the development world, I’m going to be co-presenting two workshops at CodeMash 2013 with Corey Kaylor and Josh Arnold.

Making Test Automation with Web Applications Work

Let’s assume you’ve accepted the arguments in favor of automating at least part of the testing against your web application and you’ve generally nailed down all the soft, fuzzy process and collaboration issues; now you’re simply left with the very hard problem of doing effective automated testing — and that’s what this workshop will concentrate on.  We’re going to be light on software process issues but very heavy on concrete technical problems and solutions.  We’ll talk about how we try to make our automated tests more reliable, faster, more resilient to user interface changes, and less work to author.  We will be showing examples using our own .Net and FubuMVC flavored stack of Storyteller2, Serenity, WebDriver, and Jasmine, but I think that the concepts and strategies should transfer directly to other platforms and tools.

Fully Operational FubuMVC 1.0

I’m very consciously using CodeMash as a forcing function to make FubuMVC arrive at a 1.0 release — documentation, new nugets, tutorials, diagnostics, and stable APIs.  I think we’re going to be able to make a pretty compelling case for why FubuMVC is worth exploring even in a world crowded with Ruby on Rails, Play, Lift, Node.js, and Sinatra.

If we manage to pull off a healthy fraction of the demos that we have planned, I’m going to do the “Now witness the firepower of this fully ARMED and OPERATIONAL battle station!” thing, but hopefully without getting thrown down an inexplicably placed well by Darth Vader afterward.  Seriously, why was there a giant, uncovered hole right in the emperor’s throne room?

Neither of these workshops will be filmed, but sometime within the next couple of months we will release Camtasia recordings of the same demos as part of our 1.0 release.

Once I’m done with the two workshops I’m going to rest my voice, take in as many talks as I can, catch up with some friends I haven’t seen in quite a while, and just generally mingle.  In particular I’m looking forward to the sessions on Continuous Deployment, client side web development, Clojure, and I want to see the Play framework in action.

See you all there in January.

Abstractions and Models aren’t Infallible

Last week I made a comment on Twitter as a little reminder to myself (link) that you have to occasionally challenge and even change the basic abstractions and domain model of your application.  I was working to extend the new FubuMVC.Authentication project so that we could use Windows and form-based authentication in the same application at work.  The core of FubuMVC.Authentication was harvested from a previous project of Josh’s and mine that had much simpler requirements.  I tried for far too long to push the new square-shaped functionality through the existing round-holed model.  Once I took a step back and laid out the requirements and how functionality did and did not vary from Windows to form-based to Twitter/Facebook/OAuth authentication, the basic abstractions changed quite a bit and I finally got some stuff done that had me stuck.

In the same week, we started to do some detailed analysis for a new user story that everybody thought could be tricky.  Once we got the business partners to give us concrete scenarios of the problems we faced, my team realized that we flat out have to change the core of our Domain Model and the relationships between entities to avoid turning our code into the kind of thing that makes me snarky on Twitter.  In this case it’s not a terrible thing because it won’t break the existing end to end acceptance tests or even much about the user interaction design.

I think we’re going to be just fine with both the authentication and the Domain Model, but for the hat trick last week, I griped on Twitter about a silly little API usability problem with an OSS tool that we use and had just upgraded.  I’ve spent quite a bit of time looking through this OSS tool’s codebase because we interact with its internals extensively in another FubuMVC related project.  Without getting too detailed, I think that the way this OSS project models its problem domain internally makes the code more complicated and certainly harder to use for us than it could be if the basic abstractions were changed to another design.  In this case, I’m familiar with the project’s history and it’s easy to see how its internal model probably worked very well with the initial, relatively simple use cases from the first release.  This project is very successful by any standard (even the fact that people gripe about it so frequently in my twitter feed is a testament to how widely used the tool really is), but they still have to be paying an opportunity cost in building out their newer features.

Re-thinking previously made decisions isn’t an obvious move in most cases, but it’s still something you’re going to have to do as a software developer.

When I’m most productive

I have days and even weeks when working code just bursts onto the screen with seemingly no effort and I pop out of bed the next morning ready to go again (it drives my wife up the wall).  Unfortunately, there are those other days when it feels like I just cannot make things move and I feel drained and burned out at the end of the day.  It’s an axiom that “mama always said there’ll be days like this,” but what if we paid attention to what makes the good days good and the bad days bad, and used what we learned to change our environment and habits for the better?

<boring caveats>

  • I’m being subjective about what I’m calling “productivity” and I know it.  Give me any kind of pseudo-scientific sort of metric and I’ll happily shoot holes into why it’s an imperfect measure*
  • Yes, you should optimize the whole, it’s not just about cutting code blah, blah, blah, preach, preach, preach, “I have people skills, don’t you understand!”  Actually, I think that’s all important too, but I’m making the assumption here that development really does mean coded/tested/approved rather than the imaginary “code complete” status.

</boring caveats>

Quick Twitch Development

I think I’m far more productive when I’m able to make very granular commits several times an hour with a clean local build for each commit.  Granted, this could be interpreted as just an impression of productivity, but I think there’s some reality to the micro-commits as an indicator of productivity.  Correlation is certainly not causation, so let’s work backwards and see what’s typically the situation when I’m able to do “quick twitch development:”

  • I’m coding within an isolated codebase and process.  Writing any kind of code that crosses process boundaries, code repositories, machines, or even just major subsystems within a big system can be much less productive in my experience.  More on that later.
  • My development tasks need to be small so that I can quickly flow from unit test to completed code to the next task
  • My unit tests and the build script in general need to run quickly so that I have a short feedback cycle between writing a bit of code and knowing that it does what I want it to do
  • My unit tests tend to cover small areas of the code and achieve small goals such that it’s rare that I need to use a debugger to understand and solve problems
  • I need to understand my problem and technical domain well enough to be able to quickly identify the tasks and steps in whatever user story or feature I’m trying to build.  I’m naturally going to be much slower when I have to get out my notebook to doodle UML or CRC cards, go to the whiteboard with a coworker, or step away from the keyboard just to think my way through the problem.

So what can we do to make development be more like the list above?  As much as I scoff at much of the hot air devoted to Domain Driven Design, paying attention to the idea of “Bounded Contexts” and trying to do most of your coding work inside one context can help.  For my part, I’ve tried to organize the FubuMVC ecosystem into more cohesive repositories and solutions to get smaller codebases where the unit test cycle and automated build cycles become much, much faster — and after the dust settled down from the churn, I’d argue that I see a much higher throughput.

As far as being able to work in small, atomic tasks, I cannot recommend the pursuit of Orthogonal Code strongly enough.  The end result should be faster unit testing feedback cycles and smaller coding tasks.  Working with monolithic blobs of code tends to make tests slower, harder to write, and more likely to push you into needing the debugger more often.

If you’re new to a project, I think you’re going to have to invest some time looking through the codebase to understand the organization of the code, the coding style the team uses, the key abstractions, and the way that responsibilities and roles are assigned within classes or functions of the codebase.  Doing this can help you be more successful in breaking down bigger coding goals into small, achievable tasks and unit tests.

My Personal Involvement with the “Goal”

Years ago I read an article from Joel Spolsky telling us about how wonderful their requirements process was for FogBugz.  I rolled my eyes and thought to myself “of course you’re doing a great job, you’re building a system you use yourself to solve your own problems.  Let’s see you do that at my job where you’re working in a domain you don’t know.”  My sarcasm aside, I think there’s a lot of value in thinking through this statement.  I know that I’m far more productive in projects where I’m:

  • Heavily invested in the success of the project (like, say, making the FubuMVC 1.0 release in January)
  • Deeply knowledgeable and somewhat enthusiastic about solving the problem (this is why “Shadow IT” is so prevalent and even successful in big companies where much more qualified IT personnel struggle to complete the same projects)
  • Getting a lot of active collaboration from the real domain experts.  I can happily derive enthusiasm for a project if the project stakeholders themselves are enthusiastic about the project and heavily engaged.  My most successful project as a professional developer was a technical mess, but the domain expert spent a lot of time with a very green technical lead and helped create a strong vision that did actually make a real difference when we rolled it out to the factory floors.

On the other hand, things just won’t go that well when nobody really believes in the project, the theoretical business partners won’t interact much with you, and it’s “just a job.”  I don’t know what it’s like where you’re at, but the job market in Austin for developers is so hot right now that I think you’re crazy for staying in a job you don’t like.

I Understand the Terrain

When other developers ask me “what should I learn?,” I invariably advise them to concentrate first on technology agnostic software fundamentals and to learn technologies or frameworks only as they need them.  That said, there’s something to be said for having a deep understanding of the technical stack, languages, and tools that you’re working with at any given time.

The sad truth is that with any given set of tools there are going to be plenty of times when you go off the rails, be it Ruby not finding a gem, .Net failing to resolve an assembly version, or the dreaded “you need new Guids” errors of long ago.  At a stand up meeting a couple years ago, one of my colleagues was struggling with a null reference exception coming from NHibernate on startup.**  I correctly guessed the root cause and helped him fix it quickly only because:

  • I just happened to know that NHibernate threw that NullReferenceException as a side effect of a configuration key being missing
  • I knew that the code he was using was being executed from an isolated AppDomain
  • I knew that when you programmatically spin up a new AppDomain, you usually (always?) need to tell it which configuration file to use for its app settings
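
That last point can be sketched in a few lines of C#.  This is a minimal, hypothetical example assuming the full .NET Framework (AppDomain creation is not supported on .NET Core and later); the names here are illustrative, not from the actual project in the story:

```csharp
using System;

public static class IsolatedDomainExample
{
    public static AppDomain CreateIsolatedDomain()
    {
        var setup = new AppDomainSetup
        {
            // Run the child domain out of the same directory as the parent
            ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,

            // The easy line to forget: without this, code running inside
            // the new AppDomain sees an empty configuration, and anything
            // that reads appSettings (NHibernate included) can fail in
            // confusing ways like a NullReferenceException on startup
            ConfigurationFile =
                AppDomain.CurrentDomain.SetupInformation.ConfigurationFile
        };

        return AppDomain.CreateDomain("IsolatedDomain", null, setup);
    }
}
```

Reusing the parent domain's `SetupInformation.ConfigurationFile` is the simplest fix when the child domain should see the same settings as the host process.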

As much as I think that conceptual ideas like design patterns and design fundamentals are more valuable on the whole than technical trivia, I would have been at a complete loss if I hadn’t understood the inner workings of the tooling we were using at the time.  On the other hand, I’ve struggled mightily when I’ve had to work with technologies with which I’m not as familiar: the last time I coded on the Java stack, X.509 certificates, the first week we used RavenDb last year.

Next time…

There’s a lot more to talk about, but my battery is almost dead and I have to take my son to go play Laser Tag for his birthday.  The single biggest thing that drags down *my* productivity is dealing with development environment dependencies when those upstream dependencies are changing underneath me.  I’m going to promote that to its own blog post.

* My personal metric of coding production is the number of unit and integration tests for a codebase, but it’s only useful in blocks of 25 or larger over significant lengths of time.

** “But Jeremy, that was probably just the case of a bad exception that should have been fixed in that API.”  I’d agree with you, and you should endeavor to make exception messages as clear as possible when you’re writing APIs for other developers.  That said, it’s an imperfect world — especially if you’re gonna go off and use someone else’s code.