
StructureMap in 2015/16

I’ve been absurdly bogged down most of this calendar year rewriting the Storyteller tool we use at work for acceptance and regression testing. Now that I’m past that crunch, I’m finally able to focus on other things like StructureMap, this blog, and the still unfinished StructureMap documentation. That being said, here’s what’s going on with StructureMap now and in the near future.

Documentation

StructureMap and I have earned a bad reputation over the years for being under-documented and having out of date documentation. As of about an hour ago, I finished converting the existing StructureMap 3.0 documentation site to the far better “living documentation” tooling I built as part of my Storyteller work earlier this year. There are still far too many holes in that site, but I’m hoping to knock out a couple topics a week until they’re complete. With the new tooling, I think it would be much easier for other folks to contribute to the documentation effort.

Strong Naming and StructureMap. Sigh.

My position on strong naming with StructureMap 3 and beyond is that I do not want the main packages to be strong named. I have said that I would consider supporting a signed version of StructureMap 3 in a parallel nuget if someone did the pull request for that support. Lo and behold, someone finally did that, so look for the structuremap-signed package if you absolutely have to have a strong named version of StructureMap. That being said, I still fervently recommend that you do not use the signed version unless you absolutely have to in your situation.

StructureMap 3.2 in 2015

I’m just about to start some work on some new features and internal improvements for StructureMap that will make up a 3.2 release. Using Semantic Versioning rules, there should be no breaking API changes from 3.* to 3.2. Right now, I think the major changes are:

  1. Optimize the type scanning and convention registration. This was the one big subsystem that I left essentially untouched in the push to 3.0 and sure enough, it’s not holding up well for some users. I have some ideas about how to improve the performance and usability of the type scanning that I have already started exploring in a private branch (a sketch of the scanning API in question follows this list).
  2. Optimize and tighten up the “TryGetInstance()” runtime behavior under heavy multi-threaded loads. I don’t use this feature myself and I try to discourage its usage overall, but all the ASP.Net frameworks (MVC & Web API) use it in their integration and that’s been problematic for a couple of folks.
  3. Some new syntactical sugar to verify specific container registrations
  4. New types of Container wide conventions or policies that act against the entire container
  5. Take advantage of default parameter values
  6. Features to make StructureMap easier to configure from mix and match sources for highly modular architectures
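
For a little context on item #1 up above, here’s a bare bones sketch of the kind of convention-based registration that the type scanning subsystem powers today. The IHandler interface is just a placeholder for illustration:

var container = new Container(_ =>
{
    _.Scan(scan =>
    {
        // scan the assembly that's calling into StructureMap
        scan.TheCallingAssembly();

        // apply the default IFoo -> Foo naming convention
        scan.WithDefaultConventions();

        // register every concrete type implementing the (hypothetical) IHandler interface
        scan.AddAllTypesOf<IHandler>();
    });
});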

StructureMap 4 in 2016(?)

I *think* that StructureMap 4.0 is going to be all about the new stuff:

  • Use Roslyn runtime code generation in place of the current strategy of building and then compiling Expression trees at runtime. I don’t know that this is going to result in faster code, but I am hopeful that it makes the guts of StructureMap’s internals more approachable (see the sketch after this list). Really though, that’s because I want to use the Roslyn functionality on a proposed replacement for FubuMVC next year.
  • Maybe use Roslyn’s support for compiler symbols in place of the existing type scanning?
  • Support CoreCLR
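
Just to make the first bullet concrete, here’s a rough sketch of what runtime code generation with Roslyn looks like in general. This is emphatically not StructureMap code, just an illustration of the compiler APIs involved:

using System;
using System.IO;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

public static class RuntimeCompiler
{
    // Compile a chunk of C# source into an in-memory assembly at runtime
    public static Assembly CompileSource(string source)
    {
        var syntaxTree = CSharpSyntaxTree.ParseText(source);

        var compilation = CSharpCompilation.Create(
            "Generated" + Guid.NewGuid().ToString("N"),
            new[] { syntaxTree },
            new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) },
            new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

        using (var stream = new MemoryStream())
        {
            var result = compilation.Emit(stream);
            if (!result.Success)
            {
                throw new InvalidOperationException("Compilation failed");
            }

            return Assembly.Load(stream.ToArray());
        }
    }
}

The appeal is that generated-and-compiled code like this is something you can actually read and step through, as opposed to a pile of Expression trees.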

Re-tooled the Codebase

I changed up the build automation tooling for StructureMap a couple weeks ago in an attempt to make the code easier to work with for more mainstream .Net developers.

I started from:

  • Rake (Ruby) as a build scripting tool
  • The old Ripple tool from the FubuMVC family of projects for Nuget support
  • The build script had to be executed at least once before working with the code to generate some required files and fetch all the necessary nuget requirements
  • Good ol’ fashioned NUnit for unit testing

The usage of Ruby for build automation has been off-putting to many .Net developers and the vast majority of .Net developers seem to prefer being able to open Visual Studio.Net and just go to work. Based on the twin desires to make the StructureMap build faster and easier for most .Net developers, the new build tooling changed to:

  • Use Paket as the nuget management tool with its auto-restore capability
  • After some conversations with the NancyFx team over the years, I stole their idea of using Rake but making it completely optional for potential contributors
  • Replaced NUnit with Fixie, a much better and faster testing library from some of the Headspring folks here in Austin

Succeeding with Automated Integration Tests

tl;dr: This post is an attempt to codify my thoughts about how to succeed with end to end integration testing. A toned down version of this post is part of the Storyteller 3 documentation.

About six months ago the development teams at my shop came together in kind of a town hall to talk about the current state of our automated integration testing approach. We have a pretty deep investment in test automation and I think we can claim some significant success, but we also have had some problems with test instability, brittleness, performance, and the time it takes to author new tests or debug existing tests that have failed.

Some of the problems have since been ameliorated by tightening up on our practices — but that still left quite a bit of technical friction and that’s where this post comes in. Since that meeting, I’ve been essentially rewriting our old Storyteller testing tool in an attempt to address many of the technical issues in our automated testing. As part of the rollout of the new Storyteller 3 to our ecosystem, I thought it was worth a post on how I think teams can be more successful at automated end to end testing.

Test Stability

I’ve worked in far too many environments and codebases where the automated tests were “flakey” or unreliable:

  • Teams that do all of their development against a single, shared development database such that the data setup is hard to control
  • Web applications with a lot of asynchronous behavior are notoriously hard to test and the tests can be flakey with timing issues — even with all the “wait for this condition on the page to be true” discipline in the world.
  • Distributed architectures can be difficult to test because you may need to control, coordinate, or observe multiple processes at one time.
  • Deployment issues or technologies that tend to hang on to file locks, tie up ports, or generally lock up resources that your automated tests need to use

To be effective, automated tests have to be reliable and repeatable. Otherwise, you’re either going to spend all your time trying to discern if a test failure is “real” or not, or you’re most likely going to completely ignore your automated tests altogether as you lose faith in them.

I think you have several strategies to try to make your automated, end to end tests more reliable:

  1. Favor white box testing over black box testing (more on this below)
  2. Closely related to #1, replace hard-to-control infrastructure dependencies with stub services, even in functional testing. I know some folks absolutely hate this idea, but my shop is having a lot of success in using an IoC tool to swap out dependencies on external databases or web services in functional testing that are completely out of our control (see the sketch after this list).
  3. Isolate infrastructure to the test harness. For example, if your system accesses a relational database, use an isolated schema for the testing that is only used by the test harness. Shared databases can be one of the worst impediments to successful test automation. It’s both important to be able to set up known state in your tests and to not get “false” failures because some other process happened to alter the state of your system while the test is running. Did I mention that I think shared databases are a bad idea yet?*
  4. Completely control system state setup in your tests or whatever build automation you have to deploy the system in testing.
  5. Collapse a distributed application down to a single process for automated functional testing rather than try to run the test harness in a different process than the application. In our functional tests, we will run the test harness, an embedded web server, and even an embedded database in the same process. For distributed applications, we have been using additional .Net AppDomains to load related services and using some infrastructure in our OSS projects to coordinate the setup, teardown, and even activity in these services during testing time.
  6. As a last resort for a test that is vulnerable to timing issues and race conditions, allow the test runner to retry the test
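
To put a little code behind strategy #2, here’s a minimal sketch of what swapping a hard-to-control dependency for a stub might look like with StructureMap. ApplicationRegistry, IExchangeRateService, and the stub class are hypothetical names used purely for illustration:

// Build the container with the real application wiring
var container = Container.For<ApplicationRegistry>();

// Swap the external web service gateway for a canned stub, just for this test
container.Configure(_ =>
{
    _.For<IExchangeRateService>().Use(new StubExchangeRateService(rate: 1.25m));
});

// Now drive the application through its own services and assert on the outcome
var quoting = container.GetInstance<IQuotingService>();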

Failing all of those things, I definitely think that if a test is so unstable and unreliable that it renders your automated build useless, you should just delete that test. I think a reliable test suite with less coverage is more useful to a team than a more expansive test suite that is not reliable.

You Gotta Have Continuous Integration

This section isn’t the kind of pound-on-the-table, Uncle Bob-style “you must do this or you’re incompetent” rant that causes the Rob Conerys of the world to have conniptions. Large scale automation testing simply does not work if the automated tests are not running regularly as the system continues to evolve.

Automated tests that are never or seldom executed can even be a burden on a development team that still tries to keep that test code up to date with architectural changes. Even worse, automated tests that are not constantly executed are not trustworthy because you no longer know if test failures are real or just because the application structure changed.

Assuming that your automated tests are legitimately detecting regression problems, you need to determine what recent change introduced the problem — and it’s far easier to do that if you have a smaller list of possible changes and those changes are still fresh in the developer’s mind. If you are only occasionally running those automated tests, diagnosing failing tests can be a lot like finding the proverbial needle in the haystack.

I strongly prefer to have all of the automated tests running as part of a team’s continuous integration (CI) strategy — even the heavier, slower end to end kind of tests. If the test suite gets too slow (we have a suite that’s currently taking 40+ minutes), I like the “fast tests, slow tests” strategy of keeping one main build that executes the quicker tests (usually just unit tests) to give the team reasonable confidence that things are okay. The slower tests would be executed in a cascading build triggered whenever the main build completes successfully. Ideally, you’d like to have all the automated tests running against every push to source control, but even running the slower test suites in a nightly or weekly scheduled build is better than nothing.

Make the Tests Easy to Run Locally

I think the section title is self-explanatory, but I’ve gotten this very wrong in the past in my own work. Ideally, you would have a task in your build script (I still prefer Rake, but substitute MSBuild, Fake, Make, Gulp, NAnt, whatever you like) that completely sets up the system under test on your machine and runs whatever test harness you use. In a less perfect world a developer has to jump through hoops to find hidden dependencies and take several poorly described steps in order to run the automated tests. I think this issue is much less problematic than it was earlier in my career as we’ve adopted much more project build automation and moved to technologies that are easier to automate in deployment. I haven’t gotten to use container technologies like Docker myself yet, but I sure hope that those tools will make doing the environment setup for automating tests easier in the future.

Whitebox vs. Blackbox Testing

I strongly believe that teams should generally invest much more time and effort into whitebox tests than blackbox tests. Throughout my career, I have found that whitebox tests are frequently more effective in finding problems in your system – especially for functional testing – because they tend to be much more focused in scope and are usually much faster to execute than the corresponding black box test. White box tests can also be much easier to write because there’s simply far less technical stuff (databases, external web services, service buses, you name it) to configure or set up.

I do believe that there is value in having some blackbox tests, but I think that these blackbox tests should be focused on finding problems in technical integrations and infrastructure whereas the whitebox tests should be used to verify the desired functionality.

Especially at the beginning of my career, I frequently worked with software testers and developers who just did not believe that any test was truly useful unless the testing deployment was exactly the same as production. I think that attitude is inefficient. My philosophy is that you write automated tests to find and remove problems from your system, but not to prove that the system is perfect. Adopting that philosophy, favoring white box over black box testing makes much more sense.

Choose the Quickest, Useful Feedback Mechanism

Automating tests against a user interface has to be one of the most difficult and complex undertakings in all of software development. While teams have been successful with test automation using tools like WebDriver, I very strongly recommend that you do not test business logic and rules through your UI if you don’t have to. For that matter, try hard to test business logic without involving the database. What does this mean? For example:

  • Test complex logic by calling into a service layer instead of the UI. That’s a big issue for one of the teams I work with who really needs to replace a subsystem behind http json services without necessarily changing the user interface that consumes those services. Today the only integration testing involving that subsystem is done completely end to end against the full stack. We have plenty of unit test coverage on the internals of that subsystem, but I’m pretty certain that those unit tests are too coupled to the implementation to be useful as regression or characterization tests when that team tries to improve or replace that subsystem. I’m strongly recommending that that team write a new suite of tests against the gateway facade service to that subsystem for faster feedback than the end to end tests could ever possibly be.
  • Use Subcutaneous Tests even to test some UI behavior if your application architecture supports that
  • Make HTTP calls directly against the endpoints in a web application instead of trying to automate the browser, if that’s a useful way to test out the backend (see the sketch after this list).
  • Consider testing user interface behavior with tightly controlled stub services instead of the real backend
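
As a quick sketch of the “make HTTP calls directly” bullet, a backend-focused test can be as simple as an HTTP round trip with HttpClient. The url and endpoint here are completely made up:

using System.Net.Http;
using System.Threading.Tasks;

public class OrderStatusEndpointTests
{
    public async Task returns_the_status_for_a_known_order()
    {
        using (var client = new HttpClient())
        {
            // hit the endpoint directly instead of scripting a browser through the UI
            var response = await client.GetAsync("http://localhost:5500/orders/42/status");

            response.EnsureSuccessStatusCode();
            var json = await response.Content.ReadAsStringAsync();

            // assert against the json payload with whatever assertion library you favor
        }
    }
}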

The general rule we encourage in test automation is to use the “quickest feedback cycle that tells you something useful about your code” — and user interface testing can easily be much slower and more brittle than other types of automated testing. Remember too that we’re trying to find problems in our system with our tests instead of trying to prove that the system is perfect.

Setting up State in Automated Tests

I wrote a lot about this topic a couple years ago in My Opinions on Data Setup for Functional Tests, and I don’t have anything new to say since then;) To sum it up:

  • Use self-contained tests that set up all the state that a test needs.
  • Be very cautious using shared test data
  • Use the application services to set up state rather than some kind of “shadow data access” layer (see the sketch after this list)
  • Don’t couple test data setup to implementation details. I.e., I’d really rather not see gobs of SQL statements in my automated test code
  • Try to make the test data setup declarative and as terse as possible
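
As a sketch of what I mean by using the application services for state setup, test data setup can read something like this instead of a pile of INSERT statements. The container is assumed to be the real application container, and the service and model types are hypothetical:

// Resolve the real application service from the container so that the test data
// goes through all of the normal business rules and validation
var customers = container.GetInstance<ICustomerService>();

// Declarative, self-contained state for this one test
var customer = customers.Create(new NewCustomer
{
    Name = "Acme",
    Terms = "Net30"
});

// ... now exercise the system against that known state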

Test Automation has to be a factor in Architecture

I once had an interview with a company that makes development tools. I knew going in that their product had some serious deficiencies in their automated testing strategy. When I told my interviewer that I was confident that I could help that company make their automated testing support much better, I was told that testing was just a “process issue.” Last I knew, that product was still weak in its support for automating tests against systems built with it.

Automated testing is not merely a “process issue,” but should be a first class citizen in selecting technologies and shaping your system architecture. I feel like my shop is far above average for our test automation and that is in no small part because we have purposely architected our applications in such a way to make functional, automated testing easier. The work I described in the sections above to collapse a distributed system into one process for easier testing, use a compositional architecture effectively composed by an IoC tool, and isolate business rules from the database in our systems has been vital to what success we have had with automated testing. In other places we have purposely added logging infrastructure or hooks in our application code for no other reason than to make it easier for test automation infrastructure to observe or control the application.

Other Stuff for later…

I don’t think that in 10 years of blogging I’ve ever finished a blog series, but I might get around to blogging about how we coordinate multiple services in distributed messaging architectures during automated tests or how we’re integrating much more diagnostics in our automated functional tests to spot and prevent performance problems from creeping into the application.

* There are some strategies to use in testing if you absolutely have no other choice in using a shared database, but I’m not a fan. The one approach that I want to pursue in the future is utilizing multi-tenancy data access designs to create a fake tenant on each test run to keep the data isolated for the test even if the damn database is shared. I’d still rather smack the DBA types around until they get their project automation act together so we could all get isolated databases.

Retooling Build and Test Automation Tools

tl;dr: I had a convention-based build automation approach based on Rake that we used for the FubuMVC projects and that I was proud of, but the world moved on and I’ve been replacing that with newer and hopefully better tools like Fixie, Paket, and gulp.js.

What I do today with FubuRake

I’ve generally used Rake as my build scripting tool for the last 7-8 years, even on .Net projects, and I’ve been mostly happy with it. In a grandiose attempt to simplify the build scripts across the FubuMVC ecosystem, make all the homegrown build tools we’d created easier to adopt, and support a Ruby on Rails style “fubu new” approach to bootstrapping new FubuMVC projects, I created the FubuRake library as an addon to Rake. In its most simple form, a complete working build script using FubuRake can look like this below:

require 'fuburake'

FubuRake::Solution.new do |sln|
	sln.assembly_info = {
		:product_name => "FubuCore",
		:copyright => 'Copyright 2008-2015...'
	}
end

That simple script above generated rake tasks for:

  1. Generating a “CommonAssemblyInfo.cs” file to embed semantic version numbers, CI build numbers, and git commit numbers into the compiled assemblies
  2. Compiling code
  3. Fetching, building, and publishing nuget with Ripple
  4. Running unit tests with NUnit
  5. Tasks to interact with our FubuDocs tool for documentation generation (and yes, I spent way more time building our docs tool than writing docs)
  6. Tasks to create embedded resource files in csproj files as part of our Bottles strategy for modularizing large web applications

If you typed rake -T to see the task list for this script you’d see this output:

rake ci                # Target used for the CI server
rake clean             # Prepares the working directory for a new build
rake compile           # Compiles the solution src/FubuCore.sln
rake compile:debug     # Compiles the solution in Debug mode
rake compile:release   # Compiles the solution in Release mode
rake default           # **Default**, compiles and runs tests
rake docs:bottle       # 'Bottles' up a single project in the solution with...
rake docs:run          # Runs a documentation project hosted in FubuWorld
rake docs:run_chrome   # Runs the documentation projects in this solution i...
rake docs:run_firefox  # Runs the documentation projects in this solution i...
rake docs:snippets     # Gathers up code snippets from the solution into th...
rake ripple:history    # creates a history file for nuget dependencies
rake ripple:package    # packages the nuget files from the nuspec files in ...
rake ripple:publish    # publishes the built nupkg files
rake ripple:restore    # Restores nuget package files and updates all float...
rake ripple:update     # Updates nuget package files to the latest
rake sln               # Open solution src/FubuCore.sln
rake unit_test         # Runs unit tests for FubuCore.Testing
rake version           # Update the version information for the build

As you’ve probably inferred, FubuRake depended very heavily on naming and folder layout conventions for knowing what to do and how to build out its tasks like:

  • Compile the one and only *.sln file under the /src directory using MSBuild with some defaults (.Net version = 4.0, compile target = Debug)
  • Run all the NUnit tests in folders under /src that end in *.Tests or *.Testing. In the case up above, that meant “FubuCore.Testing”
  • Build and publish all the *.nuspec files found in /packaging/nuget

You could, of course, explicitly override any of the conventional behavior. Any tool using conventions almost has to have an easy facility for overriding or breaking out of the conventions for one-off cases. It was fine and great as long as you mostly stayed inside our idioms for project layout and you were okay with using NUnit and Ripple. All in all, I’d say that FubuRake was a mild technical success, but time has passed it by. The location of MSBuild changed in .Net 4.5, breaking our msbuild support. Ripple was always problematic and it was frequently hard to keep up with Nuget features and changes. FubuDocs was a well intentioned thing, but the support for it that got embedded into FubuRake has file locking issues that sometimes forced me to shut down VS.Net in order to run the build. If FubuRake is Roland in Stephen King’s Dark Tower novels, I’d say that the world has moved on.

Time for the world to move on

I’ve been quiet on the blogging and even the Twitter front lately because I’ve been working very hard on a near rewrite of our Storyteller tool (more on this soon) that we use at work for executable specifications and integration testing. The new work includes a lot of performance driven improvements in the existing .Net code and a brand new user interface written as a Javascript single page application. At the end of last week my situation looked like this:

  • I needed to start publishing pre-release nugets and we only do that from successful CI builds
  • Our TeamCity CI build was broken because of some kind of problem with our Ripple tool that we use for Nuget management.
  • I had started the new client in a completely separate Github repository and had been using a git submodule to include the Javascript code within the Storyteller .Net code repository for full integration
  • I had no effective automation to attach the submodule if it was missing or do the initial npm install or any of the other hidden things that a developer would have to do before being able to work with the code. In other words, my “time to login screen” metric was terrible right when I’d love to start getting some other developers contributing to the project with feedback or patches.
  • I’ve wanted to upgrade or replace several pieces of the build and test automation tools I use in my OSS projects for quite some time anyway — especially where there were opportunities to replace homegrown tools I no longer want to support in favor of actively maintained OSS projects.
  • We have a heavy investment in Rake and Ruby for build automation both at work and on OSS projects. Now that we have so much dependency on Node.js tools for client asset work, we’ve gotten into this situation where project build scripts may include installing gems, nugets, and npm packages and it’s becoming a problem of technology overload and build times.

As a result, I’ve spent the last couple days swapping out tools and generally trying to make the new build automation a lot easier to use for other people. I’ve merged the client javascript code into the old Storyteller repository to avoid the whole git submodule mess. I created a new build script that does everything necessary to get both the Javascript and C# code ready for development work. And lastly, I took the very unusual step (for me) of trying to document how to use the code in a readme. All told I:

  • Replaced our old NUnit-based SpecificationExtensions I ripped off from Scott Bellware ages ago with Shouldly. I chose Shouldly because I liked how terse it is, it was an easier switch than Should or Fluent Assertions would have been, and I love their error messages. It went smoothly
  • Replaced Ripple with Paket for managing Nuget dependencies. Ripple is effectively dead, I didn’t want to invest any more time in it, and Paket has a strong, active community right now. Using Nuget out of the box is completely out of the question for me, so Paket was really the only alternative. So far, so good. I hit a few quirks with Paket using multiple feeds, but quickly learned to just be very particular about version numbers. The proof for me will be when we start using Paket across multiple upstream and downstream repositories like we did with our “floating” nuget dependencies in Ripple. It’s my hope though that tools like Paket or Ripple are no longer necessary in ASP.Net vNext but we’ll see.
  • Built a small gulp.js script to compile the .Net code and run the C# unit tests. I leaned heavily at first on Mike O’Brien’s blogpost about building .Net projects with gulp. It didn’t go as smoothly as I’d like and I partially retreated to doing some things in npm scripts instead. I think this is definitely something I’m going to watch and reconsider moving forward. EDIT 3/25: I’ve already given up on gulp.js for the .Net build and I’m in the process of just using simple Javascript files called from NPM as a replacement. Oh well, no harm done.
  • Completely replaced NUnit with Fixie, but used Fixie’s ability to emulate NUnit’s attributes and behavior as a temporary measure. The FubuMVC team has wanted to do this for quite a while because of Fixie’s crazy flexible feature set and performance — and this was the perfect opportunity to finally pull that off. I’m very happy with how this has turned out so far and I’m even seeing the promised performance improvement with the test suite consistently taking 75% of the runtime that NUnit was taking on my machine. Going forward, I’m going to look to slowly remove the NUnit attributes in favor of the cleaner Fixie idioms (sketched after this list). I’m happy with Fixie right off the bat.
  • Finally, to make sure it all “just works,” I created an overarching “npm run build” script with a matching “build.cmd” shortcut to do everything you need to do to build both the Javascript and C# code and pull down all of the various dependencies. So now, if you do a fresh clone of the Storyteller code and you already have both .Net 4.5 and Node.js >= v10 installed, you should be able to execute “npm run build” and go straight to work. And I’ll keep patching things until that’s certified to be true by other developers;)
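
For a taste of where the Fixie switch is headed, here’s roughly what a test looks like with Fixie’s default convention (a class name ending in “Tests” and public methods as tests, no attributes required) plus Shouldly’s assertions. The VersionParser class is made up for illustration:

using Shouldly;

public class VersionParserTests
{
    // no [Test] attribute necessary under Fixie's default convention
    public void parses_a_semantic_version()
    {
        var version = VersionParser.Parse("3.2.1");

        version.Major.ShouldBe(3);
        version.Minor.ShouldBe(2);
        version.Patch.ShouldBe(1);
    }
}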

In case you’re wondering, the Javascript code is built with Webpack through npm scripts that also delegate to mocha and karma for testing. Ironically, I’m using gulp.js to build the .Net code but not the Javascript code, but that’s a subject for someone else to blog;)

More on Replacing Rake with Gulp

I personally like Ruby better than Javascript, especially for the kind of scripting you do for build and test automation. From time to time I still see folks wishing that the browsers would adopt Ruby as the embedded scripting language instead of Javascript, but that metaphorical ship has long since sailed and I think more developers are going to be familiar with JS than Ruby.

I gave some thought to just trying to use pure console tools for the build automation, and some of my coworkers want to investigate using make, but there’s just enough programmatic manipulation here and there for picking up build versions and arguments from the CI server that I wanted to retain some kind of scripting language. I put this very question of replacing Rake to the StructureMap community and got myriad suggestions: F# based tools, C# tools using scriptcs, and Powershell based tools all made an appearance. My stance at this point is that the build script is best done with a low ceremony scripting language and preferably one that’s commonly understood so as to not be a barrier to entry. As much as I liked Rake personally, Ruby was a problem for us with many .Net developers. By choosing a Javascript based tool, we’re investing in what’s arguably the most widely used programming language going. And while this also forces developers to have a working installation of Node.js on their box, I think that’s going to be pretty common anyway.

Long Lived Codebases: Architecture Gardening, Rewrites, and Automated Tests

This post continues my series about my experiences maintaining and growing the StructureMap codebase over the last decade and change based on my talk this year at CodeMash. This series started with The Challenges, and this post is a direct continuation of the discussion on my last post on A Crime Against Computer Science. You’ll likely want to read the previous post first.

A Timeline

2009 – I introduced the nested container feature into StructureMap 2.6 as a purely tactical feature to fill an immediate need within a project at work. The implementation of nested containers, as I described in my previous post, was highly problematic.

2010 – I attempted a complete, ground up rewrite of StructureMap largely to address the problems with the nested container. I burned out badly on OSS work for a while before I got very far into it and never picked the work up again.

2014 – StructureMap 3.0 was released with a considerably different architecture that largely fixed the known issues with the nested container feature (I measured a 100X improvement in nested container performance in a bigger application). The 3.0 work was done by applying a series of intermediate transformative changes to the existing 2.6 codebase until the architecture essentially resembled the vision of the earlier attempt to rewrite StructureMap.

Sacrificial Architecture and Continuous Design

I’ve beaten myself up quite a bit over the years for how bad the original implementation of the nested container was, but looking at it from another perspective, I was able to deliver some real value to a project in a hurry. The original implementation and the problems it incurred definitely informed the design decisions that I made in the 3.0 release. I can be a lot easier on myself if I can just treat the initial design as Martin Fowler’s Sacrificial Architecture concept.

My one true regret is just that it took so many years to permanently address the issue. It’s fine and dandy to write some throwaway code you know isn’t that great to satisfy an immediate need if you really do come back and actually throw it away before it does a lot of harm. And as an industry, we’re really good about replacing throwaway code, right? Right?

I’ve been a big believer in the idea of continuous design my entire career — even before the Agile programming movement came around to codify that idea. Simply put, I think that most of the examples of great technical work I’ve been a part of were the direct result of iterating and evolving an idea rather than pure creation. Some of that mental iteration can happen on paper or whiteboards as you refine your ideas. Other times it seems to take some initial stumbles in code before you really learn how to better solve your coding problem.

I’m not the greatest OSS project leader of all time by any means, but the one piece of advice that I can give out with a straight face is to simply avoid being over-extended. The gap in major StructureMap releases, and the continual lack of complete documentation, is a direct result of me having way too many OSS irons in the fire.

Play the Long Game

One of the biggest lessons I’ve learned in working with long lived codebases is that you can never act as if the architecture is set in stone.

In the case of a long lived codebase that changes in functionality and technology way beyond its initial vision, you need to constantly challenge the basic architecture, abstractions, and technical assumptions rather than just try to jam new features into the existing codebase. In the case of StructureMap, there have been several occasions where doing structural changes first has made new functionality easier to implement. In the case of the nested container feature, I tried to get away with building the new feature in even though the current architecture didn’t really lend itself to the new feature and I paid for that.

I think this applies directly to my day job. We don’t really start many new projects and instead continually change years old codebases. In many cases we know we’d like to make some big changes in the application architecture or switch out elements of the technical infrastructure (we’re getting rid of Angular 1.* if it’s the last thing I do), but there’s rarely time to just stop and do that. I think that you have to play the “long game” in those cases. You still need to continuously make and refine architectural plans for what you wish the system could be, then continuously move closer to your ideal as you get the chance in the course of normal project work.

In most of my OSS projects I’ve been able to think ahead to many future features and have a general idea of how I would want to build them or what structural changes would be necessary. In the case of the original nested container implementation, I was caught flat-footed. I don’t know how you completely avoid this, but I highly recommend not trying to make big changes in a hurry without enough time spent thinking through the implications.

Tactical vs. Strategic Thinking

I think the balance between thinking tactically and strategically in regards to software projects is a valuable topic that I don’t see discussed enough. Tilt too far to the tactical side and you get dramatically shitastic code going into production so that you can say that you kinda, sorta made some sort of arbitrary deadline — even though it’s going to cause you heartburn in subsequent releases. Tilt too far to the strategic side of things and you spend all your day going architect astronaut, fiddling with the perfect project automation, and generally getting nothing done.

In my case, I was only looking at the short term gains and not thinking about how this new nested container feature should fit into StructureMap’s architecture. The improved nested container implementation in 3.0 was only possible because I stepped back and reconsidered how the architecture could be changed to better support the existing feature. Since the nested container problems were so severe, that set of architectural improvements was the main driver for me to finally sit down and make the 3.0 release happen.

Whither the Rewrite?

To my credit, I did recognize the problems with the original nested container design early on. I also realized that the StructureMap internals needed to be quite different in order to improve the nested container feature that was suddenly so important in our application architecture. I started to envision a different configuration model that would enable StructureMap to create nested containers much more efficiently and eliminate the problems I had had with lifecycle bugs. I also wanted to streamline some unnecessary duplication between nearly parallel configuration models that had grown over the years in the StructureMap code.

In 2010 I started an entirely new Github repository for StructureMap 3. When I looked at the differences between my architectural vision for StructureMap and its current state in the 2.6 era, I thought that it was going to be far too difficult to make the changes in place so I was opting for a complete, ground up rewrite (don’t bother looking for the repository, I think I deleted it because people were getting confused about where the real StructureMap code was). In this case, the rewrite flopped — mostly because I just wasn’t able to devote enough time in a burst to rebuild all the functionality.

The killer risk for doing any kind of rewrite is that you just can’t derive much value from it until you’re finished. My experience, both on side projects and in enterprise IT, is that rewrite efforts frequently get interrupted or even completely shelved before they advance far enough to be usable.

At this point, I think that unless a project is small or at least has a limited and very well understood set of functionality, you should probably avoid attempts at rewriting. The StructureMap rewrite attempt failed because I was interrupted before it got any momentum. In the end, I think it was a much wiser choice to evolve and transform the existing codebase instead — all while staying within the existing acceptance level tests in the code (more on this below). It can be significantly harder to come up with the intermediate steps such that you can evolve to your end goal than just doing the rewrite, but I think you have a much better chance of delivering value without accidentally dropping functionality.

I’m the primary developer on a couple OSS projects that have gone through, or are about to go through, a rewrite or restructure effort:

  • StructureMap 3.0 — As stated before, I gave up on the rewrite and transformed the existing code to a more effective internal model. I’m mostly happy with how this turned out.
  • Storyteller 3.0 — Storyteller is a tool for executable specifications I originally built in 2009. We use it heavily at work with some mixed success, but the WPF based user interface is clumsy and we’re having some severe throughput issues with it on a big project. Everybody wanted a complete rewrite of the WPF client into a web application (React.js with a Flux-like architecture), but I was still left with the rump .Net engine. I identified several structural changes I wanted to make to try to improve usability and performance. I tried to identify intermediate steps to take in the existing code, but in that case I felt like I needed to make too many changes that were interrelated and I was just getting tired of constantly pounding out “git checkout --force” and opted for a rewrite. In this case, I felt like the scope was limited, I could reuse the acceptance tests from the original code to get back to the same functionality, and that a rewrite was the easier approach. So far, so good, but I think this was an exception case.
  • FubuMVC 3 / “Jasper” — I know I said that I was giving up on FubuMVC and I meant it at the time, but for a variety of reasons we want to just move our existing FubuMVC .Net applications to the vNext platform this year and decided that it would be easier to modernize FubuMVC rather than have to rewrite so much of our code to support the new ASP.Net MVC/Web API combo. At the same time we want to transform FubuMVC’s old “Behavior” middleware model to just use the OWIN middleware signature throughout. After a lot of deliberation about a new, ground up framework, we’ve tentatively decided to just transform the existing FubuMVC 2 code and stay within the existing automated test coverage as we make the architectural changes.

So in three cases, I’ve opted against the rewrite twice. I think my advice at this point is to avoid big rewrites. Rewriting a subsystem at a time, sure, but not a complete rewrite unless the scope is limited.

Automated Testing is Good, Except When it Isn’t

Automated testing can most certainly help you create better designs and architectures by providing the all important quality of reversibility — and if you believe in or practice the idea of continuous design, you absolutely have to have reversibility. In other all too common cases, automated tests that are brittle can prevent you from making changes to the codebase because you’re too afraid of breaking the tests.

What I’ve found through the years of StructureMap development is that high level acceptance tests that are largely black box tests that express the desired functionality from a client perspective are highly valuable as regression tests. Finer grained tests that are tightly coupled to the implementation details can be problematic, especially when you want to start restructuring the code. When I was doing the bigger changes for StructureMap 3.0 I relied very heavily on the coarser grained tests. I would even write all new characterization tests to record the existing functionality before making structural changes when I felt like the existing test coverage was too light.

I’m not prepared to forgo the finer grained unit tests that are largely the result of using TDD to drive the low level design. Those tests are still valuable as a way to think through low level design details and keep your debugger turned off. I do think you can take steps to limit the coupling to implementation details.

If fine grained tests are causing me difficulty while making changes, I generally take one of a couple approaches:

  1. Delete them. Obviously use your best judgement on this one;)
  2. Comment the body of the test out and stick some kind of placeholder in instead that makes the test fail with a message to rewrite. I’ve fallen into the habit of using Assert.Fail(“NWO”) just to mean “rewrite to the new world order.”
  3. Write all new code off to the side to replace an entire subsystem. After switching over to using the new subsystem and getting all the acceptance level tests passing, go back and delete the old code plus the now defunct unit tests

Was I lulled to sleep by passing tests?

I had a great question from the audience at CodeMash about the poor initial implementation that caught me a little flat-footed. I was asked if “I had been lulled by passing tests into believing that everything was fine?” Quite possibly, yes. It’s certain that I wasn’t thinking about performance or how to first change the StructureMap internals to better support the new feature. I’m going to blame schedule pressure and overly tactical thinking at that time more than my reliance on TDD however.

There’s an occasional meme floating in the software world that TDD leads to developers just not thinking, and I know at the time of my talk I had just read this post from Ali Kheyrollahi and I was probably defensive about TDD. I’m very clearly in the pro-TDD camp of course and still feel very strongly that TDD can be a key contributor to better software designs. I’m also the veteran of way too many online debates about the effectiveness of TDD (see this one from 2006 for crying out loud). Unlike Ali, my experience is that there is a very high correlation suggesting that developers who use TDD are more effective than developers that don’t — but I think that might be more of a coincidental effect than a root cause.

That all being said, TDD in my experience is more effective for doing design at the small scale down at the class, method, or function level than it is as a technique for larger scale issues like performance or extensibility. While I certainly wouldn’t rule out the usage of TDD, it’s just a single tool and should never be the sole design tool in your toolbox. TDD advocate or not, I do a lot of software design with pencil and paper using an amalgamation of “UML as sketch” and CRC cards (I’m still a big fan of Responsibility Driven Design).

At the end of the day, there isn’t really any substitute for just flat out paying attention to what you’re doing. I probably should have done some performance testing on the nested container feature right off the bat or at least thought through the performance implications of all the model copying I was doing in the original implementation (see the previous post for background on that one).

Long Lived Codebases: A Crime Against Computer Science

All code samples are links to the exact lines of code in GitHub. Everything I’ve ever done to embed code into blog posts has been a PITA, so I’m just punting this time.

This continues the adaptation of my CodeMash 2015 talk about my experiences developing StructureMap over the past decade and change. This series started last week with The Challenges. This post is about the wretchedly poor original implementation of StructureMap’s “nested container” feature and how I re-architected the StructureMap internals in the 3.0 release to greatly improve the performance of this feature. I ran out of steam while writing this, so I ended up breaking this out into two posts.

A little background on nested containers

Sometime around early 2009 my team and I were building a small quasi-service bus for our system that processed messages sent to a queue. In our simple case, the processing of each message would be treated as a single logical transaction. At the time, we were using NHibernate to do all our persistence, so the ISession interface is your de facto unit of work. What we needed was for every object that would participate in the handling of a single message to use the right ISession for that message handling transaction — even if the object was resolved lazily from the StructureMap container.

What we needed was a new kind of container scoping for a logical operation independent of thread or HttpContext or any of the older mechanisms that the IoC containers typically used at that time. Inspired by a similar feature in Windsor* (I think), I conceived of what is now the Nested Container feature in StructureMap that I quickly rolled into the 2.6 release just so we could use it in our homegrown service bus at work.

From a functionality perspective, the nested container feature has been a complete success. It’s used by my own FubuMVC and FubuTransportation frameworks plus other development frameworks like MassTransit, NServiceBus, and even ASP.Net MVC and Web API through OSS adapters. That being said, the original implementation of nested containers in the 2.6.* versions was a mess that suffered from poor performance and usability bugs that took me years to address — so bad that the 3.0 release was largely wrapped around some very significant architectural changes specifically to improve the nested container feature.

* People sometimes get upset by the number of different IoC containers in .Net, but there is some real value in competition between different technical solutions to push and inspire each other. I’ve always thought that development tools in .Net would be significantly better overall if the wider .Net community would be more willing to adopt tools originating outside of Microsoft so that MS would be forced to compete for adoption. 

The Original Implementation

During my talk at CodeMash, I stated that I believe the original implementation of the nested container feature in StructureMap caused other developers more harm than any other thing I’ve ever done. So what was so bad about the original version?

There was a pretty nasty bug related to object scoping in regards to singleton scoped objects that wasn’t really addressed outside of published workarounds until the 3.0 release 3-4 years later. The biggest issues though were performance and thread contention at runtime.

One of the original requirements for the nested container was to enable users to override service registrations in a nested container without having any impact on the original, parent container.* For context, the acceptance test for this behavior demonstrates what I mean. To pull this requirement off, I made the fateful decision to make a complete copy of the parent’s internal configuration model to pass into the nested container.

To illustrate why this turned out to be such an awful approach, see the code for the method PipelineGraph.ToNestedGraph() in the 2.6 branch that’s used to create the isolated configuration model for a new nested container. In particular, see how that code is making deep, programmatic clones of several structures. See also the lock(this) statement that creates an exclusive lock around the parent object as it performs the work inside of the code block (I had to create a lock around the dictionary being cloned so that nothing else could alter that dictionary while I was in the process of iterating through it).** Doing the deep clone of the configuration models is expensive, especially when StructureMap was used in bigger applications. Even worse though, the shared lock that I had to do in order to copy the internal configuration structures meant that only one thread in the entire application could be creating a nested container at one time — which is a pretty big problem when you’re talking about a web application under significant load that wants to create an individual nested container for each unique HTTP request.

* FubuMVC exploits this ability to inject services that represent the current HTTP request into a nested container just before building the handlers for that HTTP request so that you can use constructor injection “all the way down.”

** Yes, I do buy that this is an example of where immutability can be valuable in concurrent code. Do also cut me a little bit of slack because this code was written long before .Net 4.0, the TPL, and the newer concurrent collection classes that came with it.
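
To make the shape of the problem a little more concrete, the original approach boiled down to something like the sketch below. This is not the actual 2.6 code, just an illustration of the deep clone happening inside a shared lock, with placeholder types standing in for the real configuration model:

using System;
using System.Collections.Generic;

// placeholder type purely for illustration
public class Registration
{
    public Registration Clone() { return new Registration(); }
}

public class PipelineGraph
{
    private readonly Dictionary<Type, Registration> _families = new Dictionary<Type, Registration>();

    public PipelineGraph ToNestedGraph()
    {
        // only one thread in the entire process can get past this lock at a time,
        // so every request that needs a nested container queues up right here
        lock (this)
        {
            var copy = new PipelineGraph();

            // deep clone of every registration so the nested container can be
            // overridden without touching the parent -- expensive in big applications
            foreach (var pair in _families)
            {
                copy._families.Add(pair.Key, pair.Value.Clone());
            }

            return copy;
        }
    }
}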

Re-architecting Nested Containers in 3.0 

It took about three years (and another year before a public release), but I was finally able to permanently fix (knock on wood!) the performance, thread contention, and scoping bugs related to the nested container feature in the 3.0 release. In my testing against one of our biggest codebases at work, I measured a two order of magnitude improvement in the time it took to create a new nested container. I was also able to completely eliminate the thread contention issues.

How was I able to make those improvements? Here’s the new version of PipelineGraph.ToNestedGraph() as it exists in the 3.0 code today — note that all it does is create a few new objects and pass in references to some existing objects. No deep cloning, no crazy data shuffling, and certainly no shared lock.

The nested container now has its own configuration model for its overrides and a reference to its parent’s configuration model. In the new world order, the nested container fulfills requests by using a sort of chain of responsibility pattern internally to locate the right action. If you ask a nested container for a service, it will:

  1. Look in its own configuration to see if it has an explicit override for that service. If one exists, build that configuration.
  2. If the nested container has no explicit override, it looks into its parent for the configuration for that service. Assuming one is found, the nested container builds out that configuration

This is over-simplified of course, but that’s the gist of the new structure and design.
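
If you want it in code form, an illustrative sketch (again, not the real 3.0 code) of that chain of responsibility looks something like this:

using System;
using System.Collections.Generic;

// placeholder type purely for illustration
public class Registration { }

public class ConfigurationGraph
{
    private readonly Dictionary<Type, Registration> _registrations = new Dictionary<Type, Registration>();
    private readonly ConfigurationGraph _parent;   // null for the root container

    public ConfigurationGraph(ConfigurationGraph parent = null)
    {
        _parent = parent;
    }

    public void SetDefault(Type serviceType, Registration registration)
    {
        _registrations[serviceType] = registration;
    }

    public Registration FindDefault(Type serviceType)
    {
        Registration registration;

        // 1. check this container's own overrides first
        if (_registrations.TryGetValue(serviceType, out registration)) return registration;

        // 2. otherwise fall back to the parent -- no cloning and no shared lock required
        return _parent == null ? null : _parent.FindDefault(serviceType);
    }
}

The important bit is that the parent’s configuration is shared by reference, so creating a nested container is cheap and doesn’t serialize every thread in the application through a single lock.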

I’ve had to revisit several of my OSS infrastructure projects lately (StructureMap’s nested container and shared locking problems, FubuMVC’s startup time, and now Storyteller’s throughput speed) to address performance issues. Depending upon my ambition level, I may write a blog post on those experiences.

How and Why?

So what factors might have led me to blunder so badly with the first implementation? What are the signs that I didn’t pick up on at the time that should have told me to go a different way than the flawed original approach? Why did it take me so long to get to a better state? How did I transform the StructureMap code into a very different internal structure to enable the better nested container performance? Why did I chicken out on a complete rewrite of StructureMap a few years back? In the next post I’ll attempt to answer those questions…

Ok, to be perfectly honest, I just ran out of steam and wanna hit “publish.” Till next time, laters.

Long Lived Codebases: The Challenges

I did a talk at CodeMash 2015 called “Lessons Learned from a Long Lived Codebase” that I thought went very well and I promised to turn into a series of blog posts. I’m not exactly sure how many posts it’s going to be yet, but I’m going to try to get them all out by the end of January. This is the first of maybe 4-5 theoretical posts on my experience evolving and supporting the StructureMap codebase over the past 11-12 years.

Some Background

In 2002 the big corporate IT shop I was working in underwent a massive “Dilbert-esque” reorganization that effectively trapped me in a non-coding architect role that I hated. I could claim 3-4 years of development experience and had some significant technical successes under my belt in that short time, but I’d mostly worked with the old COM-based Windows DNA platform (VB6, MTS, ADO, MSXML, ASP) and Oracle technologies right as J2EE and the forthcoming .Net framework seemed certain to dominate enterprise software development for the foreseeable future.

I was afraid that I was in danger of being made obsolete in my new role. I looked for some kind of project I could do out in the open that I could use to both level up on the newer technologies and prove to potential employers that “yes, I can code.” Being a pretty heavy duty relational database kinda guy back then, I decided that I was going to build the greatest ORM tool the world had ever seen on the new .Net platform. I was going to call it “StructureMap” to reflect its purpose of mapping the database to object structures. I read white papers, doodled UML diagrams like crazy, and finally started writing some code — but got bogged down trying to write an over-engineered configuration and modularity layer that would effectively allow you to configure object graphs in Xml. No matter, I managed to land a job with ThoughtWorks (TW) and off I went to be a real developer again.

During the short time that I worked at ThoughtWorks, Martin Fowler published his paper about Dependency Injection and Inversion of Control Containers and other folks at the company built an IoC container in Java called PicoContainer that was getting some buzz on internal message boards. I came to TW in hopes of being one of the cool kids too, so I dusted off the configuration code for my abandoned ORM tool and transformed it into an IoC library for .Net during my weekly flights between Austin and Chicago. StructureMap was put into a production application in early 2004 and publicly released on SourceForge in June of 2004 as the very first production ready IoC tool on the .Net platform (yes, StructureMap is actually older than Windsor or Spring.Net even though they were much better known for many years).

Flash forward to today and there’s something like two dozen OSS IoC containers for .Net (all claiming to be a special snowflake that’s easier to use than the others while being mostly about the same as the others), at least three (Unity, MEF, and the original ObjectBuilder) from Microsoft itself with yet another brand new one coming in the vNext platform. I’m still working with and on StructureMap all these years later after the very substantial improvements for 3.0 last year — but at this point very little remains unchanged from the early code. I’m not going to waste your time trying to sell you on StructureMap, especially since I’m going to spend so much time talking about the mistakes I’ve made during its development. This series is about the journey, not the tool itself.

What’s Changed around Me

Being 11 years old and counting, StructureMap has gone through a lot of churn as the technologies have changed and approaches have gone in and out of favor. If you maintain a big codebase over time, you’re very likely going to have to migrate it to newer versions of your dependencies, use completely different dependencies, or you’ll want to take advantage of newer programming language features. In no particular order:

  • StructureMap was originally written against .Net 1.1, but at the time of this post targets .Net 4.0 with the PCL compliance profile.
    • Newer elements of the .Net runtime like Task and Lazy<T> have simplified the code internals.
    • Lambdas as introduced in .Net 3.5 made a tremendous difference in the coding internals and had a big impact on the usage of the tool itself.
  • As I’ll discuss in a later post, the introduction of generics support into StructureMap 2.0 was like the world’s brightest spotlight shining on all the structural mistakes I made in the initial code structure of early StructureMap. Even so, I’ll still claim that the introduction of generic types has made for huge improvements in StructureMap’s usability, and it’s one of the main reasons why I think that the IoC tools in .Net are generally more usable than those in Java or Scala (see the sketch after this list).
  • The build automation was originally done with NAnt, NUnit, and NMock. As my tolerance for Xml and coding ceremony decreased, StructureMap moved to using Rake and RhinoMocks. For various reasons, I’m looking to change the automation tooling yet again to modernize the StructureMap development experience.
  • StructureMap was originally hosted on SourceForge with Subversion source control. Releases were done in the byzantine fashion that SourceForge required way back then. Today, StructureMap is hosted on GitHub and distributed as Nuget packages. Nuget packages are generated as an artifact of each continuous integration build and manually promoted to Nuget.org whenever it’s time to do a public release. Nuget is an obvious improvement in distribution over manually created zip files. It is my opinion that GitHub is the single best thing to ever happen for Open Source Software development. StructureMap has received vastly more community contribution since moving to GitHub. I’m on record as being critical of the .Net community for being too passive and not being participatory in regards to .Net community tooling. I’m pleasantly surprised with how much help I’ve received from StructureMap users since the 3.0 release last year to fix bugs and fill in usability gaps.
  • The usage patterns and the architectures that folks build using StructureMap have changed quite a bit. In a later post I’ll do a deep dive on the evolution of the nested container feature.
  • Developer aesthetics and preferences have shifted as well; again, more on that in a later post.
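
To make the generics point above concrete, here is a minimal sketch of the kind of open generic registration that generics support enables. The IHandler<T> and DefaultHandler<T> types are placeholders invented for illustration, but For(typeof(...)).Use(typeof(...)) is the StructureMap 3 syntax for mapping open generic types:

    using System;
    using StructureMap;

    // Placeholder types for illustration only
    public interface IHandler<T> { void Handle(T message); }
    public class DefaultHandler<T> : IHandler<T> { public void Handle(T message) { } }

    public class OpenGenericsExample
    {
        public static void Main()
        {
            var container = new Container(x =>
            {
                // Map the open generic interface to an open generic implementation;
                // StructureMap closes the generic type at resolution time
                x.For(typeof(IHandler<>)).Use(typeof(DefaultHandler<>));
            });

            // Resolves to DefaultHandler<string>
            var handler = container.GetInstance<IHandler<string>>();
            Console.WriteLine(handler.GetType().Name);
        }
    }

A single registration line like that covers every closed IHandler<T> the application asks for, which is exactly the kind of usability win that would not have been possible before generic types.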

Other People

Let’s face it, you and I are perfectly fine, but the “other” developers are the problem. In the particular case of a widely used library, you frequently find out that other developers use your tool in ways that you did not expect or anticipate. Frameworks that abstract the IoC container with some sort of adapter library have been some of the worst offenders in this regard.

The feedback I’ve gotten from user problems has led to many changes over the years:

  • Entirely new features. The interception capabilities, for example, were originally added to support AOP scenarios that I don’t generally use myself.
  • Changing the API to improve usability when the original verbiage turns out to be confusing
  • Lots and lots of work tweaking the internals of StructureMap as users describe architectural strategies that I would never think of, but do turn out to be useful — usually, but not always, involving open generic types in some fashion
  • New conventions and policies to remove repetitive code in the tool usage
  • Additional diagnostics to explain the outcome of the new conventions and policies from above
  • Adding more defensive programming checks to find potential problems faster. My attitude toward defensive programming is much more positive after supporting StructureMap over the years, though that probably applies most to configuration-intensive tools like an IoC container (see the sketch after this list).
  • A lot of work to improve exception messages (more on this later maybe)
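
As a concrete example of the diagnostics and defensive checks mentioned above, here is a minimal sketch against the StructureMap 3 container API. ISomeService and SomeService are placeholder types invented for illustration; AssertConfigurationIsValid() and WhatDoIHave() are the container’s built-in verification and reporting calls, but treat the exact usage here as a sketch rather than a recipe:

    using System;
    using StructureMap;

    // Placeholder types for illustration only
    public interface ISomeService { }
    public class SomeService : ISomeService { }

    public class DiagnosticsExample
    {
        public static void Main()
        {
            var container = new Container(x =>
            {
                x.For<ISomeService>().Use<SomeService>();
            });

            // Defensive check: throws with a descriptive message if any
            // registration cannot actually be built
            container.AssertConfigurationIsValid();

            // Diagnostic report of every registration the container knows about
            Console.WriteLine(container.WhatDoIHave());
        }
    }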

One thing that still needs to happen is for me to publish and maintain best practice recommendations for StructureMap. I have been upset with the developers of a popular .Net OSS tool who did, in my opinion, a wretched job of integrating StructureMap in their adapter library (to the point where I advise users of that framework to adopt a different IoC tool), but until I actually manage to publish the best practice advice that would steer people away from the very problems they caused in their StructureMap usage, those problems are probably on me. Trying to wean users off of using StructureMap as a static service locator (sketched just below) and off of being a little too extreme in applying a certain hexagonal architecture style have been constant struggles on the user group over the years.
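
To show what I mean by the static service locator problem, here is a hedged sketch of the two usage styles side by side. OrderControllerWithLocator, OrderControllerWithInjection, IOrderService, and Order are all hypothetical types invented for illustration; ObjectFactory.GetInstance<T>() is the static facade that StructureMap still shipped at the time:

    using StructureMap;

    // Placeholder types for illustration only
    public class Order { }
    public interface IOrderService { void Place(Order order); }

    // The static service locator style I try to wean users off of:
    // the dependency is hidden inside the method body and is painful to test
    public class OrderControllerWithLocator
    {
        public void Place(Order order)
        {
            var service = ObjectFactory.GetInstance<IOrderService>();
            service.Place(order);
        }
    }

    // Plain constructor injection keeps the dependency visible and lets the
    // container (or a unit test) supply it from the outside
    public class OrderControllerWithInjection
    {
        private readonly IOrderService _service;

        public OrderControllerWithInjection(IOrderService service)
        {
            _service = service;
        }

        public void Place(Order order)
        {
            _service.Place(order);
        }
    }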

I’m not sure why this is so, but I’ve learned over the years that the more vitriolic a user is being toward you online when they’re having trouble with your tool, the more likely it is that they themselves are just doing something very stupid that’s not necessarily a poor reflection on your tool. If you ever publish an OSS tool, keep that in mind before you make the mistake of opening a column in your Twitter client just to spot references to your project, or of setting up a keyword search in StackOverflow. I’ve also learned that users who have uncovered very real problems in StructureMap can be reasonable and even helpful if you engage them as collaborators in fixing the issue instead of being defensive. As I said earlier about the move to GitHub, I have routinely gotten much more assistance from StructureMap users in reproducing, diagnosing, and fixing problems over the past year than I ever had before.

Pull, not Push for New Features

In early 2008 I was preparing the grand StructureMap 2.5 release as the purported “Python 3000” release that was going to fix all the usability and performance issues in StructureMap once and for all time (Jimmy Bogard dubbed it the Duke Nukem Forever release too, but the 3.0 release took even longer;)). At the same time, Microsoft was gearing up for not one, but two new IoC tools (Unity from P&P and MEF from a different team). I swore that I wasn’t going down without a fight as Microsoft stomped all over my OSS tool, so I kicked into high gear and started stuffing StructureMap with new features and usability improvements. Those new things roughly fell into two piles:

  • Features or usability improvements I made based on my experience with using StructureMap on real projects, where I knew they would remove some friction from day-to-day usage. These features introduced in the 2.5 release have largely survived until today, and I’d declare that many of them were successful.
  • Things that I just thought would be cool, but which I had no immediate usage in my own work. You’ve already called it, much of this work was unsuccessful and later removed because it was either in the way, confusing to use, easily done in other ways, or most especially, a pain in the neck for me to support online because it wasn’t well thought out in the first place.

You have to understand that any feature you introduce is effectively inventory you have to support, document, and keep from breaking in future work. To reaffirm one of the things that the Lean Programming people have told us for years, it’s better to “pull” new features into your tool based on a demonstrated need and usage than it is to “push” a newly conceived feature in the hope that someone might find it useful later.

Yet to come…

I tend to struggle to complete these kinds of blog series, but I do have the presentation and all of the code samples, so maybe I’ll pull it off this time. I think the candidates for the following posts are something like:

  • A short discussion on backward compatibility
  • My documentation travails and how I’m trying to fix that
  • “Crimes against Computer Science” — the story of the nested container feature, how it went badly at first, and what I learned while fixing it in 3.0
  • “The Great Refactoring of Aught Eight”
  • API Usage Now and Then
  • Diagnostics and Exceptions

Thoughts from CodeMash 2015

I had a fantastic time at CodeMash last week and I’d like to thank all of the organizers for the hard work they do making one of the best development community events happen each year. I was happy with how my talk went, and I will get around to what looks like a 3 part blog series adaptation of it later this week after I catch up on other things. I had a much lighter speaking load than I’ve had in past years, which left me many more opportunities and much more energy to just talk to the other developers there. As best I can recall, here’s a smattering of the basic themes from those discussions:

  • Microservices — I lived through DCOM and all the sheer lunacy of the first wave of SOA euphoria, which makes me very leery of the whole microservices concept. I did speak to a couple of people I respect who were much more enthusiastic about microservices than I am.
  • Roslyn — I feel bad for saying this, but I’m tempering my hopes for Roslyn right now, and that’s largely a matter of me having had outsized expectations for it in the first place. I’m disappointed by the exclusion of runtime metaprogramming for now, and I think the earlier hype about how much faster the Roslyn compiler would be compared to the existing CSC compiler might have been overstated. The improved ability to introspect your code with Roslyn is pretty sweet though.
  • RavenDb — I still love RavenDb conceptually and it’s my favorite development-time database experience, but the quality issues and the poor DevOps tooling make it a borderline liability in production. From my conversations with other RavenDb users at CodeMash, this seems to be a common opinion and experience. I hate to say it, but I’m in favor of phasing out RavenDb at work over the next couple of years.
  • Postgresql — Postgres has a bit of buzz right now and I’ve been very happy with my limited usage of it so far. I talked to several people who were interested in using Postgres as a pseudo document database. I’m planning to do a lot more side work with Postgres in the coming year to see how easy it would be to use it as more of a document database and to add a .Net client to the event store implementation I was building for Node.js.
  • Programming Languages — The trend that I see is a blurring of the line between static and dynamic typing. Static languages are getting better and better type inference and dynamic languages keep getting more optional type declarations. I think this trend can only help developers over time. I do wish I hadn’t overslept the introduction to Rust workshop, but you can’t do everything.
  • OWIN — One of the highlights of CodeMash for me was fellow Texan Ryan Riley‘s history of the OWIN specification set to the Pina Colada song. For as insane as the original version of OWIN was, I think we’ll end up being glad that the OWIN community persevered through all the silliness and drama on their list to deliver something that’s usable today. I do still think Ryan needs to add “mystery meat” into his description of OWIN though.
  • Functional Programming — In the past, overzealous FP advocates have generally annoyed me in the same way that I bet I annoyed folks online during the Extreme Programming hype back in the day. The last time I was at CodeMash I remember walking out of a talk that was ostensibly about FP because it was nothing but a very long-winded straw man argument against OOP done badly. This time around I enjoyed the couple of FP talks I took in and appreciated the candor from the FP guys I spoke with. I do wish the FP crowd would stop treating the FP vs. OOP or imperative comparison as a zero sum game and spend more time talking about the specific areas and problems where FP is valuable, and much less time bashing everything else.
