
Long Lived Codebases: A Crime Against Computer Science

All code samples are links to the exact lines of code in GitHub. Everything I’ve ever tried for embedding code into blog posts has been a PITA, so I’m just punting this time.

This continues the adaptation of my CodeMash 2015 talk about my experiences developing StructureMap over the past decade and change. This series started last week with The Challenges. This post is about the wretchedly poor original implementation of StructureMap’s “nested container” feature and how I re-architected the StructureMap internals in the 3.0 release to greatly improve the performance of this feature. I ran out of steam while writing this, so I ended up breaking this out into two posts.

A little background on nested containers

Sometime around early 2009 my team and I were building a small quasi-service bus for our system that processed messages sent to a queue. In our simple case, the processing of each message would be treated as a single logical transaction. At the time we were using NHibernate for all our persistence, so the ISession interface was our de facto unit of work. What we needed was for every object that would participate in the handling of a single message to use the right ISession for that message-handling transaction — even if the object was resolved lazily from the StructureMap container.

What we needed was a new kind of container scoping for a logical operation independent of thread or HttpContext or any of the older mechanisms that the IoC containers typically used at that time. Inspired by a similar feature in Windsor* (I think), I conceived of what is now the Nested Container feature in StructureMap that I quickly rolled into the 2.6 release just so we could use it in our homegrown service bus at work.
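To make the scoping concrete, here’s a minimal sketch of the per-message pattern under some assumptions: the MessageProcessor and IMessageHandler types are hypothetical stand-ins, while GetNestedContainer(), Inject(), and GetInstance() are the actual StructureMap calls.

    using NHibernate;
    using StructureMap;

    public interface IMessageHandler
    {
        void Handle(object message);
    }

    public class MessageProcessor
    {
        private readonly IContainer _container;
        private readonly ISessionFactory _sessionFactory;

        public MessageProcessor(IContainer container, ISessionFactory sessionFactory)
        {
            _container = container;
            _sessionFactory = sessionFactory;
        }

        public void Process(object message)
        {
            // One nested container per message = one logical transaction
            using (IContainer nested = _container.GetNestedContainer())
            using (ISession session = _sessionFactory.OpenSession())
            {
                // Anything resolved from 'nested' that wants an ISession gets
                // this one, even if it's resolved lazily mid-message
                nested.Inject(session);

                nested.GetInstance<IMessageHandler>().Handle(message);
            }
        }
    }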

From a functionality perspective, the nested container feature has been a complete success. It’s used by my own FubuMVC and FubuTransportation frameworks plus other development frameworks like MassTransit, NServiceBus, and even ASP.Net MVC and Web API through OSS adapters. That being said, the original implementation of nested containers in the 2.6.* versions was a mess that suffered from poor performance and usability bugs that took me years to address — so bad that the 3.0 release was largely wrapped around some very significant architectural changes specifically to improve the nested container feature.

* People sometimes get upset by the number of different IoC containers in .Net, but there is some real value in competition between different technical solutions to push and inspire each other. I’ve always thought that development tools in .Net would be significantly better overall if the wider .Net community would be more willing to adopt tools originating outside of Microsoft so that MS would be forced to compete for adoption. 

The Original Implementation

During my talk at CodeMash, I stated that I believe the original implementation of the nested container feature in StructureMap caused other developers more harm than any other thing I’ve ever done. So what was so bad about the original version?

There was a pretty nasty bug related to the scoping of singleton objects that wasn’t really addressed, outside of published workarounds, until the 3.0 release 3-4 years later. The biggest issues, though, were performance and thread contention at runtime.

One of the original requirements for the nested container was to enable users to override service registrations in a nested container without having any impact on the original, parent container.* For context, the acceptance test for this behavior demonstrates what I mean. To pull this requirement off, I made the fateful decision to make a complete copy of the parent’s internal configuration model to pass into the nested container.

To illustrate why this turned out to be such an awful approach, see the code for the method PipelineGraph.ToNestedGraph() in the 2.6 branch that’s used to create the isolated configuration model for a new nested container. In particular, see how that code makes deep, programmatic clones of several structures. See also the lock(this) statement that takes an exclusive lock on the parent object while the work inside the block runs (I had to lock around the dictionary being cloned so that nothing else could alter it while I was iterating through it).** Doing the deep clone of the configuration models is expensive, especially when StructureMap is used in bigger applications. Even worse, though, the shared lock meant that only one thread in the entire application could be creating a nested container at one time — which is a pretty big problem when you’re talking about a web application under significant load that wants to create an individual nested container for each unique HTTP request.

* FubuMVC exploits this ability to inject services that represent the current HTTP request into a nested container just before building the handlers for that HTTP request so that you can use constructor injection “all the way down.”

** Yes, I do buy that this is an example of where immutability can be valuable in concurrent code. Do also cut me a little bit of slack because this code was written long before .Net 4.0, the TPL, and the newer concurrent collection classes that came with it.
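If you’d rather not dig through the old branch, here’s a boiled-down caricature of the 2.6 approach. The types below are simplified stand-ins rather than the real StructureMap internals, but the deep clone and the shared lock are the shape of the problem:

    using System;
    using System.Collections.Generic;

    // Simplified stand-in for a service's registrations, policies, etc.
    public class InstanceFamily
    {
        public InstanceFamily DeepClone()
        {
            // imagine copying every registration, policy, and interceptor...
            return new InstanceFamily();
        }
    }

    public class PipelineGraph
    {
        private readonly Dictionary<Type, InstanceFamily> _families;

        public PipelineGraph(Dictionary<Type, InstanceFamily> families)
        {
            _families = families;
        }

        public PipelineGraph ToNestedGraph()
        {
            // The whole parent graph is locked during the copy, so only one
            // thread in the entire process can create a nested container at
            // a time -- and the clone itself costs O(number of registrations)
            lock (this)
            {
                var copy = new Dictionary<Type, InstanceFamily>();
                foreach (var pair in _families)
                {
                    copy.Add(pair.Key, pair.Value.DeepClone());
                }

                return new PipelineGraph(copy);
            }
        }
    }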

Re-architecting Nested Containers in 3.0 

It took about three years (and another year before a public release), but in the 3.0 release I was finally able to permanently fix (knock on wood!) the performance, thread contention, and scoping bugs related to the nested container feature. In my testing against one of our biggest codebases at work, I measured a two-order-of-magnitude improvement in the time it took to create a new nested container. I was also able to completely eliminate the thread contention issues.

How was I able to make those improvements? Here’s the new version of PipelineGraph.ToNestedGraph() as it exists in the 3.0 code today — note that all it does is create a few new objects and pass in references to some existing objects. No deep cloning, no crazy data shuffling, and certainly no shared lock.

The nested container now has its own configuration model for its overrides and a reference to its parent’s configuration model. In the new world order, the nested container fulfills requests by using a sort of chain of responsibility pattern internally to locate the right action. If you ask a nested container for a service, it will:

  1. Look in its own configuration to see if it has an explicit override for that service. If one exists, build from that configuration.
  2. If there is no explicit override, look in the parent’s configuration for that service. Assuming one is found, build from that configuration instead.

This is over-simplified of course, but that’s the gist of the new structure and design.
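Sketched out with the same kind of simplified stand-in types as before (again, not the actual 3.0 internals), the new shape looks roughly like this: creating a nested graph is one small allocation, and resolution walks the parent chain.

    using System;
    using System.Collections.Concurrent;

    // Simplified stand-in for a service's build configuration
    public class Instance { }

    public class PipelineGraph
    {
        private readonly ConcurrentDictionary<Type, Instance> _instances =
            new ConcurrentDictionary<Type, Instance>();

        private readonly PipelineGraph _parent;

        public PipelineGraph(PipelineGraph parent = null)
        {
            _parent = parent;
        }

        // Creating a nested graph is now just one cheap object --
        // no cloning and no locking
        public PipelineGraph ToNestedGraph()
        {
            return new PipelineGraph(this);
        }

        public void Register(Type serviceType, Instance instance)
        {
            _instances[serviceType] = instance;
        }

        // Chain of responsibility: check the local overrides first,
        // then fall back to the parent's configuration
        public Instance FindInstance(Type serviceType)
        {
            Instance instance;
            if (_instances.TryGetValue(serviceType, out instance))
            {
                return instance;
            }

            return _parent == null ? null : _parent.FindInstance(serviceType);
        }
    }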

I’ve had to revisit several of my OSS infrastructure projects lately (StructureMap’s nested container and shared locking problems, FubuMVC’s startup time, and now StoryTeller‘s throughput speed) to address performance issues. Depending upon my ambition level, I may write a blog post on those experiences.

How and Why?

So what factors might have led me to blunder so badly with the first implementation? What are the signs that I didn’t pick up on at the time that should have told me to go a different way than the flawed original approach? Why did it take me so long to get to a better state? How did I transform the StructureMap code into a very different internal structure to enable the better nested container performance? Why did I chicken out on a complete rewrite of StructureMap a few years back? In the next post I’ll attempt to answer those questions…

Ok, to be perfectly honest, I just ran out of steam and wanna hit “publish.” Till next time, laters.

Long Lived Codebases: The Challenges

I did a talk at CodeMash 2015 called “Lessons Learned from a Long Lived Codebase” that I thought went very well and I promised to turn into a series of blog posts. I’m not exactly sure how many posts it’s going to be yet, but I’m going to try to get them all out by the end of January. This is the first of maybe 4-5 posts on my experience evolving and supporting the StructureMap codebase over the past 11-12 years.

Some Background

In 2002 the big corporate IT shop I was working in underwent a massive “Dilbert-esque” reorganization that effectively trapped me in a non-coding architect role that I hated. I could claim 3-4 years of development experience and had some significant technical successes under my belt in that short time, but I’d mostly worked with the old COM-based Windows DNA platform (VB6, MTS, ADO, MSXML, ASP) and Oracle technologies right as J2EE and the forthcoming .Net framework seemed certain to dominate enterprise software development for the foreseeable future.

I was afraid that I was in danger of being made obsolete in my new role. I looked for some kind of project I could do out in the open that I could use to both level up on the newer technologies and prove to potential employers that “yes, I can code.” Being a pretty heavy duty relational database kinda guy back then, I decided that I was going to build the greatest ORM tool the world had ever seen on the new .Net platform. I was going to call it “StructureMap” to reflect its purpose of mapping the database to object structures. I read white papers, doodled UML diagrams like crazy, and finally started writing some code — but got bogged down trying to write an over-engineered configuration and modularity layer that would effectively allow you to configure object graphs in Xml. No matter, I managed to land a job with ThoughtWorks (TW) and off I went to be a real developer again.

During the short time that I worked at ThoughtWorks, Martin Fowler published his paper about Dependency Injection and Inversion of Control Containers and other folks at the company built an IoC container in Java called PicoContainer that was getting some buzz on internal message boards. I came to TW in hopes of being one of the cool kids too, so I dusted off the configuration code for my abandoned ORM tool and transformed it into an IoC library for .Net during my weekly flights between Austin and Chicago. StructureMap was put into a production application in early 2004 and publicly released on SourceForge in June of 2004 as the very first production ready IoC tool on the .Net platform (yes, StructureMap is actually older than Windsor or Spring.Net even though they were much better known for many years).

Flash forward to today and there’s something like two dozen OSS IoC containers for .Net (all claiming to be a special snowflake that’s easier to use than the others while being mostly about the same as the others), at least three (Unity, MEF, and the original ObjectBuilder) from Microsoft itself with yet another brand new one coming in the vNext platform. I’m still working with and on StructureMap all these years later after the very substantial improvements for 3.0 last year — but at this point very little remains unchanged from the early code. I’m not going to waste your time trying to sell you on StructureMap, especially since I’m going to spend so much time talking about the mistakes I’ve made during its development. This series is about the journey, not the tool itself.

What’s Changed around Me

Being 11 years old and counting, StructureMap has gone through a lot of churn as the technologies have changed and approaches have gone in and out of favor. If you maintain a big codebase over time, you’re very likely going to have to migrate it to newer versions of your dependencies, use completely different dependencies, or you’ll want to take advantage of newer programming language features. In no particular order:

  • StructureMap was originally written against .Net 1.1, but at the time of this post targets .Net 4.0 with the PCL compliance profile.
    • Newer elements of the .Net runtime like Task and Lazy<T> have simplified the code internals.
    • Lambdas as introduced in .Net 3.5 made a tremendous difference in the coding internals and had a big impact on the usage of the tool itself.
    • As I’ll discuss in a later post, the introduction of generics support into StructureMap 2.0 was like the world’s brightest spotlight shining on all the structural mistakes I made in the initial code structure of early StructureMap, but I’ll still claim that the introduction of generic types has made for huge improvements in StructureMap’s usability — and it’s also one of the main reasons why I think that the IoC tools in .Net are generally more usable than those in Java or Scala.
  • The build automation was originally done with NAnt, NUnit, and NMock. As my tolerance for Xml and coding ceremony decreased, StructureMap moved to using Rake and RhinoMocks. For various reasons, I’m looking to change the automation tooling yet again to modernize the StructureMap development experience.
  • StructureMap was originally hosted on SourceForge with Subversion source control. Releases were done in the byzantine fashion that SourceForge required way back then. Today, StructureMap is hosted on GitHub and distributed as Nuget packages. Nuget packages are generated as an artifact of each continuous integration build and manually promoted to Nuget.org whenever it’s time to do a public release. Nuget is an obvious improvement in distribution over manually created zip files. It is my opinion that GitHub is the single best thing to ever happen for Open Source Software development. StructureMap has received vastly more community contribution since moving to GitHub. I’m on record as being critical of the .Net community for being too passive and not participating in community tooling, so I’m pleasantly surprised with how much help I’ve received from StructureMap users since the 3.0 release last year to fix bugs and fill in usability gaps.
  • The usage patterns and the architectures that folks build using StructureMap have changed. In a later post I’ll do a deep dive on the evolution of the nested container feature.
  • Developer aesthetics and preferences have changed too; more on that in a later post.


Other People

Let’s face it, you and I are perfectly fine, but the “other” developers are the problem. In the particular case of a widely used library, you frequently find out that other developers use your tool in ways that you did not expect or anticipate. Frameworks that abstract the IoC container with some sort of adapter library have been some of the worst offenders in this regard.

The feedback I’ve gotten from user problems has led to many changes over the years:

  • Entirely new features. The interception capabilities, for example, were originally added to support AOP scenarios that I don’t generally use myself.
  • Changing the API to improve usability when the existing wording turns out to be confusing
  • Lots and lots of work tweaking the internals of StructureMap as users describe architectural strategies that I would never think of, but do turn out to be useful — usually, but not always, involving open generic types in some fashion
  • New conventions and policies to remove repetitive code in the tool usage
  • Additional diagnostics to explain the outcome of the new conventions and policies from above
  • Adding more defensive programming checks to find potential problems faster. My attitude toward defensive programming is much more positive after supporting StructureMap over the years. This probably applies more to configuration-intensive tools like, say, an IoC tool.
  • A lot of work to improve exception messages (more on this later maybe)


One thing that still needs to happen is for me to publish and maintain best-practice recommendations for StructureMap. I have been upset with the developers of a popular .Net OSS tool who did, in my opinion, a wretched job of integrating StructureMap in their adapter library (to the point where I advise users of that framework to adopt a different IoC tool). Until I actually manage to publish the best-practice advice that would avoid the very problems they caused in their StructureMap usage, those problems are probably on me. Weaning users off of using StructureMap as a static service locator, and off being a little too extreme in applying a certain hexagonal architecture style, have been constant problems on the user group over the years.

I’m not sure why this is so, but I’ve learned over the years that the more vitriolic a user is being toward you online when they’re having trouble with your tool, the more likely it is that they themselves are just doing something very stupid that’s not necessarily a poor reflection on your tool. If you ever publish an OSS tool, keep that in mind before you make the mistake of opening a column in your Twitter client just to spot references to your project or a keyword search in StackOverflow. I’ve also learned that users who have uncovered very real problems in StructureMap can be reasonable and even helpful if you engage them as collaborators in fixing the issue instead of being defensive. As I said earlier about the introduction of GitHub, I have routinely gotten much more assistance from StructureMap users in reproducing, diagnosing, and fixing problems in StructureMap over the past year than I ever had before.


Pull, not Push for New Features

In early 2008 I was preparing the grand StructureMap 2.5 release as the purported “Python 3000” release that was going to fix all the usability and performance issues in StructureMap once and for all time (Jimmy Bogard dubbed it the Duke Nukem Forever release too, but the 3.0 release took even longer;)). At the same time, Microsoft was gearing up for not one, but two new IoC tools (Unity from P&P and MEF from a different team). I swore that I wasn’t going down without a fight as Microsoft stomped all over my OSS tool, so I kicked into high gear and started stuffing StructureMap with new features and usability improvements. Those new things roughly fell into two piles:

  • Features or usability improvements I made based on my experience with using StructureMap on real projects, which I knew would remove some friction from day-to-day usage. These features introduced in the 2.5 release have largely survived until today, and I’d declare that many of them were successful.
  • Things that I just thought would be cool, but which I had no immediate usage in my own work. You’ve already called it, much of this work was unsuccessful and later removed because it was either in the way, confusing to use, easily done in other ways, or most especially, a pain in the neck for me to support online because it wasn’t well thought out in the first place.

You have to understand that any feature you introduce is effectively inventory you have to support, document, and keep from breaking in future work. To reaffirm one of the things that the Lean Programming people have told us for years, it’s better to “pull” new features into your tool based on a demonstrated need and usage than it is to “push” a newly conceived feature in the hope that someone might find it useful later.


Yet to come…

I tend to struggle to complete these kinds of blog series, but I do have the presentation and all of the code samples, so maybe I’ll pull it off. I think the candidates for the following posts are something like:

  • A short discussion on backward compatibility
  • My documentation travails and how I’m trying to fix that
  • “Crimes against Computer Science” — the story of the nested container feature, how it went badly at first, and what I learned while fixing it in 3.0
  • “The Great Refactoring of Aught Eight”
  • API Usage Now and Then
  • Diagnostics and Exceptions

Thoughts from CodeMash 2015

I had a fantastic time at CodeMash last week and I’d like to thank all of the organizers for the hard work they do making one of the best development community events happen each year. I was happy with how my talk went and I will get around to what looks like a 3 part blog series adaptation of it later this week after I catch up on other things. I had a much lighter speaking load than I’ve had in the past, leading to many more opportunities and energy to just talk to the other developers there. As best I can recall, here’s a smattering of the basic themes from those discussions:

  • Microservices — I lived through DCOM and all the sheer lunacy of the first wave of SOA euphoria, making me very leery of the whole microservices concept. I did speak to a couple of people that I respect who were much more enthusiastic about microservices than I am.
  • Roslyn — I feel bad for saying this, but I’m tempering my hopes for Roslyn right now — but that’s largely a matter of me having had outsize expectations and hopes for Roslyn. I’m disappointed by the exclusion of runtime metaprogramming for now and I think the earlier hype about how much faster the Roslyn compiler would be compared to the existing CSC compiler might have been overstated. The improved ability to introspect your code with Roslyn is pretty sweet though.
  • RavenDb — I still love RavenDb conceptually and it’s still my favorite development-time database experience, but the quality issues and the poor DevOps tooling make it a borderline liability in production. From my conversations with other RavenDb users at CodeMash, this seems to be a common opinion and experience. I hate to say it, but I’m in favor of phasing out RavenDb at work in the next couple years.
  • Postgresql — Postgres has a bit of buzz right now and I’ve been very happy with my limited usage of it so far. I talked to several people who were interested in using postgres as a pseudo document database. I’m planning to do a lot more side work with postgresql in the coming year to see how easy it would be to use it as more of a document database and add a .Net client to the event store implementation I was building for Node.js.
  • Programming Languages — The trend that I see is a blurring of the line between static and dynamic typing. Static languages are getting better and better type inference and dynamic languages keep getting more optional type declarations. I think this trend can only help developers over time. I do wish I hadn’t overslept the introduction to Rust workshop, but you can’t do everything.
  • OWIN — One of the highlights of CodeMash for me was fellow Texan Ryan Riley‘s history of the OWIN specification set to the Piña Colada song. For as insane as the original version of OWIN was, I think we’ll end up being glad that the OWIN community persevered through all the silliness and drama on their list to deliver something that’s usable today. I do still think Ryan needs to add “mystery meat” into his description of OWIN though.
  • Functional Programming — In the past, overzealous FP advocates have generally annoyed me the same way that I bet I annoyed folks online during the Extreme Programming hype back in the day. The last time I was at CodeMash I remember walking out of a talk that was ostensibly about FP because it was nothing but a very long-winded straw man argument against OOP done badly. This time around I enjoyed the couple of FP talks I took in and appreciated the candor from the FP guys I spoke with. I do wish the FP guys would stop treating the FP vs. OOP or imperative development comparison as a zero-sum game and spend more time talking about the specific areas and problems where FP is valuable and much less time bashing everything else.


My Talk on Long-Lived Codebases at Codemash

When I go to conferences I see a lot of talks that follow the general theme of I/my team/our product/our process/this tool is wonderful. In contrast to that, I’m giving a talk Friday at Codemash entitled “Lessons Learned from a Long-Lived Codebase” about my experiences with the evolution of the StructureMap codebase from its original production usage in 2004 all the way up to the very different tool that it is today. Many weaknesses in your code structure and architecture don’t become apparent or even harmful until you either need to make large functional changes you didn’t foresee or the code gets used in far higher volume systems. I’m going to demonstrate some of the self-inflicted problems I’ve incurred over the years in StructureMap and what I had to do to ameliorate those issues, in the hopes of helping others dodge some of that pain.

Because StructureMap is such an old tool that’s been constantly updated, it’s also a great microcosm of how our general attitudes toward software development, designing APIs, and how we expect our tools to work have changed in the past decade, and I’ll talk about some of that.

Specifically, I’m covering:

  • Why duplication of even innocuous logic can be such a problem for changing the system later
  • Abstractions should probably be based strictly on roles and variations and created neither too early nor too late
  • Why your API should be expressed in terms of your user’s needs rather than in your own tool’s internals
  • Trying to pick names for concepts in your tool that are not confusing to your users
  • For the usage of any development tool that’s highly configurable there’s a tension between making the user be very explicit about what they want to do versus providing a much more concise or conventional usage that is potentially too “magical” for many users.
  • How the diagnostics and exception messages in StructureMap have evolved in continuous attempts to reduce the cost of supporting StructureMap users;)
  • Using “living documentation” to try to keep your basic documentation reasonably up to date
  • Organizing whatever documentation you do have in regards to what the users are trying to do rather than being a dry restatement of the implementation details
  • The unfortunate tension between backward compatibility and working to improve your tool
  • Automated testing in the context of a long lived codebase. What kinds of tests allow you to improve the internals of your code over time, where I still see value in TDD for fine grained design, and my opinions about how to best write acceptance tests that came out of my StructureMap work over the years
  • Lastly, the conversation about automated testing is a great segue into the great “Rewrite vs. Improve In-Place” discussion I faced before the big StructureMap 3.0 release earlier this year.

This is a big update to a talk I did at QCon San Francisco in 2008. Turns out there were plenty more lessons to learn since then;)

I’d be willing to turn this talk into a series of more concrete blog posts working in the before and after code samples if I get at least 5-6 comments here from people that would be interested in that kind of thing.

Some Thoughts on the new .Net

The big announcements on the new ASP.Net runtime were made a couple weeks ago, but I write very slowly these days, so here’s my delayed reaction.


Less Friction and a Faster Feedback Cycle Maybe? 

I’ve started to associate .Net “classic” with seemingly constant aggravations like strong naming conflicts, csproj file merge hell, slow compilation, slow nuget restores, and how absurdly heavyweight and bloated Visual Studio.Net has become over the years. Plenty of it was self-inflicted, but I felt at the end that way too much of our effort in FubuMVC development was really about working around limitations in the .Net ecosystem or ASP.Net itself (the existing System.Web infrastructure is awful for modularity in my opinion and clearly shows its age, and the MSBuild model is just flat out nuts).

In contrast, my brief exposure to Node.js development has been a breath of fresh air. My preferred workflow is to just start mocha in “watch” mode in a terminal window with Growl notifications on and continuously code in Sublime. The feedback cycle from making a change in code to seeing test results is astonishingly fast compared to .Net (admittedly less awesome when using lots of generator functions, using WebPack to bundle up React.js JSX files, or running tests in Karma) and the editor is snappy.

ASP.Net vNext is very clearly a reaction to the improved development workflows you can get with Node.js or Golang. In particular, I’m positive or at least hopeful that:

  • The Roslyn compiler is fast enough that auto-test tools in .Net “feel” much snappier
  • The new package resolution functionality is much faster than the current Nuget
  • The new runtime is much more command line friendly. I do particularly like how easy they’ve made it to expose commands from Nuget packages in the project.json declaration. That was a big irritation to me in the existing Nuget that we eventually solved by moving to gems instead for CLI tools.
  • The new project.json model is so much cleaner and simpler than the existing MSBuild schema. I hope that translates to easier tooling for scaffolding and much less merge hell than .Net classic.
  • I’m impressed with how much activity is happening around the OmniSharp project, because I’d like to be productive in C# without having to fire up Visual Studio.Net at all — or at least only when I want ReSharper to do a lot of heavy refactoring work.
  • The abomination that was strong naming is gone


The Spirit of Openness

I like that the ASP.Net team has posted their code for their next generation work on GitHub. The best part of that for me is that they moved the code over before it was completely baked and you can actually see what they’re up to. Even crazier to me is that the ASP.Net team seems to actually be listening to the feedback (I’ve felt in the past that the ASP.Net team hasn’t been all that receptive to feedback).

Compare that to how the tool that became Nuget was developed. Nuget was largely conceived and built in secret at the same time that a pair of community efforts were trying to build out a new package management strategy for .Net (Seb’s OpenWrap and the Nu project to adapt RubyGems to .Net that I was somewhat involved in). Contrary to the pleasant sounding legend that’s grown up around it, Nuget is not a continuation of a community project. Nuget partially took over the name and asked the guys doing Nu to just stop.

The Nuget episode is irritating to me, not just because of how they de facto killed off the community efforts, but because I think they made some design decisions that caused some severe problems for usage (I described the problems I saw and my recommendations for Nuget earlier this year). Maybe, just maybe, the usage and performance problems could have been solved by more contributions, awareness, and feedback from the larger community if Nuget had been built in the open.


Kill the Insider Groups

Most of us didn’t know anything about Nuget or the new ASP.Net vNext platform before it was announced (I did, but not through official channels), but Microsoft has several official “Insider” groups that get much more advance warning and theoretically provide feedback to Microsoft on proposals and early work. I’d like to propose right now that these self-selected “insider” groups be disbanded and much more of these conversations happen in more public forums. My reasoning is pretty simple: there are far more strong developers outside of Redmond and that list than there are on the official list. My fear is that those types of insider, true believer clubs can too easily turn into echo chambers with little exposure to the world outside of .Net — and I feel that my limited involvement with that list frankly confirms that suspicion.

I was not a member of the ASP.Net Insider list until somewhat recently. If I had been aware of the plans for “Project K” earlier, I would have shut down FubuMVC development much earlier than I did because so much of the improvements in ASP.Net vNext make some very large development efforts we did in FubuMVC irrelevant or unnecessary.


OSS and the Community Thing

I’m guilty of spreading the “.Net community sucks” meme, but that’s not precisely true; the .Net community is just different from other communities. What makes .Net so much different from many other development communities is how the majority of it revolves around what one particular company is doing.

So much of .Net is open source now and they even take contributions. Awesome, great, but my very first reaction was that it doesn’t matter much because the .Net community as a whole isn’t as participatory as other communities and that would have to change before ASP.Net vNext being OSS matters. It’ll be interesting to me to see if that changes over time.

With few exceptions (NancyFx comes to mind), OSS projects in .Net struggle to get an audience and build a community. I just don’t think there are enough venues for community to coalesce around OSS efforts in .Net, and I think that’s the reason why we have so much duplication of OSS projects without any one alternative getting much involvement beyond the initial authors. I do suspect that the .Net OSS community is much better in Europe now than in North America, but that might just be me having become such a homebody over the past 5 years.


Mono is More Viable and Docker(!)

I thought that trying to support running FubuMVC on Mono was a borderline disaster, to the point where I didn’t want to touch Mono again afterward (it’s a longer conversation, but beyond the normal aggravations like the file system differences between Windows and *nix we hit several Mono bugs). With the recent announcements about open sourcing so much more of the .Net runtime, the PCL standardization, and the better collaboration between Microsoft and Xamarin on display in Miguel de Icaza’s blogpost on the .Net announcements, I think that building applications on server side Mono is going to be much more viable.

Like just about every development shop around, we’re very interested in using Docker for deployments. The ability to eventually deploy our existing .Net applications as Docker containers could be a very big win for us.


We wanna code on the Mac

I resisted switching over to Macs for years, even when so many of my friends were doing .Net development on their Macs, just because I was too cheap. After a couple years now of using a Mac, I’d really prefer to stay on that side of things and hopefully give my Windows VM much more time off. Mac OS being a first class citizen for the new .Net and the progress on the OmniSharp tools for Sublime or MacVim is going to make the new ASP.Net vNext runtime a much easier sell in my shop.


I think we might just head back to the new .Net

We’ve been working toward eventually moving off of .Net and I have been trying to concentrate my side work efforts on Node.js related things, but the truth of the matter is that we have a huge amount of existing code on .Net and a heavy investment in .Net OSS tools that we just can’t make go away overnight. If the feedback cycle is really much better in the new runtime, we can code on Macs, and use Docker, our thinking is that we’re still better off in the end to just adopt the better .Net instead of rewriting the existing systems on another platform.


vNext & Me

I have no intentions of restarting FubuMVC itself, but we’ve put together a concept for a partial reboot called “Jasper” that may or may not become reality someday. I do want to move our FubuTransportation service bus to vNext next year. At work we *might* try to extend the new version of ASP.Net MVC to act more like FubuMVC to make migrating our existing applications easier if we don’t try to do the Jasper idea. StructureMap 3 is already PCL compliant, so it *should* be easy to get up and going on vNext.

In the meantime, I think I’m going to start a blog series just reviewing bits of the code in the ASP.Net GitHub organizations to get myself more familiar with the new tooling.


StructureMap 3 Documentation

Shockingly, my efforts to complete the documentation on StructureMap 3 have taken much, much longer than I had hoped — but there’s some real progress worth talking about. This time around, I’m adopting the idea of “living documentation” where the code samples are taken directly out of the code in the main GitHub repository at publishing time so that the documentation never gets out of sync with the code. For the most part, I’m using unit test code to demonstrate API usage in the documentation with the thinking there that the resulting documentation is much less ambiguous and again, cannot be out of sync with how the code actually works as long as those unit tests are passing.

If you’re curious, I’ve been using our (now abandoned) FubuDocs project with FubuMVC.CodeSnippets to author and publish the documentation. All the documentation is in GitHub in the StructureMap.Docs project (and yes, I certainly take pull requests).

The new documentation has moved to GitHub pages hosting at http://structuremap.github.io. I’m not sure what’s going to happen to the old structuremap.net website yet.


I’m committed to finishing the documentation, but I’m obviously not sure when it will be complete. I’d like to say it’s “done” before starting any significant new OSS project and I’m using that to force myself to finally finish;)

How We Do Strong Typed Configuration

TL;DR: I’ve used a form of “strong typed configuration” for the past 5-6 years that I think has some big advantages over the traditional approach to configuration in .Net. I still like this approach and I’m seeing something similar showing up in ASP.Net vNext as well. I’m NOT trying to sell you on any particular tool here, just the technique and concepts.

What’s Wrong with Traditional .Net Configuration?

About a decade ago I came late into a project where we were retrofitting some sort of new security framework onto a very large .Net codebase.* The codebase had no significant automated test coverage, so before we tried to monkey around with its infrastructure, I volunteered to create a set of characterization tests**. My first task was to stand up a big chunk of the application architecture on my own box with a local sql server database. My first and biggest challenge was dealing with all the configuration needs of that particular architecture (I’m wanting to say it was >75 different items in the appSettings collection).

As I recall, the specific problems with configuration were:

  1. It was difficult to easily know what configuration items any subset of the code (assemblies, classes, or subsystems) needed in order to run without painstakingly tracing the code
  2. It was somewhat difficult to understand how items in very large configuration files were consumed by the running code. Better naming conventions probably would have helped.
  3. There was no way to define configuration items on the fly in code. The only option was to change entries in the giant web.config Xml file, because the code that used the old System.Configuration namespace “pulled” its configuration items whenever it needed them. What I wanted was to “push” configuration to the code so that I could change database connections or file paths in test scenarios, or really any time I just needed to repurpose the code. In a more general sense, I called this the Push, Don’t Pull rule in my CodeBetter days.

From that moment on, I’ve been a big advocate of approaches that make it much easier to trace configuration items to the code that consumes them, and that make application code somehow declare what configuration it needs. The sketch below shows the contrast.
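To make the pull/push contrast concrete, here’s a minimal before-and-after sketch. The exporter classes and the “exportPath” key are hypothetical examples; ConfigurationManager is the standard System.Configuration API.

    using System.Configuration;

    // "Pull": the class reaches out to the config file itself, so you
    // can't repurpose it in a test without editing the Xml configuration
    public class PullingDocumentExporter
    {
        public void Export()
        {
            var path = ConfigurationManager.AppSettings["exportPath"];
            // ...write files to 'path'
        }
    }

    // "Push": the configuration is handed to the class, so a test or any
    // other caller can construct it with whatever values it needs
    public class ExportSettings
    {
        public string ExportPath { get; set; }
    }

    public class PushedDocumentExporter
    {
        private readonly ExportSettings _settings;

        public PushedDocumentExporter(ExportSettings settings)
        {
            _settings = settings;
        }

        public void Export()
        {
            // ...write files to _settings.ExportPath
        }
    }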


Strong Typed Configuration with “Settings” Classes 

For the past 5-6 years I’ve used an approach where configuration items like file paths, switches, URLs, or connection information are modeled as simple POCO classes suffixed with “Settings.” As an example, in the FubuPersistence package we use to integrate RavenDb with FubuMVC, we have a simple class called RavenDbSettings that holds the basic information you would need to connect to a RavenDb database, partially shown below:

    public class RavenDbSettings
    {
        public string DataDirectory { get; set; }
        public bool RunInMemory { get; set; }
        public string Url { get; set; }
        public bool UseEmbeddedHttpServer { get; set; }


        [ConnectionString]
        public string ConnectionString { get; set; }

        // And some methods to create in-memory datastores
        // or connect to the specified external datastore
    }

Setting aside how that object is built up for the moment, the DocumentStoreBuilder class that needs the configuration above just gets this object through simple constructor injection like so: new DocumentStoreBuilder(new RavenDbSettings{}, ...);. The advantages of this approach for me are:

  • The consumer of the configuration information is no longer coupled to how that information is resolved or stored in any way. As long as it’s given the RavenDbSettings object through its constructor, DocumentStoreBuilder can happily go about its business.
  • I’m a big fan of using constructor injection as a way to create traceability and insight into the dependencies of a class, and injecting the configuration makes the dependency on configuration from a class much more transparent than the older “pull” forms of configuration.
  • I think it’s easier to trace back and forth between the configuration items and the code that depends on that configuration. I also feel like it makes the code “declare” what configuration items it needs through the signatures of the Settings classes.


Serving up the Settings with an IoC Container

I’m sure you’ve already guessed that we just use StructureMap to inject the Settings objects into constructor functions. I know that many of you are going to have a visceral reaction to the usage of an IoC container, and while I actually do respect that opinion, it’s worked out very well for us in practice. Using StructureMap (I think most of the other IoC containers could do this as well) we get a couple big benefits in regards to default configuration and the ability to swap out configuration at runtime (mostly for testing).

Since the Settings classes are generally concrete classes with no-argument constructors, StructureMap can happily build them out for you even if StructureMap has no explicit registration for that type. That means you can forgo any external configuration or StructureMap configuration and your code can still work, as long as the default values of the Settings class are useful. To use the example of RavenDbSettings from the previous section, calling new RavenDbSettings() creates a configuration that will connect to a new embedded RavenDb database that stores its data to the file system in a folder called /data parallel to the project directory (you can see the code here).
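In code, that zero-registration behavior looks something like this sketch (plain StructureMap usage against the RavenDbSettings class above; nothing is registered ahead of time):

    using StructureMap;

    public static class SettingsExample
    {
        public static RavenDbSettings ResolveWithDefaults()
        {
            var container = new Container();

            // There's no registration for RavenDbSettings anywhere.
            // StructureMap sees a concrete class with a no-argument
            // constructor and just builds it, yielding the default
            // embedded-database configuration
            return container.GetInstance<RavenDbSettings>();
        }
    }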

The result of the design above is that a FubuMVC/FubuTransportation application was completely connected to a working RavenDb database by simply installing the FubuMVC.RavenDb nuget with zero additional configuration.

I demoed that several times at conferences last year and the audiences seemed to be very unimpressed and uninterested. Either that’s not nearly as impressive as I thought it was, it’s too much magic, I’m not a good presenter, or they don’t remember what a PITA it used to be just to install and configure everything you needed to get a blank development database going. I still thought it was cool.

The other huge advantage to using an IoC container to deliver all the configuration to consumers is how easy that makes it to swap out configuration at runtime. Again going to the RavenDbSettings example, we can build out our entire application and swap out the RavenDb connection at will without digging into Xml or Json files of any kind. The main usage has been in testing to get a clean database per test when we do end to end testing, but it’s also been helpful in other ways.
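Here’s a small sketch of that test-time swap, using StructureMap’s Inject() to override what the container hands out from then on:

    using StructureMap;

    public static class TestHarness
    {
        // Point the application at an in-memory RavenDb store for one
        // test run without touching any Xml or Json configuration
        public static void UseCleanDatabase(IContainer application)
        {
            application.Inject(new RavenDbSettings
            {
                RunInMemory = true
            });
        }
    }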


So where does the Settings data come from?

Alright, so the actual data has to come from somewhere outside the codebase at some point (like every developer of my age, I have a couple of good war stories about development teams hard-coding database connection strings directly into compiled code). We generally put the raw data into the normal appSettings key/value pairs with the naming convention “[SettingsClassName].[PropertyName].” The first time a Settings object is needed within StructureMap, we read the raw key/value data from the configuration file and use FubuCore’s model binding support to build the object, doing all the type coercion necessary along the way. An early version of this approach was described by my former colleague Josh Flanagan way back in 2009. The actual mechanics are in the FubuMVC.StructureMap code in the main fubumvc repository.
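FubuCore’s model binding is far more capable than this, but a toy version of the mechanics for flat properties might look like the following sketch (the real implementation also handles deep objects, enumerables, and pluggable data sources):

    using System.ComponentModel;
    using System.Configuration;

    public static class NaiveSettingsBinder
    {
        // Builds a Settings object from appSettings entries named
        // "[SettingsClassName].[PropertyName]"
        public static T Build<T>() where T : new()
        {
            var settings = new T();
            var prefix = typeof(T).Name + ".";

            foreach (var property in typeof(T).GetProperties())
            {
                var raw = ConfigurationManager.AppSettings[prefix + property.Name];
                if (raw == null) continue;

                // Coerce the raw string into the property's declared type
                var converter = TypeDescriptor.GetConverter(property.PropertyType);
                property.SetValue(settings, converter.ConvertFromString(raw), null);
            }

            return settings;
        }
    }

With that convention, an appSettings entry with the key “RavenDbSettings.Url” would populate the Url property on RavenDbSettings.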

Some other points:

  • The data source for the model binding in this configuration setup was pluggable, so you could use external files or anything else that could be exposed as key/value pairs. We generally just use the appSettings support now, but on a previous project we were able to centralize the configuration files to a common location for several related processes.
  • The model binding is flexible enough to support deep objects, enumerable properties, and just about anything you would need to use in your Settings objects
  • We also supported a model where you could combine key/value information from multiple sources and layer the data in precedence order to enable overrides to the basic configuration (see the sketch after this list). My goal with that support was to let customer- or environment-specific configuration live in small override files without having to duplicate all the boilerplate default configuration. In this setup, when the Settings objects were bound to the raw data, profile- or environment-specific information simply took a higher precedence than the default configuration.
  • As of FubuMVC 2.0, if you’re using the StructureMap integration, you no longer have to do anything to setup this settings provider infrastructure. If StructureMap encounters an unknown dependency that’s a concrete type suffixed by “Settings,” it will try to resolve it from the model binding via a custom StructureMap policy.
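The layering described above boils down to a precedence-ordered merge of key/value sources; a simplified sketch of the idea:

    using System.Collections.Generic;

    public static class LayeredSettingsData
    {
        // Merge key/value sources ordered from lowest to highest precedence;
        // a later source (say, an environment-specific file) overrides an
        // earlier one (the default configuration) key by key
        public static IDictionary<string, string> Merge(
            params IDictionary<string, string>[] sourcesLowToHigh)
        {
            var merged = new Dictionary<string, string>();

            foreach (var source in sourcesLowToHigh)
            {
                foreach (var pair in source)
                {
                    merged[pair.Key] = pair.Value; // last writer wins
                }
            }

            return merged;
        }
    }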


Programmatic Configuration in FubuMVC 

We also used the “Settings” configuration idea to programmatically specify configuration for features inside the application configuration by applying alterations to a Settings object, like this code from one of our active projects:

// Sets up a custom authorization rule on any diagnostic 
// pages
AlterSettings<DiagnosticsSettings>(x => {
	x.AuthorizationRights.Add(new AuthorizationCheckPolicy<InternalUserPolicy>());
});

// Directs FubuMVC to only look for client assets
// in the /public folder
AlterSettings<AssetSettings>(x =>
{
	x.Mode = SearchMode.PublicFolderOnly;
});

The DiagnosticsSettings and AssetSettings objects used above are injected into the application container as the application bootstraps, but only after the alterations shown above are applied. Behind the scenes, FubuMVC will first resolve the objects using any data found in the appSettings key/value pairs, then apply the alteration overrides in the code above. Using the alteration lambdas instead of just injecting the Settings objects directly allowed us to also embed settings overrides in external plugins, but ensure that the main application overrides always “win” in the case of conflicts.
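Mechanically, the idea reduces to a queue of Action&lt;T&gt; delegates applied after the Settings object has been bound from the raw data. This stripped-down version is an illustration, not the actual FubuMVC code:

    using System;
    using System.Collections.Generic;

    public class SettingsAlterations<T> where T : new()
    {
        private readonly List<Action<T>> _alterations = new List<Action<T>>();

        // Called during application configuration, possibly from plugins;
        // registration order decides who "wins" on conflicts, so the main
        // application's alterations are applied last
        public void Alter(Action<T> alteration)
        {
            _alterations.Add(alteration);
        }

        // Called once at bootstrapping time, after the Settings object has
        // been bound from appSettings data and before it goes into the
        // application container
        public T Apply(T settings)
        {
            foreach (var alteration in _alterations)
            {
                alteration(settings);
            }

            return settings;
        }
    }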

I’m still happy with how this has turned out in real usage and I’ve since noticed that ASP.Net vNext uses a remarkably similar mechanism to configure options in their IApplicationBuilder scheme (think SetupOptions<MvcOptions>(Action<MvcOptions>)). I’m interested to see if the ASP.Net team will try to exploit that capability to provide much better modularity than the older ASP.Net MVC & Web API frameworks.



* That short project was probably one of the strongest teams I’ve ever been a part of in terms of talented individuals, but it also spawned several of my favorite development horror stories (massive stored procedures, non-coding architects, the centralized architect team anti-pattern, harmful coding standards, you name it). In my career I’ve strangely seen little correlation between the technical strength of the development team (beyond basic competence anyway) and the resulting success of the project. Environment, the support or lack thereof from the business, and the simple wisdom of doing the project in the first place seem to be much more important.

** As an aside, that effort to create characterization tests as a crude regression suite did pay off, and that test suite did find some regression errors after we started making the bigger changes. I think the Feathers playbook for legacy systems, where I got the inspiration for those characterization tests, is still very relevant some 10+ years later.
