FubuMVC Lessons Learned — Misadventures in DevOps with Ripple, Nuget, TeamCity, and Gems

tl;dr: Large .Net codebases are a challenge, strong naming in .Net is awful, Nuget out of the box breaks down when things are changing rapidly, and the csproj file format is problematic — we remedied some, though certainly not all, of these issues with a tool we built called Ripple.

At this point, we have collectively decided that FubuMVC did absolutely nothing right in regards to documentation, samples, getting started tutorials, and generally making it easier for new users to get going, so I’m going to automatically delete any comment deriding us yet again on these topics.  We did, however, do a number of things in the purely technical realm that might still be of use to other folks and that’s what I’m concentrating on throughout the rest of these posts.

I got to speak at the Monkeyspace conference last year (had a blast, highly recommend it — it's in Dublin this year and I'm hoping to go again) and one of my talks was about our experiences with dependency management across the FubuMVC projects.  A sampling of the slide titles included "A Comedy of Errors," "Merge Hell," and "Strong Naming Woes."  To put it mildly, we've had some technical difficulties with Nuget, TeamCity, Git, our own Ripple tool, and .Net CLR mechanics in general that have caused me to use language that my Grandmother would not approve of.

 

.Net Codebases Do Not Scale

There's a line of thinking out there that says that you can happily use dynamic languages for smaller applications, but when you get into bigger codebases you have to graduate to a grown-up, statically typed language. While I don't have any first-hand knowledge about how well a Ruby on Rails or a Python Django codebase will scale in size, I can tell you that a .Net codebase can become a severe productivity problem as it grows much larger.  Visual Studio.Net grinds down, ReSharper gets slower, your build script gets slower, compile times take longer, and basically every single feedback mechanism that a good development team wants to use gets slower.

The FubuMVC ecosystem became very large within just a year of constant development.  Take a brief glance at the sheer number of active projects we have going at the moment and the test counts.*  The size of the codebase became a problem for us, and the resulting slower build times dragged down development.

 

Split up Large Codebases into Separate Cohesive Repositories

The main FubuMVC GitHub repository quickly became quite large and sluggish. If you'll take a look at a very old tagged branch from that time, you can probably see why.  The main FubuMVC.Core library was already getting big just by itself, and the repository also included several related libraries and their associated testing libraries — and my consistent experience over the years has been that the number of projects in a solution makes more difference to compile times than the raw number of lines of code.

The very obvious thing to do was to split off the ancillary libraries like FubuCore, FubuLocalization, and FubuValidation into their own git repositories. Great and all, but the next issue was that FubuMVC was dependent upon the build products of the now-upstream FubuCore and FubuLocalization projects.  So what to do?  The old way was to just check the FubuCore and FubuLocalization assemblies into the FubuMVC repository but, as I'll discuss later, we found that to be problematic with git.  More importantly, even though in a perfect world the upstream projects would be stable and never introduce breaking changes, we absolutely needed a quick way to migrate changes from upstream and test them against the downstream FubuMVC codebase.

 

Enter Ripple

As part of the original effort to break up the codebases, Joshua Flanagan and I worked on a set of tooling that we eventually named "Ripple"** to automate the flow of build products from upstream to downstream in cascading automated builds (by "cascading" I mean that a successful build of FubuCore would trigger a new CI build of FubuMVC using the latest FubuCore build products).  Ripple originally worked in two modes.  First, a "local" ripple that acted as a little bit of glue to build locally on your box, copy the build products to the right place in the downstream code, and run the downstream build to check for any breaking changes without having to push any code changes to GitHub.  Second, a Nuget-based workflow that allowed us to consume the very latest Nuget version from the upstream builds in the downstream builds.  More on Ripple below.

Once this infrastructure was in place we were able to break the codebase into smaller, more cohesive repositories and reap the reward of faster build times — or we would have, if that new tooling hadn't been so damn problematic, as I'll describe in the sections below.

 

Thoughts on breaking up a codebase:

The following is a mixed bag of my thoughts on when and whether you should break up a large codebase.  Unfortunately, there is no black and white answer.  I’m still glad that we went through the effort of breaking up the main FubuMVC codebase, but in retrospect I would not have gone as far as we did in splitting up the main FubuMVC.Core library and I’ve actually partially reversed that trend for the 2.0 release.

  • Don't split out what would become the upstream library or package before it has a stable API
  • Things that are tightly coupled and often need to change together should be built and released together
  • You better have a good way to automate cascading builds in continuous integration across related codebases before you even attempt to split up a codebase
  • It was helpful to pull out parts of the codebase that were relatively stable while isolating subsystems that were evolving much more quickly
  • Sometimes breaking up a larger library into smaller, more cohesive libraries makes the functionality more discoverable.  The FubuCore library for instance has support for command line tools, model binding, reflection helpers, and an implementation of a dependency graph.  We theorized over the years that we should have broken up FubuCore to make it more obvious what the various functions were.
  • Many .Net developers seem to be almost allergic to having more than a couple dependencies and we got feedback over the years that some folks really didn’t like how starting a new FubuMVC application required so many different libraries.  The fact that Nuget put it all together for you was irrelevant.  Unfortunately, I think this issue militates against getting too slap happy with dividing up your code repository and assemblies.
  • It was a little challenging to do release management across so many different repositories.  Even though we could conceivably release packages separately for upstream and downstream products, I usually ended up doing the releases together.  I don't have a great answer for this problem, and now I don't need one, since we're shutting things down;)
  • Don’t attempt to build a very large codebase with a very small team, no matter how good or passionate said team is

 

Don’t put binaries in Git

Git does NOT like it when you check your binaries into source control the way that we used to in the pre-Nuget/Subversion days.  We found this out the hard way when I was at Dovetail: we were rev'ing FubuMVC very hard at the same time and committing the FubuMVC binaries into our application's git repository.  The end result was that the Java-based (and yes, I see where the problem might have been) Git client that TeamCity used just absolutely choked and flopped over with out-of-memory exceptions.  It turned out that Jenkins *could* handle our git repository, but there's still a very noticeable performance lag in git repos that have way too many revisions of binary dependencies.

Other git clients can handle the binaries, but there's a very noticeable hit to Git's near-legendary performance when you move from a repository of almost all text files to one that commits its binaries *cough* MassTransit (at the time that I wrote this draft) *cough*.

 

Enter ripple restore for cascading builds

In the end, we built the “ripple restore” feature (Ripple’s analogue to Nuget Package Restore, but for the record, we built our feature before the Nuget team did and Ripple is significantly faster than Nuget’s;) to find and explode out Nuget dependencies declared inside the codebase at build time as a precursor to compiling in our rake scripts on either the CI server or a local developer box.  We no longer have to commit any binaries to the repository that are delivered via Nuget and the impact on Git repository performance, especially for fresh clones, is very noticeable.

Ripple treats Nuget dependencies as either a “Fixed” dependency that is locked to a specific version or a “Float” dependency that is always going to resolve to the very latest published version.  In the case of FubuMVC’s dependencies today, the internal FubuCore dependency is a “Float”, while the external dependencies like NUnit, Katana, and WebDriver are “Fixed.”  When the FubuMVC build on our TeamCity server runs, it always runs against the very latest version of FubuCore.  Moreover, we use the cascading build feature of TeamCity to trigger FubuMVC builds whenever a FubuCore build succeeds.  This way we have very rapid feedback whenever an upstream change in FubuCore breaks something downstream in FubuMVC — and while I wish I could say that I was so good that that never happens, it certainly does.
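
To illustrate the distinction, a dependency declaration along these lines (this is a hypothetical sketch of the idea, not Ripple's literal config format):

    FubuCore            Float     (always resolve to the latest published build)
    NUnit      2.6.2    Fixed     (locked until we deliberately update)
    WebDriver  2.39.0   Fixed     (external dependencies stay pinned)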

Awesome, we had cascading builds and a relatively quick feedback loop between our upstream and downstream builds.  Except that now we ran into some new problems.

 

Nuget and CsProj Merge Hell

Ironically, Phil Haack has a blog post out this week on how bad merge conflicts are in csproj files.  The last time I saw Phil, I was trying to argue with him that the way Nuget embeds the package version number in the exploded file paths was a big mistake, specifically because it caused us no end of horrendous merge conflicts when we were updating Nugets rapidly.  When we were doing development across repositories, it wasn't uncommon for the same Nuget dependency to get updated in different feature branches, causing some of the worst merge conflicts you can possibly imagine in the csproj and Nuget Packages.config files.

Josh Arnold beat this issue permanently in Ripple 2.0 by using a very different workflow than Nuget out of the box.  The first step was to eliminate the !@#$%ing version number in our Nuget /packages folder by moving the version requirement to the level of the codebase instead of project by project (another flaw in OOTB Nuget, in my opinion).  Doing that meant that the csproj files would only change on Nuget package updates if the packages in question changed their own structure.  Bang, a whole bunch of merge issues went away just like that.

The second thing we did was to eliminate the Packages.config Xml file in each project folder and replace it with a simple flat file analogue that lists each project's dependencies in alphabetical order, as sketched below.  That change also helped reduce the number of merge conflicts.
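
The replacement file was nothing more exotic than something like this (a sketch from memory, not the literal format):

    Bottles
    FubuCore
    FubuLocalization
    NUnit
    StructureMap

With one dependency name per line in alphabetical order, two branches adding different dependencies merge trivially, where the equivalent edits to an Xml Packages.config file almost always collided.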

The end result was that we were able to move and rev faster and more effectively across multiple code repositories.  I still think that was a very big win.

Let's just say this bluntly: it's a big anti-pattern to have any kind of central file that has to be changed frequently and simultaneously by multiple people doing what should be parallel work.  Be it ORM mapping, IoC container configuration, the blasted csproj files, or some kind of routing table, it's a hurtful design that causes project friction — and Xml makes it so much worse.  I think Microsoft tools to this day do not take adequate precautions to avoid merge conflict problems (EF configuration, *.csproj files, Web.config).

 

TeamCity as a Nuget Server

It's easy to use and quick to set up, but I'm recommending that you don't use it for much.  I feel like it didn't hold up very well performance-wise as the feed got bigger, and it would often "lose its Nugets," forcing you to rebuild the Nuget index before the feed would work again.  The rough thing was that the feed wouldn't fail outright; it would just return very old results and cause plenty of havoc for our users that depended on the edge feed.  To keep the performance at a decent level, we had to set up archive rules to delete all but, say, 10 versions of each Nuget.  Deleting the old versions caused obvious trouble.

If I had to do it again, I would have opted for many fewer builds and made a point of distinguishing builds triggered by cascading builds from builds triggered by commits to source control.  As it is, we publish new Nuget packages every single time a CI build succeeds.  In a better world we would have published new Nugets only on builds caused by changes to the source code repository.

 

Reproducibility of Builds with Floating Dependencies

The single worst thing we did, and one I've always regretted, was not creating perfectly reproducible builds with our floating ripple dependencies.  To make things fully reproducible, we would have needed to be able to build a specific version of a codebase using the exact same versions of all of its dependencies that were used at the time of the CI build.  I.e., when I try to work with FubuMVC #1200, we need it to be using the exact same version of FubuCore that was used inside the TeamCity CI build.  We got somewhat close.  Ripple would build a history digest of its dependencies and publish it to TeamCity's artifacts — but we were archiving the artifacts to keep the build server running faster.  We also set up tagging to add the build number to the GitHub repositories after successful builds (that's an old part of the Extreme Programming playbook too, but we didn't get that going upfront and I really wish we had).  What we probably needed was some additional functionality in Ripple to take our published dependency history and completely rebuild everything back to the way we needed it.  I'm still not exactly sure what we should have done to alleviate this issue.

This was mostly an issue for teams that wanted to reproduce problems with older versions of FubuMVC that were well behind the current edge version.  One of the things that I think helped sink FubuMVC was that so many of the teams that were very active in our community early on stopped contributing and being involved, and we were stuck trying to support lots of old versions even while we were trying to push to the magic SemVer 1.0 release.

 

Nuget vs. gems vs. git submodules for build time dependencies

In my last post in this series I got plenty of abuse from people who think that having to install Ruby and learn how to type “rake” at the command prompt was too big of a barrier for .Net developers (that was sarcasm by the way).  Some commenters thought that we should have been using absolutely nothing but Nuget to fulfill build time dependencies on tools that we used within the automated builds themselves.  There’s just one little problem with that understandable ideal: Nuget was a terrible fit for command line executables within an automated build.

We use a couple different command line tools from within our Rake scripts:

  • Ripple for our equivalent of Nuget package restore and publishing build products as Nuget packages
  • The Bottles executable for packing web content like views and JavaScript files into assemblies in pre-compile steps (think of an area in Rails or a superset of ASP.Net MVC portable areas)
  • FubuDocs for publishing documentation (and yes, it’s occurred to me many times that I spent much more time creating a new tool for publishing technical documentation than I did writing docs but we all know which activity was much more fun to do)

Yet again, having the package version number as part of the Nuget package folder made using command line tools resolved by Nuget a minor nightmare.  We had an awkward Ruby function that could magically determine the newest version of the Nuget package in order to find the right path to the bottles.exe/ripple.exe/fubudocs.exe tools.  In retrospect, we could have used the Nugets as-is to continue distributing the executables after Ripple 2.0 fixed the predictable Nuget path problem, but we also wanted to be able to use these tools from the command line as well.
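
The gist of that version-sniffing hack, sketched here in C# for illustration (the real thing was a Ruby function in our rake scripts, and this helper is hypothetical):

    using System;
    using System.IO;
    using System.Linq;

    public static class NugetToolFinder
    {
        // OOTB Nuget explodes packages into folders like
        // packages/Bottles.1.0.0.233, so finding bottles.exe means
        // sniffing out the newest versioned folder first
        public static string FindNewestTool(string packagesFolder, string package, string exe)
        {
            var newest = Directory
                .GetDirectories(packagesFolder, package + ".*")
                .OrderByDescending(dir => Version.Parse(
                    Path.GetFileName(dir).Substring(package.Length + 1)))
                .First();

            // (and it falls over on prerelease suffixes, which was part
            // of the awkwardness)
            return Path.Combine(newest, "tools", exe);
        }
    }

With Ripple 2.0's version-free /packages folder, all of that collapses to a constant path, roughly packages/Bottles/tools/bottles.exe.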

As it turned out, using Ruby gems to distribute and install our .Net executables was much more effective than Nuget.  For one, gems integrate well with Rake, which we were already using.  Gems can also place a shim for an executable onto your Windows PATH, making our custom tools easier to use at the command line.

And yes, we could have used Chocolatey to distribute our executables, but at one point we were much more invested in making our ecosystem cross-platform, and Chocolatey is strictly Windows-only where gems are happily cross-platform.  Because Rob Reynolds is just that awesome, you can actually use Chocolatey to install our gems and it'll even install Ruby for you if it's not already there.

And yeah, in the very early days, because FubuMVC actually predates Nuget, we tried to distribute shared build utilities via Git submodules.  The less said about this approach, the better.  I'll never willingly do that again.

Topics for Another Day:

I’m trying to keep my newfound blogging resurgence going, but we’ll see how it goes.  The stuff below got cut from this post for length:

  • Why semantic versioning is so important
  • My recommendations for improving Nuget (the Nuget team is asking the Ripple team for input and I’m trying to oblige)
  • One really long rant about how strong naming is completely broken
  • Why and how we think Ripple improves upon Nuget
  • Branch by feature across multiple repositories with Ripple

 

* For my money, the number of unit tests is my favorite metric for judging how large and complicated a codebase is, but only measured in the large.  Like all other metrics, I'd take this with a grain of salt, and it's probably also useless if the developers know the unit test count is being used as a metric.

** Get it?  Changes "ripple" from one repo to another.  I love the song "Ripple" by the Grateful Dead and it's also pretty likely that I was listening to that song the day we came up with the name.

 

 

OSS Bugs and the Participatory Community

I pushed the official StructureMap 3.0 release a couple weeks ago.  Since this is a full point release and comes somewhat close to being a full rewrite of the internals, there's inevitably been a handful of bugs reported already.  While I'd love to say there were no bugs at all, I'd like to highlight a trend that's quite different from what supporting StructureMap was like just a few years ago: I've been getting failing unit tests on GitHub or messages on the list from users that demonstrate exactly what they're trying to do and how things are failing for them — and I'd love to see this trend continue (as long as there really are bugs).

One of the issues that was reported a couple times early on was an issue with setter injection policies.  After the initial reports, I looked at my unit tests for the functionality and they were all passing, so I was largely shrugging my shoulders — until a user gave me the exact reproduction steps in a failing test on the GitHub issue showing me a combination of inputs and usage steps that brought out the problem.  Once I had that failing test in my grubby little hands, fixing the issue was a simple one-line fix.  I guess my point here is that I'm seeing more and more StructureMap users jumping in and participating in how issues get diagnosed and fixed, and that's making StructureMap work a lot less stressful and more enjoyable for me, and bugs are getting addressed much faster than in the past.

 

Pin down problems with new tests

An old piece of lore from Extreme Programming was to always add a failing test to your testing library before you fix a reported bug, to ensure that the bug fix stays fixed.  Regression bugs are a serious source of overhead and waste in software development, and anything that prevents them from recurring is probably worth doing in my book.  If you look at the StructureMap codebase, you'll see a namespace for bug regression tests that we've collected over the years.
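
For illustration, a pinned-down bug test might look something like this — the scenario and names are hypothetical, not lifted from StructureMap's actual regression namespace:

    using NUnit.Framework;
    using StructureMap;

    [TestFixture]
    public class Bug_pinned_down_regression
    {
        public interface IDevice { }
        public class ADevice : IDevice { }

        [Test]
        public void resolves_the_configured_default()
        {
            // written first as a failing reproduction of the reported bug,
            // then kept in the suite forever so the fix stays fixed
            var container = new Container(x => x.For<IDevice>().Use<ADevice>());

            Assert.IsInstanceOf<ADevice>(container.GetInstance<IDevice>());
        }
    }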

 

The Participatory Community and .Net

In a way, now might be the golden age of open source development.  GitHub in particular supports a much more collaborative workflow than the older hosting options ever did. Nuget, for all the flaws that I complain about, makes it so much easier to release and distribute new releases.

In the same vein, even Microsoft of all people is trying to encourage an OSS workflow with their own tools, allowing .Net community members to jump in and contribute. I think that's great, but it only matters if more folks in the greater .Net community will participate in OSS projects.  Today I think there's a little too much passivity overall in the .Net community.  After all, our tools are largely written by the official .Net teams inside of Redmond, with OSS tools largely consigned to the fringes.  Most of us probably don't feel like we can exert any control over our tooling, but every indication I see is that Microsoft itself actually wants to change that, with more open development processes as they host more of their tools on GitHub or CodePlex and adopt out-of-band release cycles happening more often than once every couple of years.

My case in point is an issue about Nuget usage that came up in a user list a couple weeks back that I think is emblematic of how the .Net community needs to change to make OSS matter.  The author was asking the Nuget team to do something to Nuget’s package restore feature to fix the accumulation of outdated Nuget packages in a codebase.  No specific recommendations, just asking the Nuget team to fix it.  While I think the Nuget team is perfectly receptive to addressing that down the road, the fubu community has already solved that very technical issue with our own open source Ripple tool that we use as a Nuget workflow tool.  Moreover, the author of that user list post could get a solution for his problem a lot faster even if he didn’t want to use our Ripple tool by contributing a pull request to fix Nuget’s package restore himself rather than wait for Microsoft to do it for him when they can get around to it. My point here is that the .Net community isn’t fully using all of its potential because we’re collectively just sitting back and waiting for a finite number of folks in the Gu’s teams to fix too many of our problems instead of just jumping in and collectively doing it ourselves.

Participating doesn't have to mean taking on big features and issuing pull requests of outstanding technical merit; it can also mean commenting on GitHub issues, providing specific feedback to the authors, and doing what I think of as "sandpaper pull requests" — small contributions that clear up little usability issues in a tool.  It's not a huge thing, but I really appreciate how some of my coworkers have been improving exception messages and logging when they find little issues in their own usage of some of the FubuMVC related projects.  That kind of thing helps a great deal because I know that it's almost impossible for me to foresee every potential usage or source of confusion.

We obviously use a lot of the FubuMVC family of tools at my workplace, and something I’ve tried to communicate to my colleagues is that they never have to live with usability problems in any of those frameworks or libraries because we can happily change those tools to improve their development experience (whenever it’s worth the effort to do so of course).  That’s a huge shift in how many developers think about their tools, but given the choice to be empowered and envision a better experience versus just accepting what you get, wouldn’t you like to have that control?

 

I wish MS would do even more in the open

Of course, I also think it would help if Microsoft could do even more of their own development in public. Case in point, I’m actually pretty positive about the technical work the ASP.Net team for one is talking about for forthcoming versions, but only the chosen ASP Insiders and MVP types are seeing any of that work (I’m not going to get myself yelled at for NDA violations, so don’t even ask for specifics).  They might just get a lot more useful involvement from the community if they were able to do that work in the open before the basic approach was completely baked in.

 

 

 

FubuMVC Lessons Learned — “fubu new”, Standardization, and Polyglot Programming

EDIT:  Someone on Twitter got upset about my “or an inferior IoC tool” comment.  Most of you probably know that I’m also the author of StructureMap, and I did mean that as a joke.  In a later post about IoC integration I’ll happily admit that making FubuMVC too StructureMap-centric was a problem with adoption (we did support Autofac and had Windsor support all but done as well).  To be blunt, I think that the IoC tools in .Net are largely an interchangeable commodity item and any of the mainstream ones will get the job done just fine.

I’m still up for doing this series of technical lessons learned about FubuMVC to wring some value out of the whole process, but I feel like I’ve said enough mea culpas about our failure on documentation. From now on, instead of dog piling on me in the comments, could you just say “Docs sucked++” and then get on with whatever else it is you want to say?

Today's blog post is partially about the danger of deviating from .Net orthodoxy, but mostly a lamentation on my part about missed opportunities and unrealized potential.  I'm actually quite proud of and happy with most of what I'm describing in this post, but it was too late to matter much and might not have ever been widely accepted.

Quickstart from the Command Line

I’ve always admired elements of Ruby on Rails, especially the almost magical “rails new” project skeleton creation that sets you up with an entire code tree with a standard build script that exposes common build tasks for testing, database migrations, and even deployment recipes — but the word “standard” in that last sentence is a very key concept.  Much of the value of “rails new” is enabled by standardizing the layout and naming conventions of a Rails codebase to make it cheaper to write reusable command line tooling.

We knew from the very beginning that we’d eventually want our very own “fubu new” analogue.  Our community built a simple one initially that would just clone a pre-canned Git repository and do some simple string replacement for your project name.  Great, and it added value immediately.  However, the FubuMVC ecosystem got bigger as we built:

  • Ripple as a better command line manager for Nuget packages in continuous integration
  • Bottles as our modularization strategy, including a command line bottles tool to package up web content files as a pre-compile step in your build script
  • FubuDocs as a command line tool to author and publish technical documentation
  • fubu run as a Katana based development server that’s more efficient in development than anything IIS or VS based
  • FubuTransportation as our FubuMVC based message bus
  • A slew of command line testing tools

To unlock the usefulness of all those tools above to new users, not to mention just getting a simple application codebase up and running fast, we embarked on a new effort to create a vastly improved “fubu new” story that would allow you to mix and match options, different project types, and integrate many of those tools I listed above.

At the time of this post, you can stand up a brand new FubuMVC application using the Spark view engine from scratch to do grown-up development by following these steps at the command line, assuming that you already have Ruby 1.9.3+ installed:

  1. gem install fubu
  2. fubu new MyApp --options spark

If you do this, you’ll see a flurry of activity as it:

  1. Builds a new VS.Net solution file
  2. Creates csproj files for the main application and a matching test library
  3. Creates the necessary classes to bootstrap and run a minimal FubuMVC application
  4. Invokes gems to install Rake, FubuRake, Ripple, FubuDocs, and the bottles command line tool
  5. Invokes Ripple’s equivalent to Nuget Package Restore to bring down all the necessary Nugets
  6. Opens the new Visual Studio solution

The result is a code tree that's completely ready for grown-up development with a build script containing these tasks:

rake ci                # Target used for the CI server
rake clean             # Prepares the working directory for a new build
rake compile           # Compiles the solution src/MyApp.sln
rake compile:debug     # Compiles the solution in Debug mode
rake compile:release   # Compiles the solution in Release mode
rake default           # **Default**, compiles and runs tests
rake docs:bottle       # 'Bottles' up a single project in the solution with...
rake docs:run          # Runs a documentation project hosted in FubuWorld
rake docs:run_chrome   # Runs the documentation projects in this solution i...
rake docs:run_firefox  # Runs the documentation projects in this solution i...
rake docs:snippets     # Gathers up code snippets from the solution into th...
rake myapp:alias       # Add the alias for src/MyApp
rake myapp:chrome      # run the application with Katana hosting and 'watch...
rake myapp:firefox     # run the application with Katana hosting and 'watch...
rake myapp:restart     # touch the web.config file to force ASP.Net hosting...
rake myapp:run         # run the application with Katana hosting
rake ripple:history    # creates a history file for nuget dependencies
rake ripple:package    # packages the nuget files from the nuspec files in ...
rake ripple:restore    # Restores nuget package files and updates all float...
rake ripple:update     # Updates nuget package files to the latest
rake sln               # Open solution src/MyApp.sln
rake unit_test         # Runs unit tests for MyApp.Testing
rake version           # Update the version information for the build

This entire rake script is (I’ve added some explanatory comments for this blog post):

require 'fuburake'

@solution = FubuRake::Solution.new do |sln|
        # This is unnecessary if there's only one sln file in the code
	sln.compile = {
		:solutionfile => 'src/MyApp.sln'
	}
				 
        # This feeds the CommonAssemblyInfo.cs file we use
        # to embed version information into compiled assemblies
        # on the build server
	sln.assembly_info = {
		:product_name => "MyApp",
		:copyright => 'Copyright 2013. All rights reserved.'
	}
	
        # These are defaults now, but I still left it in the
        # template
	sln.ripple_enabled = true
	sln.fubudocs_enabled = true
end

# This one line of code below creates rake tasks as a convenience
# to run our development server in various modes
FubuRake::MvcApp.new({:directory => 'src/MyApp', :name => 'myapp'})

 

The terseness of the rake script above relies very heavily on a standardized code tree layout and naming conventions.  As long as you could accept the FubuMVC idiomatic code tree layout, I feel like we succeeded in making our ragbag collection of command line tools easy to setup and use.  Great, awesome, it’s done wonders for the Rails community and I’ve now seen and used working analogues with Scala’s Play framework and Mimosa.js’s “mimosa new” command.  A couple problems though:

  • There's never been a successful effort in .Net to identify and codify idioms for laying out a code repository, and we've found that many folks are loath to change their own preferences ("src" versus "source", "lib" vs "bin", and so on).  It might have been easier if TreeSurgeon had succeeded way back when.
  • I think it was just too little, too late.  I think that OSS projects, once announced, have a short window of opportunity to be awesome or else.  I felt good about the improved “fubu new” experience just in time for NDC London this past December — but that was almost 4 years after FubuMVC started.
  • VS.Net templating was a lot of work and dealing with different project types, VS versions, and versions of .Net added overhead
  • Nuget and our own Ripple tool are still problematic
  • While common in other development communities, this is not a common approach for .Net developers

 

Why not Visual Studio Templates or Project Template Nugets?

We did support Visual Studio project templates, but I always felt those would only be for folks who just want to play with the framework a little bit.  The feedback we got from other OSS projects that did invest heavily in VS templates was uniformly negative, and I wasn't enthusiastic about doing much with them.  It was a tremendous amount of work to build our templating engine (FubuCsProjFile), but at least what we got was something easy to author and possible to cover with automated testing (a never-ending shortcoming of so many tools originating from Redmond).

We supported a project template with Nuget very early on and even finagled out an "install nuget, press F5" experience, but I always found it to be very limited and problematic.  Again, testability was an issue.  If I had it all to do over again, I'd still choose our command line approach with the mix and match selection of options (Spark or Razor?  StructureMap or an inferior IoC tool?  Want RavenDb support?  Bootstrap?  Html Conventions?).

 

The Polyglot Thing is a Double-Edged Sword

There’s always some sort of meme running around in developer circles that’s meant to make the cool edgy kids feel superior to the unwashed masses.  A couple years ago there was a lot of hype about polyglot programming where you would happily mix and match different programming languages and paradigms in a single solution based on their relative strengths.

For better or worse, the FubuMVC projects were a mild example of polyglot programming and that repeatedly scared people off.  We used Ruby’s Rake tool for our project automation, we partially replaced Nuget with gems for distributing binaries that we used in the build scripts.  For 2.0, we’re happily ditching our original asset pipeline in favor of the Node.js based Mimosa.js tool.  At other times we also used the Python-based Sphinx for some early documentation efforts.

While I think that Rake is an outstanding tool and very easy to use, the simple need for a Ruby tool drove many people away — which is a shame because many other .Net OSS tools besides FubuMVC prefer Rake-based build scripts over the other alternatives.

It's not hard to understand people being hesitant about having to learn non-mainstream .Net tools, but it's unfortunate.  One of the big problems with trying to do ambitious OSS work on .Net is that .Net simply isn't the home of the best tools.  We repeatedly found Nuget to be lacking (a blog post for a later day) and MSBuild is, in my opinion, a complete non-starter for build automation.

As an aside, I've frequently seen members of the ASP.Net team complain on Twitter about having to install Ruby just to build some OSS project, when they turn right around and require you to install all kinds of Visual Studio.Net add-ons and templates just to build their code.

 

Aborted Plans for FubuMVC 2.0:

For 2.0 I wanted us to push forward with more templating options and introduce a new standalone web application that you could run from some sort of "fubu new --interactive" mode to visually discover and select the options you wanted to use in your own solution.  I also wanted to stop bundling the template library into the fubu.exe tool itself in favor of using a git repository that could be easily auto-updated by fubu.exe as we extended or improved our templates.

I thought we had a strong concept for the bootstrapping, but after getting one hard look at Typesafe Activator in Scala, it’s pretty obvious that we would never have been able to match that level of polish.

I also wanted to upgrade our standard testing tools from whatever old version of NUnit we’ve been using for years to something based on the excellent Fixie tool.

 

Conclusion

In a lot of ways, I think the .Net ecosystem — even though it’s over a decade old — is immature compared to other development platforms.  I feel like we had to do way too much bespoke infrastructure (Bottles for packaging web content, Ripple for a more usable Nuget story, FubuCsProjFile for csproj file manipulation) to pull off the “fubu new” story.  I wonder a bit if what we did might be easier in a couple years when Nuget matures and the new OneGet package manager gains traction.

I feel like Visual Studio.Net was a significant hurdle in everything we tried to do with our bootstrapping story.  I think .Net would be able to innovate much faster if our community would be much more accepting of lighter weight command line tools instead of demanding much more time intensive Visual Studio integration.

My colleagues at work and I are likely moving to the Play/Akka stack on Scala and a very common refrain around the office this week is excitement over being able to use lighter weight tools like Sublime and SBT instead of being forced to work with a heavyweight IDE like VS.

 

 

FubuMVC Lessons Learned — Magic Conventions, Russian Dolls, and Ceremonial Code

tl;dr: FubuMVC stressed concise code and conventions over writing explicit code, and that turns out to be polarizing

The typical way to be successful in OSS is to promote the hell out of your work before you give up on it, but I’m under a lot less pressure after giving up on FubuMVC and I feel like blogging again.  Over the next couple months I’m going to write about the technical approach we took on FubuMVC to share the things I think went well, the stuff I regret, how I wish we’d done it instead, discarded plans for 2.0, and how I’d do things differently if I’m ever stupid enough to try this again on a different platform.

 

Some Sample Code

So let’s say that you start a new FubuMVC project and solution from scratch (a topic for another blog post) by running:

fubu new Demo --options spark

You’ll get this (largely placeholder) code for the MVC controller part of the main home page of your new application:

namespace Demo
{
        // You'd generally do *something* in this method, otherwise
        // it's just some junk code to make it easier for FubuMVC
        // to hang a Spark or Razor view off the "/" route
        // For 2.0, we wanted to introduce an optional convention
        // to use an "action less view" for the home page
	public class HomeEndpoint
	{
		public HomeModel Index(HomeModel model)
		{
			return model;
		}
	}
}

To make things a little more clear, fubu new also generates a matching Spark view called Home.spark to render a HomeModel resource:

<viewdata model="Demo.HomeModel" />

<content:header></content:header>

<content:main>
Your content would go here
</content:main>

<content:footer></content:footer>

The code above demonstrates a built in naming convention in FubuMVC 1.0+ such that the home “/” route will point to the action “HomeEndpoint.Index()” if that class and method exists in the main application assembly.

Some additional endpoints (FubuMVC's analogue to Controllers in MVC frameworks) might look like the following:

    public class NameInput
    {
        public string Name { get; set; }
    }

    public class Query
    {
        public int From { get; set; }
        public int To { get; set; }
    }

    public class Results { }

    public class MoreEndpoints
    {
        // GET: name/{Name}
        public string get_name_Name(NameInput input)
        {
            return "My name is " + input.Name;
        }

        // POST: query/{From}/to/{To}
        public Results post_query_From_to_To(Query query)
        {
            return new Results();
        }
    }

 

What you’re not seeing in the code above:

  • No reference whatsoever to the FubuMVC.Core namespace.
  • No model binding invocation
  • No code to render views, write output, set HTTP headers
  • No code for authentication, authorization, or validation
  • No “BaseController” or “ApiController” or “CoupleMyCodeVeryTightlyToTheFrameworkGuts” base class
  • No attributes
  • No marker interfaces
  • No custom fluent interfaces that render your application code almost completely useless outside of the context of the web framework you’re using

What you are seeing in the code above:

  • Concrete classes that are suffixed with "Endpoint" or "Endpoints."  This is an out of the box naming convention in FubuMVC that marks these classes as Actions.
  • Public methods that take in 0 or 1 inputs and return a single “resource” model (they can be void methods too).
  • Route patterns are derived from the method names and properties of the input model — more on this in a later post because this one’s already too long.

 

One Model In, One Model Out and Automatic Content Negotiation

As a direct reaction to ASP.Net MVC, the overriding philosophy from the very beginning was to make the code we wrote as clean and terse as possible, with as little coupling from the application to the framework code as possible.  We also believed very strongly in object composition, in contrast to most of the frameworks of the time that required inheritance models.

To meet this goal, our core design idea from the beginning was the one model in, one model out principle.  By and large, most endpoints should be built by declaring an input model of everything the action needs to perform its work and returning the resource or response model object.  The framework itself would do most of the repetitive work of reading the HTTP request and writing things out to the HTTP response for you so that you can concentrate on only the responsibilities that are really different between actions.

At runtime, FubuMVC executes content negotiation (conneg) to read the declared inputs (see the NameInput class above and how it's used) from the HTTP request with a typical combination of model binding or deserialization, calls the action method with the right input, and then renders the resource (like HomeModel in the HomeEndpoint.Index() method), again with content negotiation.  As of FubuMVC 1.0, view rendering is integrated into the normal content negotiation infrastructure (and that, ladies and gentlemen, was a huge win for our internals).  Exactly what content negotiation can read and write is largely determined by OOTB conventions.  For example:

  • If a method returns a string, then we write that string with the content-type of “text/plain”
  • If an action method returns a resource model, we try to “attach” a view that renders that very resource model type
  • In the absence of any other reader/writer policies, FubuMVC attaches Json and Xml support automatically, with model binding for content-type="application/x-www-form-urlencoded" requests

The automatic content negotiation conventions largely mean that FubuMVC action methods just don't have to be concerned about the details of how the response is written out.
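
For example, here's a hypothetical endpoint class exercising those defaults (the names are mine, not from a real FubuMVC sample):

    public class StatusModel
    {
        public string Status { get; set; }
    }

    public class StatusEndpoints
    {
        // returns a string, so the OOTB convention writes the response
        // out with a content-type of "text/plain"
        public string get_status_text()
        {
            return "All good";
        }

        // returns a resource model; if a view renders StatusModel it is
        // attached automatically, otherwise conneg falls back to the
        // built in Json/Xml support
        public StatusModel get_status()
        {
            return new StatusModel { Status = "All good" };
        }
    }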

View resolution is done conventionally as well.  The easiest, simplest thing to do is to make your strongly typed Spark or Razor view render the resource model type (the return type) of an action method, and FubuMVC will automatically apply that view to the matching action.  I definitely believe that this was an improvement over the ASP.Net MVC ViewResult mechanism, and some other frameworks *cough* NancyFx *cough* adopted this idea after us.

The huge advantage of the one model in, one model out approach was that your action methods became very clean and completely decoupled from the framework.  The pattern was specifically designed to make unit testing action methods easy, and by and large I feel like we met that goal.  It's also been possible to reuse FubuMVC endpoint code in contexts outside of a web request because there is no coupling to FubuMVC itself in most of the action methods, and I think that's been a big win from time to time.  Try to do that with Web API, ASP.Net MVC, or a Sinatra-flavored framework like NancyFx!

The downside was the times when you really did need to exert more fine-grained control over HTTP requests and responses.  While you could always happily take in constructor dependencies to read and write to the raw HTTP request/response, this wasn't all that obvious.

 

The Russian Doll Behavioral Model

I've regretted the name "FubuMVC" almost from the beginning because we weren't really a Model 2 MVC framework.  Our "Action" methods just perform some work within an HTTP request, but don't really act as logical "Controllers."  It was also perfectly possible to build endpoints without Action methods, and other endpoints that used multiple Action methods.

The core of FubuMVC's runtime was the Russian Doll "Behavior" model I described way back in 2011 — in which action methods are called inside a pre-built Behavior in the middle of a chain.  For example, our HomeEndpoint.Index() action above in real life has a chain of nested behaviors something like:

  1. AuthenticationBehavior
  2. InputBehavior — does content negotiation on the request to build the HomeModel input
  3. ActionCall -> HomeEndpoint.Index() — executes the action method with the input read by conneg and stores the output resource for later
  4. OutputBehavior — does conneg on the resource and “accepts” header to write the HTTP response accordingly

FubuMVC heavily uses additional Behaviors for cross cutting concerns like authorization, validation, caching, and instrumentation that can be added into a chain of behaviors to compose an HTTP pipeline.  Every web framework worth its salt has some kind of model like this, but FubuMVC took it farther by standardizing on a single abstraction for behaviors (everything is just a behavior) and exposing a model that allowed you to customize the chain of behaviors, either by convention or explicitly on a single endpoint/route.
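
A minimal sketch of the idea — this simplified behavior contract is illustrative, not necessarily FubuMVC's actual interface:

    // each behavior wraps the next one in the chain, so it can work on
    // the request before and/or after its inner behavior runs, or not
    // call it at all to short-circuit the request
    public interface IBehavior
    {
        void Invoke();
    }

    public class AuthenticationBehavior : IBehavior
    {
        private readonly IBehavior _inner;

        public AuthenticationBehavior(IBehavior inner)
        {
            _inner = inner;
        }

        public void Invoke()
        {
            // check the credentials here; skipping _inner.Invoke()
            // would short-circuit the rest of the chain with a 401
            _inner.Invoke();
        }
    }

Conceptually, the chain for HomeEndpoint.Index() above is just new AuthenticationBehavior(new InputBehavior(new ActionCall(...))), assembled for you by the framework.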

In my opinion, the Behavior model gave FubuMVC far more modularity, extensibility, and composability than our peers.  I would go so far as to say that this concept has been validated by the sheer number of other frameworks like Microsoft’s WebAPI that have adopted some form of this pattern.

 

Clean, terse “magical” code versus explicit code

The downside to the behavior model, and especially FubuMVC's conventional construction of the nested Behaviors, is the "magical" aspect.  Because the framework itself is doing so much more work for you, there isn't a blob of explicit code in one place that tells a developer everything that's happening in an HTTP request.  In retrospect, even though I personally wanna write the tightest, most concise code possible and avoid repetition, other developers are much happier writing and reading code that's more explicit — even when that requires them to write much more repetitive code.  It turns out that code ceremony is not a bad thing to a large number of developers.

Other developers hated the way that FubuMVC doesn't really do much to lead you to what to do next or make the framework capabilities discoverable, because so much of the code was meant to be driven by FubuMVC conventions based on what your code looked like, rather than you writing explicit code against FubuMVC APIs with Intellisense there to guide you along the way.  And yes, I'm fully cognizant that I just made an argument in favor of a Sinatra-style fluent interface like NancyFx's.  I know full well that many developers considered NancyFx much easier to learn than FubuMVC because of Nancy's better discoverability.

We did offset the “magical” problem with diagnostics that I’ll discuss at a later time, but I think that the “magical” aspect of FubuMVC scared a lot of potential users away in retrospect.  If I had it to do over again, I think I would have pushed to standardize and describe our built in conventions much earlier than we did — but that’s another blog post altogether.

 

What I wanted to do in FubuMVC 2.0 

I had no intention of adopting a programming model more like Sinatra or Web API where you write more explicit code.  My feeling is that there is room in the world for more than one basic approach, so for 2.0, I wanted to double down on the “one model in, one model out” approach by extending more conventions for finer grained control over the HTTP request & response without losing the benefits.  Things like:

  • More built-in model binding conventions to attach cookie values, system clock values, IPrincipals, and whatever else I could think of into the OOTB model binding.
  • Built-in conventions for writing headers, response codes, and cookie values from resource model values to maintain the one model in, one model out motif while still allowing for more powerful HTTP API capabilities
  • We did change the content negotiation defaults to make views "additive" to Json/Xml endpoints, so that any endpoint that renders a view for accept="text/html" can also return Json or Xml by default
  • We made it a little easier to replace the built in Json/Xml serialization
  • We did streamline the content negotiation internals to make customization smoother
  • Adding new built-in conventions to attach custom content negotiation readers and writers to the appropriate input and resource types

 

I'm throwing in the towel on FubuMVC

tl;dr:  I’m giving up on fubu and effectively retiring from OSS on .Net

 

Some things came to a head last week and I announced that I'm planning to cease work on FubuMVC after finishing some performance and startup time optimization work as a 2.0 release — which right now means that FubuMVC and its very large ecosystem of surrounding projects are effectively kaput.  Even just a couple weeks ago I was still excited for our 2.0 release and making big plans, but the writing has been on the wall for quite some time that FubuMVC no longer has enough community support to be viable, and the effort I think it would take to possibly change that situation probably isn't worth it.

For me personally, FubuMVC has turned out to be a fantastic experience in my own learning and problem solving growth.  My current position is directly attributable to FubuMVC and I’ve generally enjoyed much of the technical work I’ve gotten to do over the past 2-3 years.  I’ve forged new relationships with the folks who I met through my work on FubuMVC.

On the downside, it's also been a massive opportunity cost because of all the things I haven't learned or done in the meantime while FubuMVC took up so much of my time, and that's the main reason it has to stop now.

Some History

 

Rewind to the summer of 2008.  I had just started a new job where we were going to do a big rewrite of my new company's website application.  My little team had a choice in front of us: we could either choose Ruby on Rails (the hot thing of the day) or continue with .Net and use the brand new ASP.Net MVC framework coming down the pike.  If I had it all to do again, I would choose RoR in a heartbeat and just have gotten on with getting the project done.  At the time though, the team previous to us had completely flopped with RoR (not for technical reasons) and the company was understandably leery of using Rails again.  This was also the tail end of the ALT.Net movement, and I felt very optimistic about .Net at the time.  Plus, I had a large personal investment in StructureMap and Fluent NHibernate (yes Virginia, there was a time when we thought ORMs were a good idea).

We opted for .Net and started working on early versions of ASP.Net MVC.  For various reasons, I disliked MVC almost immediately and started to envision a different way of expressing HTTP endpoints that wouldn’t require so much coupling to the framework, less cruft code, and better testability at the unit level.  For a little while we tried to work within ASP.Net MVC by customizing it very heavily to make it work the way we wanted, but MVC then and now isn’t a very modular codebase and we weren’t completely happy with the results.

From there, we got cocky, and in December 2009 we embarked on our own framework we called FubuMVC, based on the "for us, by us" attitude.  After all the bellyaching we had done for years about how bad WebForms was (and it was), we believed Microsoft had given us yet another heavily flawed framework, built with little input from the community, that was inevitably going to be the standard in .Net.

Fast forward to today: FubuMVC is up to v1.3, arguably spawned a healthy ecosystem of extensions, and is used in several very large applications (fubu's sweet spot was always larger applications).  It has also failed miserably to attract or generate much usage or awareness in the greater .Net community — and after this long it's time for me to admit that the jig is up.

 

Why I think it failed

Setting aside the very real question of whether or not OSS in .Net is a viable proposition (it's largely not, no matter how hoarse Scott Hanselman makes himself trying to say otherwise), FubuMVC failed because we — and probably mostly me, because I had the most visibility by far — did not do enough to market ourselves and build community through blog posts, documentation, and conference speaking.  At one time I think I went almost 2 years without writing any blog posts about fubu, and I only gave 3-4 conference talks on FubuMVC total over the past 5 years.  I believe that if we'd just tried to get FubuMVC in front of many more people earlier and generated more interest, we might have had enough community to do more, document more, and grind away the friction in FubuMVC faster through increased feedback.

We also didn’t focus hard enough on creating a good, frictionless getting started story to make FubuMVC approachable for newbies.  FubuMVC was largely written for and used on very large, multi-year projects, so it’s somewhat understandable that we didn’t focus a lot on a task that we ourselves only did once or twice a year, but that still killed off our growth as a community.  At this point, I feel good about our Rails-esque “fubu new” story now, but we didn’t have that in the early days and even now that freaks out most .Net developers who don’t believe anything is real until there’s a Visual Studio plugin.

I’ll leave a technical retrospective of what did and did not work well for a later time.

What I’m doing next

I turned 40 this January, but I feel like I’m a better developer than ever and I’m not really burnt out.  I tease my wife that she’s only happy when she’s planning something new for us, but I know that I’m happiest when I’ve got some kind of technical project going on the side that lets me scratch the creative itch.

I'd like to start blogging again because I used to enjoy it way back when, but I wouldn't hold your breath on that one.

We're kicking the tires on Golang at work for server side development (I'm dubious about the productivity of the language, but the feedback cycle and performance are eye-popping) and the entire Scala/Typesafe stack.  I'm a little tempted to rewrite StructureMap in Scala as a learning experience (and because I think the existing Scala/Java IoC containers blow chunks).  Mostly though, Josh Arnold and I are talking about trying to rebuild Storyteller on a Node.js/Angular.js/Require.js/Mimosa.js/Bower stack so I can level up on my JavaScript skills because, you know, I've got a family to support and JavaScript quietly became the most important programming language on the planet all of a sudden.

I’ll get around to blogging a strictly technical retrospective later.  Now that I’m not under any real pressure to deliver new code with fubu, I might just manage to blog about some of the technical highlights.  And if I can do it without coming off as pissed and bitter, I’ve got a draft on my thoughts about .Net and the .Net community in general that I’ll publish.

StructureMap 3.0 is Live

At long last, I pushed a StructureMap 3.0 nuget to the public feed today, nearly a full decade after its first release on SourceForge as the very first production-ready IoC container in .Net.   I'd like to personally thank Frank Quednau for all his work on finally making StructureMap PCL compliant.

While this release doesn't add a lot of new features, it's a big step forward for usability and performance, and I believe that StructureMap 3.0 will greatly improve the developer experience.  Most importantly, I think the work we've done on the 3.0 release has fixed all of the egregious internal flaws that have bothered me for years, and *I* feel very good about the shape of the code.

 

What’s Different and/or Improved?

  • The diagnostics and exception messages are much more useful
  • The registration DSL has been greatly streamlined with a hard focus on consistency throughout the API
  • The core library is now PCL-compliant and targets .Net 4.0.  So far SM3 has been successfully tested on WP8
  • I have removed strong naming from the public packages to make the world a better place.  I'm perfectly open to creating a parallel signed version of the Nuget, but I'm holding out for a pull request on that one :)
  • Nested container performance and functionality are vastly improved (100X performance in bigger applications!)
  • Xml configuration and the ancient attribute-based configuration have been removed
  • Interception has been completely rewritten with a much better mechanism for applying decorators (a big gripe of mine from 2.5+)
  • Resolving large object graphs is faster
  • The Profile support was completely rewritten and is more effective now
  • Child containers (think client-specific or feature-specific containers)
  • Improvements in the usage of open generics registration for Jimmy Bogard
  • Constructor function selection and lifecycle configuration can be done per Instance, like every other IoC container in the world except for SM <3.0 :( (see the sketch after this list)
  • Anything that touches ASP.Net HttpContext has been split out into a separate StructureMap.Web nuget
  • Conventional registration is more powerful now that the configuration model is streamlined and actually useful as a semantic model
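
To make the per-Instance lifecycle and child container bullets a bit more concrete, here's a minimal sketch (the device types are just stand-ins for the example, and the exact DSL method names are from memory, so check the registration DSL for the precise signatures):

    public interface IDevice { }
    public class ADevice : IDevice { }
    public class BDevice : IDevice { }

    public class LifecycleSamples
    {
        public void configure_lifecycles()
        {
            var container = new Container(x =>
            {
                // The default IDevice is scoped as a singleton...
                x.For<IDevice>().Use<ADevice>().Singleton();

                // ...while this additional named Instance stays transient
                x.For<IDevice>().Add<BDevice>().Named("B").Transient();
            });

            // A child container inherits the parent's configuration but can
            // override registrations for a specific client or feature
            var child = container.CreateChildContainer();
            child.Configure(x => x.For<IDevice>().Use<BDevice>());
        }
    }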


Still to Come:

I'm still working on a new version of the StructureMap website (published with FubuDocs!).  Since I desperately want to reduce the time and effort I spend on supporting StructureMap, look for it soon-ish (for real this time).

Someday I’d like to get around to updating my old QCon ’08 talk about lessons learned from a long-lived codebase with all the additional lessons from the past 6 years.

StructureMap 3 is gonna tell you what’s wrong and where it hurts

tl;dr: StructureMap 3 introduces some cool new diagnostics, improves the old diagnostics, and makes the exception messages a lot better.  If nothing else, scroll to the very bottom to see the new “Build Plan” visualization that I'm going to claim is unmatched in any other IoC container.

I've had several goals in mind with the work on the shortly forthcoming StructureMap 3.0 release: make it run FubuMVC/FubuTransportation applications faster, remove some clumsy limitations, make the registration DSL consistent, and make the StructureMap exception messages and diagnostic tools completely awesome so I don't have to do so much work answering questions in the user list (read: so that users will have a much better experience).  To that last end, I've invested a lot of energy into improving the diagnostic abilities that StructureMap exposes and adding a lot more explanatory information to the exceptions when they do happen.

First, let’s say that we have these simple classes and interfaces that we want to configure in a StructureMap Container:

    public interface IDevice{}
    public class ADevice : IDevice{}
    public class BDevice : IDevice{}
    public class CDevice : IDevice{}
    public class DefaultDevice : IDevice{}

    public class DeviceUser
    {
        // Depends on IDevice
        public DeviceUser(IDevice device)
        {
        }
    }

    public class DeviceUserUser
    {
        // Depends on DeviceUser, which depends
        // on IDevice
        public DeviceUserUser(DeviceUser user)
        {
        }
    }

    public class BadDecorator : IDevice
    {
        public BadDecorator(IDevice inner)
        {
            throw new DivideByZeroException("No can do!");
        }
    }

Contextual Exceptions

Originally, StructureMap used the System.Reflection.Emit classes to create dynamic assemblies on the fly to call constructor functions and setter properties for better performance than reflection alone.  Almost by accident, those generated classes made for a decently revealing stack trace when things went wrong.  When I switched StructureMap over to dynamically generated Expression trees, I got a much easier model to work with inside the StructureMap code, but the stack trace on runtime exceptions became effectively worthless because it was nothing but a huge series of nonsensical lambda classes.
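
If you haven't worked with them, here's a minimal, illustrative sketch of what I mean by building object construction out of Expression trees, using the sample types above (this is not StructureMap's actual internals):

    using System;
    using System.Linq.Expressions;

    public static class ExpressionBuilding
    {
        // Compile a Func that effectively does new BadDecorator(new ADevice()),
        // roughly the kind of delegate an IoC container generates internally
        public static Func<IDevice> CompileBuildPlan()
        {
            Expression inner = Expression.New(typeof(ADevice));
            Expression decorated = Expression.New(
                typeof(BadDecorator).GetConstructor(new[] { typeof(IDevice) }),
                inner);

            // The compiled delegate is fast, but when BadDecorator's constructor
            // throws, the stack trace is nothing but opaque lambda frames
            return Expression.Lambda<Func<IDevice>>(decorated).Compile();
        }
    }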

As part of the effort for StructureMap 3, we've made the Expression building much, much more sophisticated so that it creates a contextual stack trace as part of the StructureMapException message, explaining what the container was trying to do when it blew up and how it got there.  From inner to outer, the contextual stack can tell you about steps like:

  1. The signature of any constructor function running
  2. Setter properties being called
  3. Lambda expressions or Funcs getting called (you have to supply the description yourself for a Func, but SM can use an Expression to generate a description); see the sketch after this list
  4. Decorators
  5. Activation interceptors
  6. Which Instance is being constructed including the description and any explicit name
  7. The lifecycle (scoping like Singleton/Transient/etc.) being used to retrieve a dependency or the root Instance
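
As a quick sketch of item 3 (I'm writing these registration overloads from memory, so treat the exact signatures as approximate):

        [Test]
        public void lambdas_and_descriptions()
        {
            var container = new Container(x => {
                // An opaque Func needs a user-supplied description to show up
                // legibly in the contextual stack trace; an Expression-based
                // registration can have its description generated by SM itself
                x.For<IDevice>().Use("builds an ADevice by hand", c => new ADevice());
            });

            container.GetInstance<IDevice>();
        }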

So now, let’s say that we have this container configuration that experienced StructureMap users know is going to fail when we try to fetch the DeviceUserUser object:

        [Test]
        public void no_configuration_at_all()
        {
            var container = new Container(x => {
                // Notice that there's no default
                // for IDevice
            });

            // Gonna blow up
            container.GetInstance<DeviceUserUser>();
        }

will give you this exception message telling you that there is no configuration at all for IDevice.

One of the things that trips up StructureMap users is that when there are multiple registrations for the same plugin type (the type you're asking the container for), StructureMap has to be explicitly told which one is the default, whereas some other containers will silently give you the first registration and others the last one in.  In this case:

        [Test]
        public void multiple_registrations_but_no_default()
        {
            var container = new Container(x => {
                // Two candidate registrations for IDevice,
                // but neither one is explicitly the default
                x.For<IDevice>().Add<ADevice>();
                x.For<IDevice>().Add<BDevice>();
            });

            // Gonna blow up
            container.GetInstance<DeviceUserUser>();
        }

Running the NUnit test will give you an exception with this exception message (in Gist).
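
For what it's worth, the fix is just to designate a default.  In the registration DSL, Use() marks the default Instance while Add() contributes additional named candidates, so something like this resolves cleanly:

        [Test]
        public void multiple_registrations_with_an_explicit_default()
        {
            var container = new Container(x => {
                // Use() designates the default Instance for IDevice...
                x.For<IDevice>().Use<ADevice>();

                // ...while Add() just registers another named candidate
                x.For<IDevice>().Add<BDevice>().Named("B");
            });

            // Resolves happily with ADevice filling the IDevice dependency
            container.GetInstance<DeviceUserUser>();
        }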

One last example: say you get a runtime exception in the constructor function of a decorating type.  That failure is well off the obvious path, so let's see what StructureMap tells us now.  Running this test:

        [Test]
        public void decorator_blows_up()
        {
            var container = new Container(x => {
                x.For<IDevice>().DecorateAllWith<BadDecorator>();
                x.For<IDevice>().Use<ADevice>();
            });

            // Gonna blow up because the decorator
            // on IDevice blows up
            container.GetInstance<DeviceUserUser>();
        }

generates this exception.

Container.WhatDoIHave()

StructureMap has had a textual report of its configuration for quite a while, but the WhatDoIHave() feature now gets better formatting and the ability to filter the results by plugin type, assembly, or namespace to get you to just the configuration you want, when you want it.

This unit test:

        [Test]
        public void what_do_I_have()
        {
            var container = new Container(x => {
                x.For<IDevice>().AddInstances(o => {
                    o.Type<ADevice>().Named("A");
                    o.Type<BDevice>().Named("B").LifecycleIs<ThreadLocalStorageLifecycle>();
                    o.Type<CDevice>().Named("C").Singleton();
                });

                x.For<IDevice>().UseIfNone<DefaultDevice>();
            });

            Debug.WriteLine(container.WhatDoIHave());

            Debug.WriteLine(container.WhatDoIHave(pluginType:typeof(IDevice)));
        }

will generate this output.

The WhatDoIHave() report will list each PluginType matching the filter and all of the Instances for that PluginType, including a description, the lifecycle, and any explicit name.  The report will also tell you about the “on missing named Instance” and the new “fallback” Instance for a PluginType if one exists.

It's not shown in this blog post, but all of the information that feeds the WhatDoIHave() report is queryable from the Container.Model property.
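
Just as a rough sketch of what that querying can look like (the member names here are approximate, from memory):

        [Test]
        public void query_the_container_model()
        {
            var container = new Container(x => {
                x.For<IDevice>().Use<ADevice>();
                x.For<IDevice>().Add<BDevice>().Named("B");
            });

            // Container.Model exposes the same data that feeds WhatDoIHave()
            foreach (var instance in container.Model.For<IDevice>().Instances)
            {
                Debug.WriteLine(instance.Description);
            }
        }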

Container.AssertConfigurationIsValid()

At application startup time, you can verify that your StructureMap container is not missing any required configuration, and generally run environment tests, with the Container.AssertConfigurationIsValid() method.  If anything is wrong, this method throws an exception with a report of all the problems it found (build exceptions, missing primitive arguments, service dependencies that cannot be resolved, etc.).
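
By “environment tests” I mean methods marked with StructureMap's [ValidationMethod] attribute.  AssertConfigurationIsValid() builds out each configured Instance and then calls any of these methods it finds, so a sketch of a fail-fast connectivity check looks roughly like this:

    public class Database
    {
        // AssertConfigurationIsValid() builds this class and then invokes
        // any method decorated with [ValidationMethod]
        [ValidationMethod]
        public void CanConnect()
        {
            // open and close a connection here so a misconfigured
            // environment fails fast at startup
        }
    }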

For an example, this unit test with a missing IDevice configuration…

        [Test]
        public void assert_container_is_valid()
        {
            var container = new Container(x => {
                x.ForConcreteType<DeviceUserUser>()
                    .Configure.Singleton();
            });

            // Gonna blow up
            container.AssertConfigurationIsValid();
        }

…will blow up with this report.

Show me the Build Plan!

I saved the best for last.  At any time, you can interrogate a StructureMap container to see what the entire “build plan” for an Instance is going to be.  The build plan is going to tell you every single thing that StructureMap is going to do in order to build that particular Instance.  You can generate the build plan as either a shallow representation showing the immediate Instance and any inline dependencies, or a deep representation that recursively shows all of its dependencies and their dependencies.

This unit test:

        [Test]
        public void show_me_the_build_plan()
        {
            var container = new Container(x =>
            {
                x.For<IDevice>().DecorateAllWith<BadDecorator>();
                x.For<IDevice>().Use<ADevice>();

            });

            var shallow = container.Model
                .For<DeviceUserUser>()
                .Default
                .DescribeBuildPlan();

            Debug.WriteLine("This is the shallow representation");
            Debug.WriteLine(shallow);

            var deep = container.Model
                .For<DeviceUserUser>()
                .Default
                .DescribeBuildPlan(3);

            Debug.WriteLine("This is the recursive representation");
            Debug.WriteLine(deep);
        }

generates this representation.
