Category Archives: StructureMap

Gutting FubuMVC and Rebooting as “Jasper”

tl;dr – FubuMVC and FubuTransportation (the service bus tooling we built on top of FubuMVC) are getting a full reboot with the name “Jasper” on the new DNX platform. This blog post tries to explain why we’d do such a silly thing and describe our current thinking on the technical direction to start getting some feedback. Just for fun, I’m also describing a lot of functionality that I’ve been ripping out of FubuMVC in preparation for the reboot for folks that are interested in how web development has changed since FubuMVC was conceived in 2008-9.

My wife loves watching all the home remodeling shows on HGTV. One of her favorites is a show called Love It or List It. The premise of the show is that a couple who wants to move to a new house gets to choose between staying in their old home after it has been remodeled by one of the show’s stars, or selling the newly remodeled home in favor of a different house that the other star of the show finds for them on the market. Last year I said that I was giving up on FubuMVC development when it became clear that it was going nowhere and our community support was almost completely gone.

My shop has had some flirtations with other platforms, and like many shops we have been supplementing .Net development with Node.js work, but this is our current situation as I see it:

  1. We’ve got a pretty big portfolio of existing FubuMVC-based applications, and the idea of rewriting them to a new platform or even just a different .Net application framework or architecture is daunting
  2. We’re very happy with how the FubuTransportation service bus built on top of FubuMVC has worked out for us in production, but we would like it to be sitting on top of a technical foundation that isn’t “dead” like FubuMVC
  3. We’d love to be able to “Docker-ize” our applications and quite possibly move our production hosting and day-to-day development off of Windows
  4. We’ve got a huge investment and coupling in test automation and diagnostics tooling tied to FubuMVC and FubuTransportation that’s providing value
  5. I think many of us are generally positive about the new .Net tooling (DNX, Roslyn, CoreCLR) — except for the part where they didn’t eliminate strong naming in the new world order:(

Taking those factors into account, we’ve decided to “love it” instead of “list it” with what I’ve been calling a “Casino Royale style reboot” of the newly combined FubuMVC and FubuTransportation.

I’m working this week to hopefully finish up an intermediate FubuMVC 3.0 release that largely consolidates the codebase, streamlines the application bootstrapping, improves the diagnostics, and eliminates a whole lot of code and functionality that no longer matters. When we finally make the jump to DNX, FubuMVC/FubuTransportation is going to be rebranded as “Jasper.”


The Vision for Jasper

My personal hopes for Jasper are that we retain the best parts of FubuMVC, dramatically improve the performance and scalability of our applications, and solve the worst of the usability problems that FubuMVC and FubuTransportation have today. I’m also hoping that we end up with a decent foundation of technical documentation just for once. I’m not making any unrealistic goals for adoption beyond having enough community support to be viable.

We’re trying very hard to focus on what we consider to be our core competencies this time instead of trying to build everything ourselves. We’re going to fully embrace OWIN internally as a replacement for FubuMVC’s behavior model. We’re betting big on the future of OWIN servers, middleware, and community — even though I’ve been known to gripe about OWIN on Twitter from time to time. We’re also going to support integration and usage of some subset of ASP.Net MVC from within Jasper applications, with a catch. Some users have asked us to make Jasper an addon to ASP.Net MVC, but my strong opinion is that what we want to do with Jasper won’t work unless Jasper is in control of the middleware composition and routing.

Mainly though, we just need Jasper to provide enough benefits to justify the time we’re going to spend building it on work time;-)

What’s Changing from FubuMVC

  • We’re going to stop using the Routing module from ASP.Net in favor of a new routing engine for OWIN based on a trie
  • Dropping support for System.Web altogether. It’s OWIN or nothing baby.
  • Dropping most of the server side rendering support, probably including our Spark view engine support. More on this below.
  • The OWIN AppFunc is going to be the new behavior. We’re keeping the behavior graph model for specifying which middleware goes where, but this time we’re planning to use Roslyn to compile code at runtime for composing all the middleware for a single route or message type into one OWIN AppFunc. We have high hopes that doing this will lead to easier to understand exception stack traces and a significantly more efficient runtime pipeline than the current FubuMVC model. We’ll also support MidFunc, but it won’t be encouraged.
  • Part of adopting OWIN is that we’ll be going async by default in all routes and message handling chains. Users won’t be forced to code this way, but it’s a great way to wring out a lot more scalability and many other .Net tools are already headed in this direction.
  • Jasper needs to be much easier in cases where users need to drop down directly to HTTP manipulation or opt out of the conventional FubuMVC behavior
  • FubuMVC on Mono was an unrewarding time sink. I’m very hopeful that with Jasper and the new cross platform support for .Net that coding on OS X and hosting on Linux will be perfectly viable this time around.
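
Since the list above leans so heavily on the OWIN AppFunc idea, here is a small sketch of what “composing all the middleware for a single route into one AppFunc” can look like. This is purely illustrative and not Jasper’s actual API — the names are invented, and the real plan is to generate this composition with Roslyn rather than hand-rolled delegates:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// The standard OWIN application signature
using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;

public static class RouteCompiler
{
    // Flatten a chain of OWIN middleware (each one wraps the next AppFunc)
    // into a single AppFunc for one route or message type
    public static AppFunc Compose(AppFunc endpoint, params Func<AppFunc, AppFunc>[] middleware)
    {
        var app = endpoint;

        // Wrap from the inside out so middleware[0] runs first
        for (var i = middleware.Length - 1; i >= 0; i--)
        {
            app = middleware[i](app);
        }

        return app;
    }
}
```

The hope is that emitting this composition as real generated code makes an exception stack trace read like a single flat method instead of a deep stack of nested behavior objects.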

 

What Jasper is Keeping from FubuMVC

  • One of our taglines from FubuMVC was the “web framework that gets out of your way.” To that end, Jasper should have the least possible intrusion into your application — meaning that we’ll try hard to avoid cluttering up application code with fluent interfaces from the framework, mandatory base classes, marker interfaces, and the copious number of attributes that are all too common in many .Net development tools.
  • Keep FubuMVC’s Russian Doll Model and the “BehaviorGraph” configuration model that we use to compose pipelines of middleware and handlers per route or service bus message
  • Retain the “semantic logging” strategy we use within FubuMVC. I think it’s been very valuable for diagnostics purposes and frequently for testing automation support. The Glimpse guys are considering something similar for ASP.Net MVC that we might switch to later if that looks usable.
  • Continue supporting the built in diagnostics in FubuMVC/FT. These are getting some considerable improvement in the 3.0 release for performance tracking and offline viewing
  • Our current mechanisms for deriving url routes from endpoint actions and the reverse url resolution in FubuMVC today. As far as I know, FubuMVC is the only web framework on any platform that provides reverse url resolution for free without additional work on the user’s part.
  • The “one model in, one model out” architecture for expressing url endpoints — but for Jasper we’re going to support more patterns for cases where the one in, one out signature was annoying
  • The built in conventions that FubuMVC and FubuTransportation support today
  • Jasper will continue to support “meta-conventions” that allow users to create and use their own policies
  • The areas or slices modularity support that we have today with Bottles and FubuMVC, but this has already been simplified to only apply to server side code. Jasper is almost entirely getting out of the client side asset business.
  • Content negotiation, the authorization model, and the lightweight asset support from FubuMVC 2.0 will be optimized somewhat but mostly kept as is.
  • Definitely keep the strong-typed “Settings” pattern for application and framework configuration.
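
The “Settings” pattern in that last bullet is simple enough to sketch. Assume a hypothetical DiagnosticsSettings class (the type is invented for illustration); the framework binds raw configuration data onto it once at bootstrapping time and registers it with the container, so any handler can just demand it as a constructor argument:

```csharp
using StructureMap;

// A strong typed settings class: just a POCO, no base class, no attributes
public class DiagnosticsSettings
{
    public bool TraceEnabled { get; set; }
    public int MaxRequestsTracked { get; set; } = 200;
}

public class Program
{
    public static void Main()
    {
        // The framework binds configuration onto the settings object once,
        // at bootstrapping time...
        var settings = new DiagnosticsSettings { TraceEnabled = true };

        // ...then registers it as a singleton so anything resolved from the
        // container can take DiagnosticsSettings as a constructor argument
        var container = new Container(_ =>
        {
            _.ForSingletonOf<DiagnosticsSettings>().Use(settings);
        });
    }
}
```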

 

 

First, gut the existing code

Like I said in the beginning, my wife loves HGTV fix-it-up shows about remodeling houses. A lot of those episodes invariably include contractors tearing into an old house and finding all kinds of unexpected horrors lurking behind the drywall and ripping out outdated 70’s shag carpet. Likewise, I’ve spent the last month or so at work ripping a lot of 70’s-shag-carpet-type features and code out of FubuMVC. I’m ostensibly making an intermediate FubuMVC 3.0 release that we’ll use internally at work until next year, when Jasper is ready and the dust has settled enough on DNX, but I’ve also taken advantage of the time to clean as much junk out of the codebase as possible before transforming FubuMVC into Jasper.

The main goal of this release was to consolidate all the FubuMVC related code that is going to survive into Jasper into one single GitHub repository. As secondary goals, I’ve also streamlined the application bootstrapping, removed a net of about 10k lines of code, and I’ll be working this coming week on performance instrumentation inside FubuMVC’s diagnostics and some of the test automation support.

 

Consolidate and Simplify the Code Topology

FubuMVC’s ecosystem of add on projects and spun off tooling became massive and spread across ~75 GitHub repositories at its peak. FubuMVC had — in my obviously biased opinion — a very effective architecture for modularity that led us to get a little too slaphappy with splitting features into separate assemblies and nugets. Doing development across related repositories, though, turned out to be a huge source of friction for us, and no matter how much DNX may or may not improve that experience, we’re never going to try to do that again. In that vein, I’ve spent much of the past several weeks consolidating the codebase into many fewer libraries. Instead of just dropping assemblies into the application to auto-magically add new behavior or features to a FubuMVC application, those features now ride with the core library, but users need to explicitly opt into them. I liked the “just drop the assembly file in” plugin abilities, but others prefer the explicit code. I’m not sure I have a strong opinion right now, but fewer repositories, libraries, and nugets definitely makes my life easier as the maintainer.

For previous FubuMVC users, I combined:

  • FubuTransportation, FubuMVC.Authentication, FubuMVC.AntiForgery, FubuMVC.StructureMap, and FubuMVC.Localization into FubuMVC.Core
  • FubuMVC.Diagnostics was combined into FubuMVC.Core as part of the 2.1 release earlier this year
  • FubuPersistence and FubuTransportation.RavenDb were combined into FubuMVC.RavenDb
  • All the Serenity add ons were collapsed into Serenity itself
  • Some of Bottles was folded into FubuMVC and the rest thrown away. More on that later.

 

StructureMap Only For Now

FubuMVC, like many .Net frameworks, had some abstractions to allow the tool to be used with multiple IoC containers. I was never happy with our IoC registration abstraction model, but our biggest problem was that FubuMVC was built primarily against StructureMap and its abilities and assumptions about open generic types, enumerable types, and lifecycle management, and that made it very difficult for us to support other IoC containers. In addition to StructureMap, we fully supported Autofac and got *this* close with Windsor — but I’m not aware of anyone using any container besides StructureMap with FubuMVC in a production application.

As of a couple weeks ago, I demolished the IoC abstractions in favor of just having FubuMVC use StructureMap directly. That change allowed me to throw away a lot of code and unit tests, eliminate a couple assemblies, remove some nontrivial duplication in reflection handling code between StructureMap and FubuMVC, and simplify the application bootstrapping.

In the longer run, if we decide to once again support other IoC containers, my thought is that Jasper itself will use StructureMap’s registration model and we’ll just have that model mapped into whatever the other IoC container is at bootstrapping time. I know we could support Autofac and probably Windsor. Ninject and SimpleInjector are definitely out. I don’t have the slightest clue about a Unity adapter or the other 20 or so .Net IoC tools out there.

The new IoC integration in ASP.Net MVC6 is surprisingly similar to FubuMVC’s original IoC integration in many aspects and I think is very likely to run into the exact same problems that we did in FubuMVC (some of the IoC containers out there aren’t usable with MVC6 as it is and their project maintainers aren’t happy with the ASP.Net team). That’s a subject for another blog post on another day though.

 

Backing off of Server Side Rendering

I know not everyone is onboard the single page application train, but it’s been very obvious to me over the past 2-3 years that server side html generation is becoming much less important as more teams use tools like Angular.js or React.js to do client side development while using FubuMVC to mostly expose Json over Http endpoints. We had some very powerful features in FubuMVC for server side html generation, but the world has moved on and among others, these features have been removed:

  • Html conventions – FubuMVC supported user-defined conventions for generating forms, labels, editors, and displays based on the signature of view models built around the HtmlTags library. While I think that our html convention support was technically successful, it’s no longer commonly used by our teams and I’ve removed it completely from FubuMVC.Core. Jimmy Bogard has pulled most of the convention support into HtmlTags 3.0 such that you can use the html convention generation in projects that don’t use FubuMVC. I’ve been surprised by how well the new TagHelpers feature in MVC6 has been received by the .Net community. I feel like our old HtmlTags-based conventions were much more capable than TagHelpers, but I still think that the time for that kind of functionality has largely passed.
  • Content extensions — a model we had early on in FubuMVC to insert customer specific markup into existing views without having to change those view files. It was successful, but it’s no longer used and out it goes.

StructureMap in 2015/16

I’ve been absurdly bogged down most of this calendar year rewriting the Storyteller tool we use at work for acceptance and regression testing. Now that I’m past that crunch, I’m finally able to focus on other things like StructureMap, this blog, and the still unfinished StructureMap documentation. That being said, here’s what’s going on with StructureMap now and in the near future.

Documentation

StructureMap and I have earned a bad reputation over the years for being under-documented and having out of date documentation. As of about an hour ago, I finished converting the existing StructureMap 3.0 documentation site to the far better “living documentation” tooling I built as part of my Storyteller work earlier this year. There are still far too many holes in that site, but I’m hoping to knock out a couple topics a week until they’re complete. With the new tooling, I think it would be much easier for other folks to contribute to the documentation effort.

Strong Naming and StructureMap. Sigh.

My position on strong naming with StructureMap 3 and beyond is that I do not want the main packages to be strong named. I have said that I would consider supporting a signed version of StructureMap 3 in a parallel nuget if someone did the pull request for that support. Lo and behold, someone finally did that, so look for the structuremap-signed package if you absolutely have to have a strong named version of StructureMap. That being said, I still fervently recommend that you do not use the signed version unless you absolutely have to in your situation.

StructureMap 3.2 in 2015

I’m just about to start some work on some new features and internal improvements for StructureMap that will make up a 3.2 release. Using Semantic Versioning rules, there should be no breaking API changes from 3.* to 3.2. Right now, I think the major changes are:

  1. Optimize the type scanning and convention registration. This was the one big subsystem that I left essentially untouched in the push to 3.0 and sure enough, it’s not holding up well for some users. I have some ideas about how to improve the performance and usability of the type scanning that I have already started in a private branch.
  2. Optimize and tighten up the “TryGetInstance()” runtime behavior under heavy multi-threaded loads. I don’t use this feature myself and I try to discourage its usage overall, but all the ASP.Net frameworks (MVC & Web API) use it in their integration and that’s been problematic to a couple folks.
  3. Some new syntactical sugar to verify specific container registrations
  4. New types of Container wide conventions or policies that act against the entire container
  5. Take advantage of default parameter values
  6. Features to make StructureMap easier to configure from mix and match sources for highly modular architectures
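
As a point of reference for item 3, StructureMap 3 already ships with some container verification and diagnostic hooks that the proposed sugar would presumably build on. A minimal example using the existing API:

```csharp
using System;
using StructureMap;

public interface IWidget { }
public class AWidget : IWidget { }

public class Program
{
    public static void Main()
    {
        var container = new Container(_ =>
        {
            _.For<IWidget>().Use<AWidget>();
        });

        // Throws if any registration cannot actually be built
        container.AssertConfigurationIsValid();

        // Dumps a textual report of everything in the container
        Console.WriteLine(container.WhatDoIHave());
    }
}
```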

StructureMap 4 in 2016(?)

I *think* that StructureMap 4.0 is going to be all about the new stuff:

  • Use Roslyn runtime code generation in place of the current strategy of building and then compiling Expressions at runtime. I don’t know that this is going to result in faster code, but I am hopeful that it makes the guts of StructureMap’s internals more approachable. Really though, that’s because I want to use the Roslyn functionality on a proposed replacement for FubuMVC next year.
  • Maybe use Roslyn’s support for compiler symbols in place of the existing type scanning?
  • Support CoreCLR

Re-tooled the Codebase

I changed up the build automation tooling for StructureMap a couple weeks ago in an attempt to make the code easier to work with for more mainstream .Net developers.

I started from:

  • Rake (Ruby) as a build scripting tool
  • The old Ripple tool from the FubuMVC family of projects for Nuget support
  • The build script had to be executed at least once before working with the code to generate some required files and fetch all the necessary nuget requirements
  • Good ol’ fashioned NUnit for unit testing

The usage of Ruby for build automation has been off-putting to many .Net developers, and the vast majority of .Net developers seem to prefer being able to open Visual Studio and just go to work. Based on the twin desires to make the StructureMap build faster and easier for most .Net developers, the new build tooling changed to:

  • Use Paket as the nuget management tool with its auto-restore capability
  • After some conversations with the NancyFx team over the years, I stole their idea of using Rake but making it completely optional for potential contributors
  • Replaced NUnit with Fixie, a much better and faster testing library from some of the Headspring folks here in Austin

Long Lived Codebases: The Challenges

I did a talk at CodeMash 2015 called “Lessons Learned from a Long Lived Codebase” that I thought went very well and I promised to turn into a series of blog posts. I’m not exactly sure how many posts it’s going to be yet, but I’m going to try to get them all out by the end of January. This is the first of maybe 4-5 theoretical posts on my experience evolving and supporting the StructureMap codebase over the past 11-12 years.

Some Background

In 2002 the big corporate IT shop I was working in underwent a massive “Dilbert-esque” reorganization that effectively trapped me in a non-coding architect role that I hated. I could claim 3-4 years of development experience and had some significant technical successes under my belt in that short time, but I’d mostly worked with the old COM-based Windows DNA platform (VB6, MTS, ADO, MSXML, ASP) and Oracle technologies right as J2EE and the forthcoming .Net framework seemed certain to dominate enterprise software development for the foreseeable future.

I was afraid that I was in danger of being made obsolete in my new role. I looked for some kind of project I could do out in the open that I could use to both level up on the newer technologies and prove to potential employers that “yes, I can code.” Being a pretty heavy duty relational database kinda guy back then, I decided that I was going to build the greatest ORM tool the world had ever seen on the new .Net platform. I was going to call it “StructureMap” to reflect its purpose of mapping the database to object structures. I read white papers, doodled UML diagrams like crazy, and finally started writing some code — but got bogged down trying to write an over-engineered configuration and modularity layer that would effectively allow you to configure object graphs in Xml. No matter, I managed to land a job with ThoughtWorks (TW) and off I went to be a real developer again.

During the short time that I worked at ThoughtWorks, Martin Fowler published his paper about Dependency Injection and Inversion of Control Containers, and other folks at the company built an IoC container in Java called PicoContainer that was getting some buzz on internal message boards. I came to TW in hopes of being one of the cool kids too, so I dusted off the configuration code for my abandoned ORM tool and transformed it into an IoC library for .Net during my weekly flights between Austin and Chicago. StructureMap was put into a production application in early 2004 and publicly released on SourceForge in June of 2004 as the very first production ready IoC tool on the .Net platform (yes, StructureMap is actually older than Windsor or Spring.Net, even though they were much better known for many years).

Flash forward to today and there’s something like two dozen OSS IoC containers for .Net (all claiming to be a special snowflake that’s easier to use than the others while being mostly about the same as the others), at least three (Unity, MEF, and the original ObjectBuilder) from Microsoft itself, with yet another brand new one coming in the vNext platform. I’m still working with and on StructureMap all these years later after the very substantial improvements for 3.0 last year — but at this point very little remains unchanged from the early code. I’m not going to waste your time trying to sell you on StructureMap, especially since I’m going to spend so much time talking about the mistakes I’ve made during its development. This series is about the journey, not the tool itself.

What’s Changed around Me

Being 11 years old and counting, StructureMap has gone through a lot of churn as the technologies have changed and approaches have gone in and out of favor. If you maintain a big codebase over time, you’re very likely going to have to migrate it to newer versions of your dependencies, use completely different dependencies, or you’ll want to take advantage of newer programming language features. In no particular order:

  • StructureMap was originally written against .Net 1.1, but at the time of this post targets .Net 4.0 with the PCL compliance profile.
    • Newer elements of the .Net runtime like Task and Lazy<T> have simplified the code internals.
    • Lambdas as introduced in .Net 3.5 made a tremendous difference in the coding internals and had a big impact on the usage of the tool itself.
    • As I’ll discuss in a later post, the introduction of generics support into StructureMap 2.0 was like the world’s brightest spotlight shining on all the structural mistakes I made in the initial code structure of early StructureMap, but I’ll still claim that the introduction of generic types has made for huge improvements in StructureMap’s usability — and also one of the main reason why I think that the IoC tools in .Net are generally more usable than those in Java or Scala.
  • The build automation was originally done with NAnt, NUnit, and NMock. As my tolerance for Xml and coding ceremony decreased, StructureMap moved to using Rake and RhinoMocks. For various reasons, I’m looking to change the automation tooling yet again to modernize the StructureMap development experience.
  • StructureMap was originally hosted on SourceForge with Subversion source control. Releases were done in the byzantine fashion that SourceForge required way back then. Today, StructureMap is hosted on GitHub and distributed as Nuget packages. Nuget packages are generated as an artifact of each continuous integration build and manually promoted to Nuget.org whenever it’s time to do a public release. Nuget is an obvious improvement in distribution over manually created zip files. It is my opinion that GitHub is the single best thing to ever happen for Open Source Software development. StructureMap has received vastly more community contribution since moving to GitHub. I’m on record as being critical of the .Net community for being too passive and not being participatory in regards to .Net community tooling. I’m pleasantly surprised with how much help I’ve received from StructureMap users since the 3.0 release last year to fix bugs and fill in usability gaps.
  • The usage patterns and the architectures that folks build using StructureMap. In a later post I’ll do a deep dive on the evolution of the nested container feature.
  • Developer aesthetics and preferences, again, in a later post

 

Other People

Let’s face it, you and I are perfectly fine, but the “other” developers are the problem. In the particular case of a widely used library, you frequently find out that other developers use your tool in ways that you did not expect or anticipate. Frameworks that abstract the IoC container with some sort of adapter library have been some of the worst offenders in this regard.

The feedback I’ve gotten from user problems has led to many changes over the years:

  • All new features. The interception capabilities were originally to support AOP scenarios that I don’t generally use myself.
  • Changing the API to improve usability when verbiage is wrong
  • Lots and lots of work tweaking the internals of StructureMap as users describe architectural strategies that I would never think of, but do turn out to be useful — usually, but not always, involving open generic types in some fashion
  • New conventions and policies to remove repetitive code in the tool usage
  • Additional diagnostics to explain the outcome of the new conventions and policies from above
  • Adding more defensive programming checks to find potential problems faster. My attitude toward defensive programming is much more positive after supporting StructureMap over the years. This might apply more to configuration-intense tools like, say, an IoC tool.
  • A lot of work to improve exception messages (more on this later maybe)

 

One thing that does need to happen is for me to publish and maintain best practice recommendations for StructureMap. I have been upset with the developers of a popular .Net OSS tool who did, in my opinion, a wretched job of integrating StructureMap in their adapter library (to the point where I advise users of that framework to adopt a different IoC tool). Until I actually manage to publish the best practice advice that would avoid the very problems they caused in their StructureMap usage, those problems are probably on me. Trying to wean users off of using StructureMap as a static service locator, and off of being a little too extreme in applying a certain hexagonal architecture style, have been constant problems on the user group over the years.
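
To make the “static service locator” complaint concrete, here is the shape of the problem and the preferred alternative. The types are invented for illustration:

```csharp
using StructureMap;

public interface IOrderRepository { }
public class OrderRepository : IOrderRepository { }

// The anti-pattern: reaching out to a static service locator.
// The dependency is invisible in the class's contract, and the
// class is now coupled directly to StructureMap itself.
public class OrderServiceBad
{
    public void PlaceOrder()
    {
        var repository = ObjectFactory.GetInstance<IOrderRepository>();
        // ... work against repository ...
    }
}

// Preferred: declare the dependency openly and let the container
// do the resolution once, at the very top of the object graph.
public class OrderService
{
    private readonly IOrderRepository _repository;

    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public void PlaceOrder()
    {
        // ... work against _repository ...
    }
}
```

Not coincidentally, the static ObjectFactory facade was deprecated in StructureMap 3 for exactly this reason.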

I’m not sure why this is so, but I’ve learned over the years that the more vitriolic a user is being toward you online when they’re having trouble with your tool, the more likely it is that they themselves are just doing something very stupid that’s not necessarily a poor reflection on your tool. If you ever publish an OSS tool, keep that in mind before you make the mistake of opening a column in your Twitter client just to spot references to your project or a keyword search in StackOverflow. I’ve also learned that users who have uncovered very real problems in StructureMap can be reasonable and even helpful if you engage them as collaborators in fixing the issue instead of being defensive. As I said earlier about the introduction of GitHub, I have routinely gotten much more assistance from StructureMap users in reproducing, diagnosing, and fixing problems in StructureMap over the past year than I ever had before.

 

Pull, not Push for New Features

In early 2008 I was preparing the grand StructureMap 2.5 release as the purported “Python 3000” release that was going to fix all the usability and performance issues in StructureMap once and for all time (Jimmy Bogard dubbed it the Duke Nukem Forever release too, but the 3.0 release took even longer;)). At the same time, Microsoft was gearing up for not one, but two new IoC tools (Unity from P&P and MEF from a different team). I swore that I wasn’t going down without a fight as Microsoft stomped all over my OSS tool, so I kicked into high gear and started stuffing StructureMap with new features and usability improvements. Those new things roughly fell into two piles:

  • Features or usability improvements I made based on my experience with using StructureMap on real projects that I knew would remove some friction from day to day usage. These features introduced in the 2.5 release have largely survived until today and I’d declare that many of them were successful
  • Things that I just thought would be cool, but which I had no immediate usage in my own work. You’ve already called it, much of this work was unsuccessful and later removed because it was either in the way, confusing to use, easily done in other ways, or most especially, a pain in the neck for me to support online because it wasn’t well thought out in the first place.

You have to understand that any feature you introduce is effectively inventory that you have to support, document, and keep from breaking in future work. To reaffirm one of the things that the Lean Programming people have told us for years, it’s better to “pull” new features into your tool based on a demonstrated need and usage than it is to “push” a newly conceived feature in the hope that someone might find it useful later.

 

Yet to come…

I tend to struggle to complete these kinds of blog series, but I do have the presentation and all of the code samples, so maybe I’ll pull it off. I think that the candidates for the follow-up posts are something like:

  • A short discussion on backward compatibility
  • My documentation travails and how I’m trying to fix that
  • “Crimes against Computer Science” — the story of the nested container feature, how it went badly at first, and what I learned while fixing it in 3.0
  • “The Great Refactoring of Aught Eight”
  • API Usage Now and Then
  • Diagnostics and Exceptions

StructureMap 3 Documentation

Shockingly, my efforts to complete the documentation on StructureMap 3 have taken much, much longer than I had hoped — but there’s some real progress worth talking about. This time around, I’m adopting the idea of “living documentation” where the code samples are taken directly out of the code in the main GitHub repository at publishing time so that the documentation never gets out of sync with the code. For the most part, I’m using unit test code to demonstrate API usage in the documentation with the thinking there that the resulting documentation is much less ambiguous and again, cannot be out of sync with how the code actually works as long as those unit tests are passing.

If you’re curious, I’ve been using our (now abandoned) FubuDocs project with FubuMVC.CodeSnippets to author and publish the documentation. All the documentation is in GitHub in the StructureMap.Docs project (and yes, I certainly take pull requests).

The new documentation has moved to GitHub pages hosting at http://structuremap.github.io. I’m not sure what’s going to happen to the old structuremap.net website yet.


I’m committed to finishing the documentation, but I’m obviously not sure when it will be complete. I’d like to say it’s “done” before starting any significant new OSS project and I’m using that to force myself to finally finish;)

How We Do Strong Typed Configuration

TL;DR: I’ve used a form of “strong typed configuration” for the past 5-6 years that I think has some big advantages over the traditional approach to configuration in .Net. I still like this approach and I’m seeing something similar showing up in ASP.Net vNext as well. I’m NOT trying to sell you on any particular tool here, just the technique and concepts.

What’s Wrong with Traditional .Net Configuration?

About a decade ago I came late into a project where we were retrofitting some sort of new security framework onto a very large .Net codebase.* The codebase had no significant automated test coverage, so before we tried to monkey around with its infrastructure, I volunteered to create a set of characterization tests**. My first task was to stand up a big chunk of the application architecture on my own box with a local sql server database. My first and biggest challenge was dealing with all the configuration needs of that particular architecture (I want to say it was >75 different items in the appSettings collection).

As I recall, the specific problems with configuration were:

  1. It was difficult to easily know what configuration items any subset of the code (assemblies, classes, or subsystems) needed in order to run without painstakingly tracing the code
  2. It was somewhat difficult to understand how items in very large configuration files were consumed by the running code. Better naming conventions probably would have helped.
  3. There was no way to define configuration items on the fly in code. The only option was to change entries in the giant web.config Xml file, because the code that used the old System.Configuration namespace "pulled" its configuration items when it needed configuration. What I wanted to do was to "push" configuration to the code to change database connections or file paths in test scenarios, or really any time I just needed to repurpose the code. In a more general sense, I called this the Push, Don't Pull rule in my CodeBetter days.

From that moment on, I've been a big advocate of approaches that make it much easier both to trace configuration items to the code that consumes them and to make application code somehow declare what configuration it needs.

 

Strong Typed Configuration with “Settings” Classes 

For the past 5-6 years I've used an approach where configuration items like file paths, switches, URLs, or connection information are modeled as simple POCO classes suffixed with "Settings." As an example, in the FubuPersistence package we use to integrate RavenDb with FubuMVC, we have a simple class called RavenDbSettings that holds the basic information you would need to connect to a RavenDb database, partially shown below:

    public class RavenDbSettings
    {
        public string DataDirectory { get; set; }
        public bool RunInMemory { get; set; }
        public string Url { get; set; }
        public bool UseEmbeddedHttpServer { get; set; }


        [ConnectionString]
        public string ConnectionString { get; set; }

        // And some methods to create in-memory datastores
        // or connect to the specified external datastore
    }

Setting aside how that object is built up for the moment, the DocumentStoreBuilder class that needs the configuration above just gets this object through simple constructor injection like so: new DocumentStoreBuilder(new RavenDbSettings{}, ...);. The advantages of this approach for me are:

  • The consumer of the configuration information is no longer coupled to how that information is resolved or stored in any way. As long as it's given the RavenDbSettings object in its constructor, DocumentStoreBuilder can happily go about its business.
  • I’m a big fan of using constructor injection as a way to create traceability and insight into the dependencies of a class, and injecting the configuration makes the dependency on configuration from a class much more transparent than the older “pull” forms of configuration.
  • I think it’s easier to trace back and forth between the configuration items and the code that depends on that configuration. I also feel like it makes the code “declare” what configuration items it needs through the signature of the Settings classes

 

Serving up the Settings with an IoC Container

I’m sure you’ve already guessed that we just use StructureMap to inject the Settings objects into constructor functions. I know that many of you are going to have a visceral reaction to the usage of an IoC container, and while I actually do respect that opinion, it’s worked out very well for us in practice. Using StructureMap (I think most of the other IoC containers could do this as well) we get a couple big benefits in regards to default configuration and the ability to swap out configuration at runtime (mostly for testing).

Since the Settings classes are generally concrete classes with no-argument constructors, StructureMap can happily build them out for you even if StructureMap has no explicit registration for that type. That means that you can forgo any external configuration or StructureMap configuration and your code can still work as long as the default values of the Settings class are useful. To use the example of RavenDbSettings from the previous section, calling new RavenDbSettings() creates a configuration that will connect to a new embedded RavenDb database that stores its data to the file system in a folder called /data parallel to the project directory (you can see the code here).
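That default-value behavior is easy to sketch. Here's a minimal, hypothetical Settings class (invented for illustration, not the actual FubuPersistence code) being resolved by StructureMap with no registration whatsoever:

    // A hypothetical Settings class whose no-argument constructor
    // establishes usable defaults for local development
    public class FileStoreSettings
    {
        public FileStoreSettings()
        {
            Directory = "data"; // sensible default
        }

        public string Directory { get; set; }
        public bool ReadOnly { get; set; }
    }

    // StructureMap happily builds unregistered concrete types,
    // so this works with zero container configuration:
    var container = new Container();
    var settings = container.GetInstance<FileStoreSettings>();
    // settings.Directory is "data" and settings.ReadOnly is false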

The result of the design above is that a FubuMVC/FubuTransportation application was completely connected to a working RavenDb database by simply installing the FubuMVC.RavenDb nuget with zero additional configuration.

I demoed that several times at conferences last year and the audiences seemed to be very unimpressed and disinterested. Either it's not nearly as impressive as I thought it was, it's too much magic, I'm not a good presenter, or they don't remember what a PITA it used to be just to install and configure everything you needed to get a blank development database going. I still thought it was cool.

The other huge advantage to using an IoC container to deliver all the configuration to consumers is how easy that makes it to swap out configuration at runtime. Again going to the RavenDbSettings example, we can build out our entire application and swap out the RavenDb connection at will without digging into Xml or Json files of any kind. The main usage has been in testing to get a clean database per test when we do end to end testing, but it’s also been helpful in other ways.

 

So where does the Setting data come from?

Alright, so the actual data has to come from somewhere outside the codebase at some point (like every developer of my age I have a couple good war stories from development teams hard coding database connection strings directly into compiled code). We generally put the raw data into the normal appSettings key/value pairs with the naming convention “[SettingsClassName].[PropertyName].” The first time a Settings object is needed within StructureMap, we read the raw key/value data from the configuration file and use FubuCore’s model binding support to create the object and do all the type coercion necessary to create the Settings object. An early version of this approach was described by my former colleague Josh Flanagan way back in 2009. The actual mechanics are in the FubuMVC.StructureMap code in the main fubumvc repository.
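Conceptually, that binding is just reflection over the appSettings keys matching the "[SettingsClassName].[PropertyName]" convention. The sketch below is a simplified approximation of the idea, not the actual FubuCore model binding (which also handles deep objects, enumerables, and pluggable data sources); it assumes the usual System.Configuration and System.Linq usings:

    // Simplified sketch: bind "RavenDbSettings.Url" style appSettings
    // entries onto the matching properties of a new Settings object
    public static T BindSettings<T>() where T : new()
    {
        var settings = new T();
        var prefix = typeof(T).Name + ".";

        foreach (var key in ConfigurationManager.AppSettings.AllKeys)
        {
            if (!key.StartsWith(prefix)) continue;

            var property = typeof(T).GetProperty(key.Substring(prefix.Length));
            if (property == null) continue;

            var rawValue = ConfigurationManager.AppSettings[key];
            property.SetValue(settings, Convert.ChangeType(rawValue, property.PropertyType), null);
        }

        return settings;
    }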

Some other points:

  • The data source for the model binding in this configuration setup was pluggable, so you could use external files or anything else that could be exposed as key/value pairs. We generally just use the appSettings support now, but on a previous project we were able to centralize the configuration files to a common location for several related processes
  • The model binding is flexible enough to support deep objects, enumerable properties, and just about anything you would need to use in your Settings objects
  • We also supported a model where you could combine key/value information from multiple sources, but layer the data in precedence to enable overrides to the basic configuration. My goal with that support was to avoid making all customer or environment specific configuration be in the form of separate override files without having to duplicate so much boilerplate configuration. In this setup, when the Settings objects were bound to the raw data, profile or environment specific information just got a higher precedence than the default configuration.
  • As of FubuMVC 2.0, if you’re using the StructureMap integration, you no longer have to do anything to setup this settings provider infrastructure. If StructureMap encounters an unknown dependency that’s a concrete type suffixed by “Settings,” it will try to resolve it from the model binding via a custom StructureMap policy.

 

Programmatic Configuration in FubuMVC 

We also used the "Settings" configuration idea to programmatically specify configuration for features inside the application configuration by applying alterations to a Settings object, like this code from one of our active projects:

// Sets up a custom authorization rule on any diagnostic 
// pages
AlterSettings<DiagnosticsSettings>(x => {
	x.AuthorizationRights.Add(new AuthorizationCheckPolicy<InternalUserPolicy>());
});

// Directs FubuMVC to only look for client assets
// in the /public folder
AlterSettings<AssetSettings>(x =>
{
	x.Mode = SearchMode.PublicFolderOnly;
});

The DiagnosticsSettings and AssetSettings objects used above are injected into the application container as the application bootstraps, but only after the alterations shown above are applied. Behind the scenes, FubuMVC will first resolve the objects using any data found in the appSettings key/value pairs, then apply the alteration overrides in the code above. Using the alteration lambdas instead of just injecting the Settings objects directly allowed us to also embed settings overrides in external plugins, but ensure that the main application overrides always "win" in the case of conflicts.
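The mechanics behind that are straightforward to sketch (hypothetical code, not the actual FubuMVC internals): collect the alteration lambdas during bootstrapping and apply them only after the Settings object has been bound from the raw appSettings data, which is what guarantees that code overrides always win:

    public class SettingsAlterations
    {
        private readonly List<Action<object>> _alterations = new List<Action<object>>();

        // Called by AlterSettings<T>() during application configuration
        public void Add<T>(Action<T> alteration)
        {
            _alterations.Add(raw =>
            {
                if (raw is T) alteration((T)raw);
            });
        }

        // Called as the Settings object is injected into the container,
        // after it has already been bound from appSettings data
        public object ApplyTo(object settings)
        {
            foreach (var alteration in _alterations)
            {
                alteration(settings);
            }

            return settings;
        }
    }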

I’m still happy with how this has turned out in real usage and I’ve since noticed that ASP.Net vNext uses a remarkably similar mechanism to configure options in their IApplicationBuilder scheme (think SetupOptions<MvcOptions>(Action<MvcOptions>)). I’m interested to see if the ASP.Net team will try to exploit that capability to provide much better modularity than the older ASP.Net MVC & Web API frameworks.

 


* That short project was probably one of the strongest teams I’ve ever been a part of in terms of talented individuals, but it also spawned several of my favorite development horror stories (massive stored procedures, non-coding architects, the centralized architect team anti-pattern, harmful coding standards, you name it). In my career I’ve strangely seen little correlation between the technical strength of the development team (beyond basic competence anyway) and the resulting success of the project. Environment, the support or lack thereof from the business, and the simple wisdom of doing the project in the first place seem to be much more important.

** As an aside, that effort to create characterization tests as a crude regression test suite did pay off, and that test suite did find some regression errors after we started making the bigger changes. I think Feathers' playbook for legacy systems, where I got the inspiration for those characterization tests, is still very relevant some 10+ years later.

StructureMap 3.1

I pushed a new minor release version 3.1 of the StructureMap, StructureMap.Web, StructureMap.AutoMocking, StructureMap.AutoMocking.Moq, and StructureMap.AutoFactory packages to Nuget.org this morning. You can see a list of the closed issues in this release here.

Thank you to Matt Honeycutt, Marco Cordeiro, and Jimmy Bogard for their help in making this incremental release.

 

Future Plans

  • The documentation is still in flight and will probably be so for quite some time. What little is there so far is up at http://structuremap.github.io.
  • Xamarin support. StructureMap 3.0 is already built with PCL compliance and runs on WP8, so getting it to run on Xamarin runtimes should be a piece of cake, right? Right?
  • Continue to make bug fix releases as needed. I hate how many bugs have popped up since the 3.0 release, but at least it’s much easier to make incremental releases in the Nuget era than it was with Sourceforge back in the day.

I’m still holding the line on not strong naming StructureMap unless someone does the pull request to support multiple signed and unsigned versions of the Nuget. I’m starting to get asked about a signed version every couple weeks and I still don’t want to do that. You’re always welcome to just clone the repository and sign the code yourself.

 

Building with IContext.Root

A couple StructureMap users have asked over the years to support contextual resolution of injected loggers with tools like log4net or NLog. While StructureMap 2.5+ supported this pattern, some of the support got lost in the big restructuring work for 3.0 and this release brings it back.

Say you’re using a logging tool that allows you to specify different logging rules and mechanisms by namespaces, types, or assemblies. Your logging tool probably has some construct like the following code to build the right logger for a given type:

    public static class LogManager
    {
        // Give me a Logger with the correctly
        // configured logging rules and sources
        // matching the type I'm passing in
        public static Logger ForType(Type type)
        {
            return new Logger(type);
        } 
    }

Now, let’s say that you want StructureMap to inject a Logger into constructor arguments of the objects it’s going to build. If you want to create a Logger that’s suitable for the topmost concrete type being built by a service location request to StructureMap. You could use code similar to this:

    // IContext.RootType is new for 3.1
    var container = new Container(_ => {
        _.For<Logger>()
            .Use(c => LogManager.ForType(c.RootType))
            .AlwaysUnique();
    });

If you wanted to build the Logger individually to match each type in the object graph being created, use this code instead:

    // Resolve the logger for the type one level up
    container = new Container(_ => {
        _.For<Logger>().Use(c => LogManager.ForType(c.ParentType))
            .AlwaysUnique();
    });

The AlwaysUnique() lifecycle is important here to force StructureMap to create a new Logger instance every time one is necessary in the object graph to prevent the very first Logger created from being shared throughout the entire object graph. This is one of the very few use cases for the “unique” lifecycle.

 

Child Containers

New for StructureMap 3.0 are "child containers," which should not be confused with nested containers — and all of that would be much clearer if I'd ever get around to writing the big blog post on nested container behavior that I've promised a half dozen people this year. Child containers are meant for stateful client development where you might want to pop a child container for a region, pane, or specific view of the application to override some of the main application services while being able to gracefully fall back to the application container for everything else.

 

Better IEnumerable<T>/IList<T>/T[] Support

Since at least version 2.5, if StructureMap encounters constructor or setter arguments of IEnumerable<T>, IList<T>, or T[] on a concrete type where the dependency is not explicitly configured, those arguments will be fulfilled with an enumerable of all configured instances of the type T in the container, in the order in which they were registered. That was valuable in several usages within FubuMVC, but other folks want to resolve the enumeration types directly, or as Lazy<IList<T>> or Func<T[]>. StructureMap 3.1 now resolves enumeration types that are not explicitly configured by returning all the known configured instances of the type T. To make that concrete, see the acceptance tests for this behavior.
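To illustrate the enumerable behavior, here's a small sketch with invented sample types:

    public interface IWidget { }
    public class AWidget : IWidget { }
    public class BWidget : IWidget { }

    // WidgetUser never has to be registered; its array argument is
    // filled with all configured IWidget's in registration order
    public class WidgetUser
    {
        public readonly IWidget[] Widgets;

        public WidgetUser(IWidget[] widgets)
        {
            Widgets = widgets;
        }
    }

    var container = new Container(_ =>
    {
        _.For<IWidget>().Add<AWidget>();
        _.For<IWidget>().Add<BWidget>();
    });

    // user.Widgets holds an AWidget, then a BWidget
    var user = container.GetInstance<WidgetUser>();

    // New in 3.1: the enumeration types resolve directly as well
    var list = container.GetInstance<IList<IWidget>>();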

FubuMVC Lessons Learned — Strong Naming Woes and Workarounds

TL;DR:  .Net isn’t ready for the brave new world of componentization and smaller, more rapid updates quite yet, but I have some suggestions based on the development of FubuMVC.

To recap, I’m writing a series of posts about the FubuMVC community’s technical experiences over the past five or so years after deciding to give up on new development after the shortly forthcoming 2.0 version. So far, I’ve discussed…

  • Our usage of the Russian Doll model and where we fell in the balance between repetitive ceremonial code and using conventions for cleaner code
  • Command line bootstrapping, polyglot programming, and the value in standardizing codebase layout for the sake of tooling (I say yes, .Net community says no)
  • Lots of DevOps woes trying to develop across multiple repositories using Nuget, TeamCity, and our own Ripple tool.

…and a lot of commenters have repeatedly and quite accurately slammed the documentation (that’s shooting fish in a barrel) and got strangely upset over the fact that we used Rake for our own build automation even though FubuMVC itself had no direct coupling to Ruby.

 

Strong Naming is Hamstringing Modularity in .Net

If you follow me on Twitter, you probably know that I hate strong naming in .Net with a passion.  Some of you might be reading this and saying “it only takes a minute to set up strong naming on projects, I don’t see the big deal” and others are saying to yourselves that “gosh, I’ve never had any problem with strong naming, what’s all the teeth gnashing about?”

Consider this all too common occurrence:

  • Your application uses an OSS component called FancyLogging and you’re depending on the latest version 2.1.5.
  • You also use FancyServiceBus that depends on FancyLogging version 2.0.7.
  • You might also have a dependency on FancyIoC, which in turn depends on a much older FancyLogging 2.0.0.

Assuming that the authors of FancyLogging are following Semantic Versioning (more on this in a later post), you should be able to happily use the latest version of FancyLogging because there are no semantically breaking changes between it and the versions that FancyServiceBus and FancyIoC were compiled against. If these assemblies are strong named however, you just set yourself up for a whole lot of assembly version conflicts because .Net matches on the entire version number. At this point, you’re going to be spending some quality time with the Fusion Log Viewer (you want this in your toolbox anyway).
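The standard workaround is an assembly binding redirect in your config file telling the CLR to treat all the older references as the newer version, something like the fragment below (the publicKeyToken is a placeholder for this made-up FancyLogging library):

    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <!-- Redirect every older FancyLogging reference to 2.1.5 -->
            <assemblyIdentity name="FancyLogging" publicKeyToken="0123456789abcdef" culture="neutral" />
            <bindingRedirect oldVersion="0.0.0.0-2.1.5.0" newVersion="2.1.5.0" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>

Multiply that stanza by every strong named dependency that revisions quickly and you can see where the quality time with the Fusion Log Viewer comes from.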

Strong naming conflicts are a common issue when your dependencies improve or change rapidly, when you upgrade somewhat often, and especially when upstream dependencies like log4net, IoC containers, and Newtonsoft.Json are also used by your upstream dependencies. Right now I think this problem is felt much more by shops that depend more heavily on OSS tools that don't originate in Redmond, but Microsoft itself is very clearly aiming for a world where .Net itself is much more modular and the new, smaller libraries will release more often. Unless the .Net community addresses the flaws in strong naming and adopts more effective idioms for Nuget packaging, these strong naming conflicts are about to come to a mainstream .Net shop near you.

 

Strong Naming Woes and Workarounds

While I’ve never had any trouble whatsoever with using it, at one point a couple years ago assembly conflicts with Newtonsoft.Json was my single biggest problem in daily development.  Fortunately, I encounter very little trouble today due to Newtonsoft.Json’s strong naming.  Why was this one library such a huge headache and what can we learn from how myself and the .Net community as a whole alleviated most of the pain?

First off, Newtonsoft.Json was and is strong named.  It’s very commonly used in many of the other libraries that the projects I work with depend on for daily development, chief among them WebDriver and RavenDb.  I also use Newtonsoft.Json in a couple different fubumvc related projects (Bottles and Storyteller2). Newtonsoft.Json has historically been a very active project and releases often. Really due to its own success, Newtonsoft.Json became the poster child for strong naming hell.

Consider this situation as it was in the summer of 2012:

  • We depended upon WebDriver and RavenDb, both of which at that time had an external dependency upon Newtonsoft.Json.
  • WebDriver and RavenDb were both strong named themselves
  • Both WebDriver and RavenDb were releasing quite often and we frequently needed to pull in new versions of these tools to address issues and subsequent versions of these tools often changed their own dependency versions of Newtonsoft.Json
  • Our own Storyteller2 tool we used for end to end testing depended upon Newtonsoft.Json
  • Our Serenity library we used for web testing FubuMVC applications depends upon WebDriver and Storyteller2 and you guessed it, we were frequently improving Serenity itself as we went along
  • We would frequently get strong naming conflicts in our own code by installing the Newtonsoft.Json nuget to an additional project within the same solution and getting a more recent version than the other projects

I spent a lot of time that summer cursing how much time I was wasting just chasing down assembly version conflicts from Newtonsoft.Json and WebDriver and more recently from ManagedEsent. Things got much better by the end of that year though because:

  • WebDriver ilmerge’d Newtonsoft.Json so that it wasn’t exposed externally
  • WebDriver, partially at my urging, ditched strong naming — making it much easier for us
  • RavenDb ilmerge’d Newtonsoft.Json as well
  • We ilmerge’d Newtonsoft.Json into Storyteller2 and everywhere else we took that as a dependency after that
  • Newtonsoft.Json changed their versioning strategy so that they locked the assembly version but let the real Nuget version float within semantically versioned releases. Even though that does a lot to eliminate binding conflicts, I still dislike that strategy because it’s a potentially confusing lie to consumers. The very fact that this is the recommended approach by the Nuget team themselves as the least bad approach is a pretty good indication to me that strong naming needs to be permanently changed inside the CLR itself.
  • ManagedEsent was killing us because certain RavenDb nugets smuggle it in as an assembly reference, conflicting with our own declared dependency on ManagedEsent from within the LightningQueues library. Again, we beat this by ilmerge'ing our dependency on ManagedEsent into LightningQueues and problem solved.
  • With Ripple, we were able to enforce solution wide dependency versioning, meaning that when we installed a Nuget to a project in our solution with Ripple it would always try to first use the same Nuget version as the other projects in the solution.  That made a lot of headaches go away fast. The same solution wide versioning applied to Nuget updates with Ripple.

 

My advice for library publishers and consumers

I definitely feel that much of the following list is a series of compromises and workarounds, but such is life:

  • Be cautious consuming any strong named library that revisions often
  • Don’t apply strong naming at all to your published assemblies unless you have to
  • Don’t bundle in secondary assemblies into your Nuget packages that you don’t control — i.e., the loose ManagedEsent assembly in RavenDb packages problem or this issue in GitHub for Ripple
  • Prefer libraries that aren’t strong named if possible (e.g., why we choose NLog over log4net now)
  • Privately ilmerge your dependencies into your libraries when possible, which I'll freely admit is a compromise that can easily cause you other problems later and some clumsiness in your build scripts. Do make sure that your unit and integration tests run against the ilmerge'd copy of your assembly in continuous integration builds for best results
  • Do the Newtonsoft.Json versioning trick where the assembly version doesn’t change across releases — even though I hate this idea on principle

 

Rip out the Strong Naming?

We never actually did this (yet), but it’s apparently very possible to rip strong naming out of .Net assemblies using a tool like Mono.Cecil. We wanted to steal an idea from Sebastien Lambla to build a feature into Ripple where it would remove the signing out of assemblies as part of the Ripple package restore feature. If I do stay involved in .Net development and the fine folks in Redmond don’t fix strong naming in the next release of .Net, I’ll go back and finally build that feature into Ripple.

My Approach to Strong Naming 

We never signed any of the FubuMVC related assemblies. FubuMVC itself was never a problem because it’s a high level dependency that was only used by the kind of OSS friendly shops that generally don’t care about strong naming. StructureMap on the other hand is a foundational type of library that’s much more frequently used in a larger variety of shops and it had been signed in the releases from (I think) 2.0 in 2007 until the previous 2.6.4 release in 2012. I still decided to tear out the strong naming as part of the big StructureMap 3.0 release with the thinking that I’d support a parallel signed release at some point if there was any demand for strong naming — preferably if the people making those demands for strong naming would be kind enough to submit the pull request for the fancier build automation to support that. I can’t tell you yet if this will work out, and judging from other projects, it won’t.

 

What about Security!  Surely you need Strong Naming!

For all of you saying “I need strong naming for security!”, just remember that many OSS projects commit their keys into source control.  I think that if you really want to certify that a signed assembly represents exactly the code you think it is, you probably need to compile it yourself from a forked and tagged repository that you control.  I think that the signed assemblies as security feature of .Net is very analogous to the checked exceptions feature in the Java language. I.e., something that its proponents think is very important, a source of extra work on the part of users, and a feature that isn’t felt to be important by any other development community.

To balance out my comments here, last week there was a thread on a GitHub issue for OctoKit about whether or not they should sign their released assemblies that generated a lot more pro-assembly signing sentiment than you’ll find here.  Most of the pro-strong naming comments seem to be more about giving users what they want rather than a discussion of whether or not strong naming adds any real value but hey, you’re supposed to make your users happy.

 

What about automatic redirects?

But Jeremy, doesn't Nuget write the assembly redirects for you? Our experience was that Nuget didn't get the assembly redirects right as often as not, and it still required manual intervention, especially in cases where we were using config files that varied from the App.config/Web.config norm. There is some new automatic redirect functionality in .Net 4.5.1 that should help, but I still think that plenty of issues will leak through, and this is just a temporary bandaid until the CLR team makes a more permanent fix. I think what I'm saying about the automatic redirects is that I grew up in Missouri and you'll just have to show me that it's going to work.

 

My wish for .Net vNext 

I would like to see the CLR team build Semantic Versioning directly into the CLR assembly binding so that strong named binding isn't quite so finicky: the CLR could happily load version 3.0.3 when that's the version present, even though other assemblies declare a dependency on version 3.0.1, instead of matching on exactly version 3.0.1 only. I'd like to see assembly redirect declarations in config files go away entirely. I think the attempts to build automatic redirects into VS2013 are a decent temporary patch, but the real answer is going to have to come at the CLR level.

 

Next time…. 

So the DevOps topics of strong naming, versioning, and continuous integration across multiple repositories are taking a lot more verbiage to cover than I anticipated, and long time readers of mine know that I don't really do "short" very well. In following posts I'll talk about why I think Semantic Versioning is so important, more about how Ripple solved some of our Nuget problems, recommendations for how to improve Nuget, and a specific post on doing branch per feature across multiple repositories.

OSS Bugs and the Participatory Community

I pushed the official StructureMap 3.0 release a couple weeks ago. Since this is a full point release and comes somewhat close to being a full rewrite of the internals, there's inevitably been a handful of bugs reported already. While I'd love to say there were no bugs at all, I'd like to highlight a trend that's quite different from what supporting StructureMap was like just a few years ago: I've been getting failing unit tests in GitHub or messages on the list from users that demonstrate exactly what they're trying to do and how things are failing for them — and I'd love to see that trend continue (as long as there really are bugs).

One of the issues that was reported a couple times early on was an issue with setter injection policies.  After the initial reports, I looked at my unit tests for the functionality and they were all passing, so I was largely shrugging my shoulders — until a user gave me the exact reproduction steps in a failing test on the GitHub issue showing me a combination of inputs and usage steps that brought out the problem.  Once I had that failing test in my grubby little hands, fixing the issue was a simple one line fix.  I guess my point here is that I’m seeing more and more StructureMap users jumping in and participating much more in how issues get fixed, where the exact problem is, and how it should be fixed and that’s making StructureMap work a lot less stressful, more enjoyable for me, and bugs are getting addressed much faster than in the past.

 

Pin down problems with new tests

An old piece of lore from Extreme Programming was to always add a failing test to your testing library before you fix a reported bug to ensure that the bug fix stays fixed.  Regression bugs are a serious source of overhead and waste in software development, and anything that prevents them from reoccurring is probably worth doing in my book.  If you look at the StructureMap codebase, you’ll see a namespace for bug regression tests that we’ve collected over the years.
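As a hypothetical example of what one of those pinned regression tests can look like (invented issue number and types, and assuming StructureMap 3's setter injection policy API):

    public interface IGateway { }
    public class StubGateway : IGateway { }

    public class MessageHandler
    {
        // Filled by a setter injection policy rather than the constructor
        public IGateway Gateway { get; set; }
    }

    // Lives permanently in a Bugs namespace so the fix can never
    // silently regress in a later release
    [TestFixture]
    public class Bug_999_setter_policy_regression
    {
        [Test]
        public void setter_policy_fills_the_gateway_property()
        {
            var container = new Container(_ =>
            {
                _.Policies.FillAllPropertiesOfType<IGateway>();
                _.For<IGateway>().Use<StubGateway>();
            });

            var handler = container.GetInstance<MessageHandler>();
            Assert.IsInstanceOf<StubGateway>(handler.Gateway);
        }
    }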

 

The Participatory Community and .Net

In a way, now might be the golden age of open source development.  GitHub in particular supports such a more collaborative workflow than the older hosting options ever did. Nuget, for all the flaws that I complain about, makes it so much easier to release and distribute new releases.

In the same vein, even Microsoft of all people is trying to encourage an OSS workflow with their own tools and allowing .Net community members to jump in and contribute to their tools. I think that’s great, but it only matters if more folks in the greater .Net community will participate in the OSS projects.  Today I think that there’s a little too much passivity overall in the .Net community.  After all, our tools are largely written by the official .Net teams inside of Redmond, with OSS tools largely consigned to the fringes.  Most of us probably don’t feel like we can exert any control over our tooling, but every indication I see is that Microsoft itself actually wants to change that with more open development processes as they host more of their tools in GitHub or CodePlex and adopt out of band release cycles happening more often than once every couple years.

My case in point is an issue about Nuget usage that came up in a user list a couple of weeks back, which I think is emblematic of how the .Net community needs to change to make OSS matter.  The author was asking the Nuget team to do something about Nuget's package restore feature to fix the accumulation of outdated Nuget packages in a codebase.  No specific recommendations, just asking the Nuget team to fix it.  While I think the Nuget team is perfectly receptive to addressing that down the road, the fubu community has already solved that very technical issue with our own open source Ripple tool, which we use as a Nuget workflow tool.  Moreover, even if the author of that post didn't want to use Ripple, he could get a solution for his problem a lot faster by contributing a pull request to fix Nuget's package restore himself rather than waiting for Microsoft to get around to it. My point here is that the .Net community isn't using its full potential because we're collectively sitting back and waiting for a finite number of folks on the Gu's teams to fix too many of our problems instead of jumping in and collectively doing it ourselves.

Participating doesn’t have to mean taking on big features and issuing pull requests of outstanding technical merit. It can also mean commenting on GitHub issues, providing specific feedback to the authors, and doing what I think of as “sandpaper pull requests” — small contributions that clear up little usability issues in a tool.  It’s not a huge thing, but I really appreciate how some of my coworkers have been improving exception messages and logging when they find little issues in their own usage of the FubuMVC related projects.  That kind of thing helps a great deal because it’s almost impossible for me to foresee every potential usage or source of confusion.

We obviously use a lot of the FubuMVC family of tools at my workplace, and something I’ve tried to communicate to my colleagues is that they never have to live with usability problems in any of those frameworks or libraries because we can happily change those tools to improve their development experience (whenever it’s worth the effort to do so of course).  That’s a huge shift in how many developers think about their tools, but given the choice to be empowered and envision a better experience versus just accepting what you get, wouldn’t you like to have that control?

 

I wish MS would do even more in the open

Of course, I also think it would help if Microsoft did even more of their own development in public. Case in point: I’m actually pretty positive about the technical work the ASP.Net team is describing for forthcoming versions, but only the chosen ASP Insiders and MVP types are seeing any of that work (I’m not going to get myself yelled at for NDA violations, so don’t even ask for specifics).  They might get a lot more useful involvement from the community if they could do that work in the open before the basic approach was completely baked in.

 

 

 

StructureMap 3.0 is Live

At long last, I pushed a StructureMap 3.0 nuget to the public feed today, nearly a full decade after its very first release on SourceForge as the first production ready IoC container in .Net.   I’d like to personally thank Frank Quednau for all his work in finally making StructureMap PCL compliant.

While this release doesn’t add a lot of new features, it’s a big step forward for usability and performance, and I believe that StructureMap 3.0 will greatly improve the developer experience.  Most importantly, I think the work we’ve done on the 3.0 release has fixed all of the egregious internal flaws that have bothered me for years, and *I* feel very good about the shape of the code.

 

What’s Different and/or Improved?

  • The diagnostics and exception messages are much more useful
  • The registration DSL has been greatly streamlined with a hard focus on consistency throughout the API
  • The core library is now PCL compliant and targets .Net 4.0.  So far SM3 has been successfully tested on WP8
  • I have removed strong naming from the public packages to make the world a better place.  I’m perfectly open to creating a parallel signed version of the Nuget, but I’m holding out for a pull request on that one:)
  • Nested container performance and functionality is vastly improved (100X performance in bigger applications!)
  • Xml configuration and the ancient attribute based configuration have been removed.
  • Interception has been completely rewritten with a much better mechanism for applying decorators (a big gripe of mine from 2.5+)
  • Resolving large object graphs is faster
  • The Profile support was completely rewritten and more effective now
  • Child containers (think client specific or feature specific containers)
  • Improvements in the usage of open generics registration for Jimmy Bogard
  • Constructor function selection and lifecycle configuration can be done per Instance (like every other IoC container in the world except for SM <3.0 😦 )
  • Anything that touches ASP.Net HttpContext has been removed to a separate StructureMap.Web nuget.
  • Conventional registration is more powerful now that the configuration model is streamlined and actually useful as a semantic model
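As a quick sketch of the new per-Instance lifecycle and constructor selection (this is my reading of the 3.0 registration DSL, using placeholder types, so treat it as an approximation):

```csharp
var container = new Container(x =>
{
    // In 3.0, the lifecycle can be configured per Instance
    // rather than only per plugin type
    x.For<IDevice>().Use<ADevice>().Singleton();
    x.For<IDevice>().Add<BDevice>().Transient();

    // Constructor function selection can also be done per Instance
    x.For<IDevice>().Add<CDevice>()
        .SelectConstructor(() => new CDevice());
});
```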

 

 

Still to Come:

I’m still working on a new version of the StructureMap website (published with FubuDocs!), but it’s still a work in progress.  Since I desperately want to reduce the time and effort I spend on supporting StructureMap, look for it soon-ish (for real this time).

Someday I’d like to get around to updating my old QCon ’08 talk about lessons learned from a long-lived codebase with all the additional lessons from the past 6 years.

StructureMap 3 is gonna tell you what’s wrong and where it hurts

tl;dr:  StructureMap 3 introduces some cool new diagnostics, improves the old diagnostics, and makes the exception messages a lot better.  If nothing else, scroll to the very bottom to see the new “Build Plan” visualization that I’m going to claim is unmatched in any other IoC container.

I’ve had several goals in mind with the work on the shortly forthcoming StructureMap 3.0 release: make it run FubuMVC/FubuTransportation applications faster, remove some clumsy limitations, make the registration DSL consistent, and make the StructureMap exception messages and diagnostic tools completely awesome so that users have a much better experience (and so I don’t have to spend so much time answering questions in the user list).  To that last end, I’ve invested a lot of energy into improving the diagnostic abilities that StructureMap exposes and adding a lot more explanatory information to exceptions when they do happen.

First, let’s say that we have these simple classes and interfaces that we want to configure in a StructureMap Container:

    public interface IDevice{}
    public class ADevice : IDevice{}
    public class BDevice : IDevice{}
    public class CDevice : IDevice{}
    public class DefaultDevice : IDevice{}

    public class DeviceUser
    {
        // Depends on IDevice
        public DeviceUser(IDevice device)
        {
        }
    }

    public class DeviceUserUser
    {
        // Depends on DeviceUser, which depends
        // on IDevice
        public DeviceUserUser(DeviceUser user)
        {
        }
    }

    public class BadDecorator : IDevice
    {
        public BadDecorator(IDevice inner)
        {
            throw new DivideByZeroException("No can do!");
        }
    }

Contextual Exceptions

Originally, StructureMap used the System.Reflection.Emit classes to create dynamic assemblies on the fly to call constructor functions and setter properties for better performance over plain reflection.  Almost by accident, those generated classes made for a decently revealing stack trace when things went wrong.  When I switched StructureMap to dynamically generated Expressions, I got a much easier model to work with inside the StructureMap code, but the stack trace on runtime exceptions became effectively worthless because it was nothing but a huge series of nonsensical Lambda classes.

As part of the effort for StructureMap 3, we’ve made the Expression building much, much more sophisticated so that it creates a contextual stack trace as part of the StructureMapException message, explaining what the container was trying to do when it blew up and how it got there.  From inner to outer, the contextual stack can tell you things like:

  1. The signature of any constructor function running
  2. Setter properties being called
  3. Lambda expressions or Func’s getting called (you have to supply the description yourself for the Func, but SM can use an Expression to generate a description)
  4. Decorators
  5. Activation interceptors
  6. Which Instance is being constructed including the description and any explicit name
  7. The lifecycle (scoping like Singleton/Transient/etc.) being used to retrieve a dependency or the root Instance
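For item 3, a lambda registration can carry its own description so the contextual stack stays readable. A sketch, assuming the description overload in the 3.0 registration DSL:

```csharp
var container = new Container(x =>
{
    // Supply a human readable description along with the Func so the
    // contextual stack trace has something better to show than a Lambda
    x.For<IDevice>().Use("build ADevice by hand", c => new ADevice());
});
```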

So now, let’s say that we have this container configuration that experienced StructureMap users know is going to fail when we try to fetch the DeviceUserUser object:

        [Test]
        public void no_configuration_at_all()
        {
            var container = new Container(x => {
                // Notice that there's no default
                // for IDevice
            });

            // Gonna blow up
            container.GetInstance<DeviceUserUser>();
        }

will give you this exception message telling you that there is no configuration at all for IDevice.

One of the things that trips up StructureMap users is that when there are multiple registrations for the same plugin type (the type you’re asking for), StructureMap has to be explicitly told which one is the default (whereas some other containers will give you the first one registered and others will give you the last one in).  In this case:

        [Test]
        public void multiple_registrations_but_no_default()
        {
            var container = new Container(x => {
                // Two registrations for IDevice, but neither
                // is explicitly marked as the default
                x.For<IDevice>().Add<ADevice>();
                x.For<IDevice>().Add<BDevice>();
            });

            // Gonna blow up
            container.GetInstance<DeviceUserUser>();
        }

Running the NUnit test will give you an exception with this exception message (in Gist).
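The fix is simply to tell StructureMap which registration is the default, typically by making one of them a Use() instead of an Add():

```csharp
var container = new Container(x =>
{
    // Use() marks the default; Add() registers
    // additional named instances
    x.For<IDevice>().Use<ADevice>();
    x.For<IDevice>().Add<BDevice>();
});

// Resolves cleanly now because ADevice is the explicit default
container.GetInstance<DeviceUserUser>();
```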

One last example: say you get a runtime exception in the constructor function of a decorating type.  That failure is well away from the obvious path, so let’s see what StructureMap will tell us now.  Running this test:

        [Test]
        public void decorator_blows_up()
        {
            var container = new Container(x => {
                x.For<IDevice>().DecorateAllWith<BadDecorator>();
                x.For<IDevice>().Use<ADevice>();
            });

            // Gonna blow up because the decorator
            // on IDevice blows up
            container.GetInstance<DeviceUserUser>();
        }

generates this exception.

Container.WhatDoIHave()

StructureMap has had a textual report of its configuration for quite a while, but the WhatDoIHave() feature gets some better formatting and the ability to filter the results by plugin type, assembly, or namespace to get you to the configuration that you want when you want it.

This unit test:

        [Test]
        public void what_do_I_have()
        {
            var container = new Container(x => {
                x.For<IDevice>().AddInstances(o => {
                    o.Type<ADevice>().Named("A");
                    o.Type<BDevice>().Named("B").LifecycleIs<ThreadLocalStorageLifecycle>();
                    o.Type<CDevice>().Named("C").Singleton();
                });

                x.For<IDevice>().UseIfNone<DefaultDevice>();
            });

            Debug.WriteLine(container.WhatDoIHave());

            Debug.WriteLine(container.WhatDoIHave(pluginType:typeof(IDevice)));
        }

will generate this output.

The WhatDoIHave() report will list each PluginType matching the filter and all the Instances for that PluginType, including a description, the lifecycle, and any explicit name.  This report will also tell you about the “on missing named Instance” and the new “fallback” Instance for a PluginType if one exists.

It’s not shown in this blog post, but all of the information that feeds the WhatDoIHave() report is queryable from the Container.Model property.
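A sketch of poking at Container.Model directly (member names here are from my reading of the 3.0 model, so treat this as an approximation):

```csharp
// The same data that feeds WhatDoIHave(), queryable in code
var modelFor = container.Model.For<IDevice>();

Debug.WriteLine(modelFor.Default.Description);

foreach (var instance in modelFor.Instances)
{
    // each instance reference exposes the explicit name and
    // description shown in the WhatDoIHave() report
    Debug.WriteLine("{0}: {1}", instance.Name, instance.Description);
}
```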

Container.AssertConfigurationIsValid()

At application startup time, you can verify that your StructureMap container is not missing any required configuration and generally run environment tests with the Container.AssertConfigurationIsValid() method.  If anything is wrong, this method throws an exception with a report of all the problems it found (build exceptions, missing primitive arguments, undeterminable service dependencies, etc.).
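The environment tests mentioned above are just methods on your configured types marked with StructureMap's [ValidationMethod] attribute (the IDatabase/Database types here are hypothetical):

```csharp
public class Database : IDatabase
{
    // AssertConfigurationIsValid() builds every configured Instance
    // and then calls any [ValidationMethod] methods it finds, making
    // this a handy place for environment checks like connectivity
    [ValidationMethod]
    public void TryToConnect()
    {
        // open and close a connection here; throw to fail the assert
    }
}
```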

For an example, this unit test with a missing IDevice configuration…

        [Test]
        public void assert_container_is_valid()
        {
            var container = new Container(x => {
                x.ForConcreteType<DeviceUserUser>()
                    .Configure.Singleton();
            });

            // Gonna blow up
            container.AssertConfigurationIsValid();
        }

…will blow up with this report.

Show me the Build Plan!

I saved the best for last.  At any time, you can interrogate a StructureMap container to see what the entire “build plan” for an Instance is going to be.  The build plan is going to tell you every single thing that StructureMap is going to do in order to build that particular Instance.  You can generate the build plan as either a shallow representation showing the immediate Instance and any inline dependencies, or a deep representation that recursively shows all of its dependencies and their dependencies.

This unit test:

        [Test]
        public void show_me_the_build_plan()
        {
            var container = new Container(x =>
            {
                x.For<IDevice>().DecorateAllWith<BadDecorator>();
                x.For<IDevice>().Use<ADevice>();

            });

            var shallow = container.Model
                .For<DeviceUserUser>()
                .Default
                .DescribeBuildPlan();

            Debug.WriteLine("This is the shallow representation");
            Debug.WriteLine(shallow);

            var deep = container.Model
                .For<DeviceUserUser>()
                .Default
                .DescribeBuildPlan(3);

            Debug.WriteLine("This is the recursive representation");
            Debug.WriteLine(deep);
        }

generates this representation.