Adventures in Custom Testing Infrastructure

tl;dr: Sometimes the upfront cost of writing custom testing infrastructure pays off with easier development later

 

Quick Feedback Cycles are Key

It’d be nice if someday I could write all my code perfectly in both structure and function the first time through, but for now I have to rely on feedback mechanisms to tell me when the code isn’t working correctly. I feel most productive when I have the tightest feedback cycle between making a change in code and knowing how it’s actually working — and by “tight” I mean both the time it takes to set up the feedback mechanism and how long the feedback cycle itself takes.

While I definitely like quick-twitch feedback tools like REPLs or auto-reloading/refreshing web tools like our own fubu run or Mimosa.js’s “watch” command, my primary feedback mechanism for code-centric tasks is usually automated tests. It helps when the tests are mechanically easy to write and run quickly enough that you can get into a nice “red/green/refactor” cycle. For whatever reason, I’ve hit several problem domains in the last couple of years where it was laborious and time-consuming to set up the preconditions and testing inputs and also to measure and assert on the expected outcomes.

 

Maybe Invest in Some Custom Testing Infrastructure?

In some cases I knew right away that testing a feature was going to be a problem, so I started by asking myself “how do I wish I could express the test setup and assertions?” If it seems feasible, I’ll write a custom ObjectMother if that’s possible, or Test Data Builders for the data setup in more complex cases. I’ve occasionally resorted to building little interpreters that read text and create data structures or files (I do this more often for hierarchical data than anything else, I think) or perform assertions on the final state.
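
If you haven’t run across those patterns before, here’s a tiny, generic illustration of an ObjectMother sitting on top of a Test Data Builder. The Invoice type and the values here are made up purely for this example:

    // A minimal illustration of the Test Data Builder and ObjectMother patterns.
    // The Invoice type and the canned values are hypothetical.
    public class Invoice
    {
        public string Status { get; set; }
        public decimal Amount { get; set; }
    }

    public class InvoiceBuilder
    {
        private string _status = "Open";
        private decimal _amount = 0m;

        public InvoiceBuilder WithStatus(string status) { _status = status; return this; }
        public InvoiceBuilder WithAmount(decimal amount) { _amount = amount; return this; }

        public Invoice Build()
        {
            return new Invoice { Status = _status, Amount = _amount };
        }
    }

    // The ObjectMother just canonicalizes a handful of well-known test cases
    public static class InvoiceMother
    {
        public static Invoice Paid()
        {
            return new InvoiceBuilder().WithStatus("Paid").WithAmount(100m).Build();
        }
    }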

You can see an example of this in my old Storyteller2 codebase. Storyteller is a tool for automated acceptance tests, and its UI includes a tree view pane with the inevitable n-deep hierarchy of tests organized by suites, like:

Top Level Suite
  - Suite 1
    - Suite 2
    - Suite 3
      - Test 1
      - Test 2

In the course of building the Storyteller client, I needed to write a series of tests on the tree view state that had to start with a known hierarchy of suites and test files as inputs. After performing actions like filtering or receiving state updates within the UI, I needed to assert on the expected display in this test explorer pane (which tests and suites were visible, and whether they were marked as running, failed, successful, or unknown).

First, to deal with the setup of the hierarchical data I created a little custom class that read flat text data and turned that into the desired hierarchy:

            hierarchy =
                StoryTeller.Testing.DataMother.BuildHierarchy(
                    @"
t1,Success
t2,Failure
t3,Success
s1/t4,Success
s1/t5,Success
s1/t6,Failure
s1/s2/t7,Success
s1/s2/t7,Success
");

Then in the “assertion” part of the test I created a custom specification class that could again read its expectations expressed as flat text and assert that the resulting tree view exactly matched the specified state:

        [Test]
        public void the_child_nodes_are_constructed_with_the_empty_suite()
        {
            var spec =
                new TreeNodeSpecification(
                    @"
suite:Empty
suite:s1
test:s1/t4
test:s1/t5
test:s1/t6
test:t1
test:t2
test:t3
");

            spec.AssertMatch(view.TestNode);
        }

As I recall, writing the simple text parsing classes just to streamline the expression of the automated tests made it pretty easy to add new behavior quickly. In this case, the upfront time investment in the custom testing infrastructure paid off.
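
For illustration, a parser along those lines might look something like the following minimal sketch. The SuiteNode and TestNode classes are hypothetical stand-ins rather than Storyteller’s actual model:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical model classes, not Storyteller's real ones
    public class TestNode
    {
        public TestNode(string name, string result) { Name = name; Result = result; }
        public string Name { get; private set; }
        public string Result { get; private set; }
    }

    public class SuiteNode
    {
        private readonly IList<SuiteNode> _suites = new List<SuiteNode>();
        private readonly IList<TestNode> _tests = new List<TestNode>();

        public SuiteNode(string name) { Name = name; }
        public string Name { get; private set; }

        public SuiteNode FindOrCreateSuite(string name)
        {
            var child = _suites.FirstOrDefault(x => x.Name == name);
            if (child == null)
            {
                child = new SuiteNode(name);
                _suites.Add(child);
            }
            return child;
        }

        public void AddTest(TestNode test) { _tests.Add(test); }
    }

    public static class HierarchyParser
    {
        // Each input line looks like "s1/s2/t7,Success"
        public static SuiteNode BuildHierarchy(string text)
        {
            var root = new SuiteNode("Top Level Suite");
            var lines = text.Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries);

            foreach (var line in lines.Select(x => x.Trim()).Where(x => x.Length > 0))
            {
                var parts = line.Split(',');
                var path = parts[0].Split('/');

                // Walk (or create) the suite path, then add the test at the leaf
                var suite = root;
                for (var i = 0; i < path.Length - 1; i++)
                {
                    suite = suite.FindOrCreateSuite(path[i]);
                }

                suite.AddTest(new TestNode(path[path.Length - 1], parts[1].Trim()));
            }

            return root;
        }
    }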

 

FubuMVC’s View Engine Support

A couple of months ago I finally got to carve off some time to overhaul the view engine support code in FubuMVC. My main goals were to cut the unnecessarily complex internal code down to something more manageable as a precursor to optimizing both runtime performance and FubuMVC’s time to initialize an application. Since I was about to start monkeying around quite a bit with the internals of code that many of our users depend on, it’s a good thing that we had an existing suite of integration tests that acted as acceptance tests (think layouts, partials, HTML helpers, and our conventional attachment of views to routes) so that in theory I could safely make the restructuring changes without breaking existing behavior.

Going in though, I knew that there were some significant drawbacks to our existing mechanism for testing the view engine support, and I wasn’t looking forward to the inevitable test failures or formulating new integration tests.

 

Problems with the Existing Test Suite

In order to write end to end tests against the view engine support we had been effectively writing little mini FubuMVC applications inside our integration test libraries. Quite naturally, that often meant adding several view files and folders to simulate all the different permutations for layout rendering, using partials, sharing views from external Bottles (a superset of Areas for you ASP.Net MVC folks), and view profiles (mobile vs. desktop, for example). In the test fixtures we would spin up a FubuMVC application with Katana, run HTTP requests, and make assertions against the content that should or should not be present in the HTTP response body.

It wasn’t terrible, but it came with some serious drawbacks:

  1. It wasn’t complete and I’d need to add additional tests
  2. It was expensive in mechanical effort to create those little mini FubuMVC applications that had to be spread over so many different files and even folders
  3. Understanding the tests when something went wrong could be difficult because the expression of the test was effectively split over so many files

 

The New Approach

Before going too far into the code changes against the view engine support, I built a new test harness that would allow me to express in one testing class file:

  1. What all the views and layouts were in the entire system including the content of the views
  2. What the views were in external Bottles loaded into the application
  3. If necessary, configure a complete FubuMVC application if the defaults weren’t sufficient for the test
  4. Declare what content should and should not be rendered when certain routes were executed

The end result was a base class I called ViewIntegrationContext. Mechanically, I made TestFixture classes that derive from this abstract class. In the constructor function of the test fixture classes I would specify the location, content, and view model of any number of Spark or Razor views. When the test fixture class was first executed, it would (see the condensed sketch after this list):

  1. Create a brand new folder using a guid as the name to host the new “application” to avoid collisions with existing test runs (while the new test harness does try to clean up after itself, I’ve learned not to be very trusting of the file system during automated tests)
  2. Write out the Spark and Razor files based on the data specified in the constructor function to the new application folder
  3. Optionally load content Bottles and FubuMVC configurations inside the test harness (ignore that for now if you would, but it was a huge win for me)
  4. Load a new FubuMVC application in memory with the root directory pointing to our new folder for just this test
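
Condensed into a sketch, that fixture setup flow might look roughly like the class below. This is not the real ViewIntegrationContext; the dictionary of view files and the NUnit wiring are assumptions made purely for illustration:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using NUnit.Framework;

    // Hypothetical sketch of the setup flow described above
    public abstract class ViewIntegrationContextSketch
    {
        // relative path -> file contents, filled in by the subclass constructor
        protected readonly IDictionary<string, string> Views = new Dictionary<string, string>();
        protected string ApplicationDirectory;

        [TestFixtureSetUp]
        public void SetUpApplication()
        {
            // 1. A brand new folder per fixture to avoid collisions with prior runs
            ApplicationDirectory = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString());
            Directory.CreateDirectory(ApplicationDirectory);

            // 2. Write out the view files declared in the constructor
            foreach (var pair in Views)
            {
                var path = Path.Combine(ApplicationDirectory, pair.Key);
                Directory.CreateDirectory(Path.GetDirectoryName(path));
                File.WriteAllText(path, pair.Value);
            }

            // 3. Optionally load content Bottles and extra FubuMVC configuration
            // 4. Bootstrap the FubuMVC application in memory, rooted at ApplicationDirectory
            //    (both elided here -- they depend on FubuMVC internals)
        }

        [TestFixtureTearDown]
        public void CleanUp()
        {
            try { Directory.Delete(ApplicationDirectory, true); }
            catch { /* the file system is not always cooperative during test runs */ }
        }
    }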

For each test, the ViewIntegrationContext object uses FubuMVC 2.0’s brand new in memory test harness (somewhat inspired by PlaySpecification from Scala) to execute a “Scenario” where I could declaratively specify what url to render and assert what content should or should not be present in the HTML output.

To make this concrete, the very simplest test to check that FubuMVC really can render a Spark view looks like this:

    [TestFixture]
    public class Simple_rendering : ViewIntegrationContext
    {
        public Simple_rendering()
        {
            SparkView<BreatheViewModel>("Breathe")
                .Write(@"
<p>This is real output</p>
<h2>${Model.Text}</h2>");
        }

        [Test]
        public void can_render()
        {
            Scenario.Get.Input(new AirInputModel{TakeABreath = true});
            Scenario.ContentShouldContain("<h2>Breathe in!</h2>");
        }
    }

    public class AirEndpoint
    {
        public AirViewModel TakeABreath(AirRequest request)
        {
            return new AirViewModel { Text = "Take a {0} breath?".ToFormat(request.Type) };
        }

        public BreatheViewModel get_breathe_TakeABreath(AirInputModel model)
        {
            var result = model.TakeABreath
                ? new BreatheViewModel { Text = "Breathe in!" }
                : new BreatheViewModel { Text = "Exhale!" };

            return result;
        }
    }

    public class AirRequest
    {
        public AirRequest()
        {
            Type = "deep";
        }

        public string Type { get; set; }
    }

    public class AirInputModel
    {
        public bool TakeABreath { get; set; }
    }

    public class AirViewModel
    {
        public string Text { get; set; }
    }

    public class BreatheViewModel : AirViewModel
    {

    }

 

So did this pay off? Heck yeah it did, especially for scenarios where I needed to build out multiple views and layouts. The biggest win for me was that the tests were completely self-contained instead of being spread out over so many files and folders. Better yet, the new in memory Scenario support in FubuMVC made the actual tests very declarative with decently descriptive failure messages.

 

It’s Not All Rainbows and Unicorns

I cherry picked some examples that I felt went well, but there have been other times when I’ve gone down a rabbit hole of building custom testing infrastructure only to see it turn into a giant boondoggle. There’s a definite bit of overhead to writing this kind of tooling and you always have to consider whether you’ll save time on the whole compared to writing more crude or repetitive testing code. While I tend to be aggressive about building custom test harnesses, you might accurately call it a speculative exercise and hold off until you feel some pain in your testing.

Moreover, any kind of custom test harness where you decouple the expression of the test (inputs, actions, and assertions) from the actual code that’s being exercised obfuscates your traceability back to the actual code. I’ve seen plenty of cases where the “goodness” of making the expression of the test prettier and more declarative was more than offset by how hard it was to debug test failures because of the extra mental overhead of connecting the meaning of the test to the code that should be implementing it. It’s for that reason that I’ve never been a big fan of most Behavior Driven Development tools for testing that isn’t customer facing.

 

 

 

Final Thoughts on Nuget and Some Initial Impressions on the new KVM

There’s an index now for the FubuMVC Lessons Learned series of blog posts. I fully intend to keep going with more content in this series some day, but this post is going to wrap up my thoughts on DevOps kinds of topics like Nuget, Ripple, continuous integration, multi-repository development, and build management. If you read this and ask “why is Jeremy being so negative about Nuget?”, keep in mind that I’m writing this from three different perspectives:

  1. As a consumer of Nuget for complicated dependency trees
  2. As an author of value added tooling (Ripple) on top of Nuget
  3. As just an armchair software architect who enjoys looking at tools like Nuget and thinking about how I’d build that code differently if it was mine

Some of the recommendations I’m making here are either already on the Nuget team’s stated vNext roadmap or something I know they’ve thought about themselves.

My Thoughts on Project K

I’ve been piddling around with the content of this post for so long that in the meantime Microsoft finally publicly announced their Project K work (ASP.Net vNext) to the unwashed masses who aren’t some sort of ASP Insider or MVP. While I’m mostly positive about their general direction with Project K, I really wish the ASP.Net team had been much more transparent with their technical direction. Since most of Project K feels like catching up with other development platforms more than anything original or innovative, I don’t know what they buy for themselves by being so opaque.

The Project K runtime looks like it would have improved the FubuMVC team’s development experience over the current .Net framework and tooling:

  • The K runtime does not include strong naming, which was an almost unending source of unnecessary aggravation for me over the past 3 years or so. Bravo.
  • It looks like the K tooling has some level of support for our Ripple tool’s floating dependency concept.
  • Eliminating the csproj files should go a long way toward reducing the csproj merge hell problem, but I bet that the project.json file still ends up being the worst source of merge conflicts even so. Hopefully they carefully consider how they’re going to integrate Nuget into K development to avoid so much of the friction we had before we wrote Ripple.
  • Again, by eliminating the heavyweight csproj file baggage, it’s going to be much easier to write project and item generation tools like “rails new” or our own “fubu new.” I had to spend an inordinate amount of time last year building out our FubuCsProjFile library for csproj/sln file manipulation and project templating.
  • I’m still going to claim that FubuMVC with Bottles had the best modularity strategy of any .Net web framework, but much of what we did would probably be unnecessary or at least much easier if I’m correctly understanding what the Roslyn compiler is capable of. If it’s really more efficient to forgo distributing assemblies in favor of just letting Roslyn build everything into memory, then all that work we did to smuggle web content through assemblies for “feature bottles” is now unnecessary.
  • In my last post I talked about the auto-reloading/auto-refreshing development web server we built for FubuMVC development. While it sounds like it’s not usable yet, the ASP.Net team looks to be building something similar, but using the Roslyn compiler to recompile on code file changes should be a faster feedback loop than we have with our “fubu run” tool. I’ll be interested to see if they can create an experience comparable to Golang or Node.js in this regard.

Overall, I think that Microsoft has probably ridden the Visual Studio.Net horse for too long. Project K starts the work of making .Net development much more productive with lighter weight tools and better command line friendliness that can only help the community over time.

I’ve been asked a couple of times on Twitter if I would consider restarting FubuMVC work after the Project K runtime is usable. My answer is an emphatic no, and moreover, I would have ditched many of our technical efforts around FubuMVC much earlier if I’d known about Project K sooner. If anything, I’d like to start over from scratch when Project K stabilizes a bit, but this time with much less ambitious goals.

 

Suggestions for Nuget in .Net Classic

  • Optionally remove strong naming during package restore. Who knows how long it’ll take to move all server side development to the new K runtime. In the meantime, strong naming is still a borderline disaster. Maybe the Nuget team should consider a feature where package restore could either strip out all the strong naming of the downloaded nugets or strong name nuget assemblies on the fly for hosting technologies that require strong naming. We discussed this functionality quite a bit for Ripple but it’s never gotten done.
  • Make the Nuget server API less chatty and ditch oData. I’m not the world’s foremost expert on software performance, but the very first thing I learned about the subject was to minimize the number of network round trips in your software. The Nuget client makes far too many network round trips because it seems to be treating every single dependency as a separate workflow instead of treating the entire dependency graph as one logical operation. I strongly recommend (and I’ve said this to them in person) that they add documented API alternatives where you can batch up your requirements in one request — and I think they need to go beyond oData to get this done.
  • Make the Nuget server API a standard. The Nuget server API wasn’t great to work with. We usually had to resort to using Fiddler to figure out what the built-in clients were doing in order to accomplish things with Ripple. A packaged client assembly for the Nuget server API would be a great way to promote value added tools on top of Nuget.
  • Decouple Nuget.Core from Visual Studio.Net. This is a big one. I think it was a huge mistake to depend on Visual Studio automation to modify the csproj files to make the assembly references. We first used crude Xml manipulation and later the FubuCsProjFile library to manipulate csproj files in Ripple. Since we had no hard coupling to Visual Studio, we were able to provide valuable command line support for installing and updating Nuget packages that enabled our continuous integration across repositories approach.
  • Better Command Line Support. Just like with Ripple, you should be able to install, update, and remove nuget dependencies from the projects in your repository without Visual Studio being involved at all.
  • Modularize Nuget.Core. We found the Nuget.Core codebase to be poorly factored. Ripple would have been much easier to build if the Nuget.Core code had been reasonably modular. At a minimum, I’d recommend splitting the Nuget.Core code up in such a way that you could easily reuse the logic for applying a Nuget package to a csproj file all by itself. I’d again recommend pulling out the interaction with the Nuget server API’s. Making Nuget.Core more modular would open up opportunities for community built additions to the Nuget ecosystem.
  • Support Private Dependencies (Somehow). Several folks have mentioned to me on Twitter how Node.js’s NPM is able to isolate the dependencies of your dependencies.   Maybe Nuget could support something like ilrepack to inline assemblies that your dependencies depend on but your application itself doesn’t need otherwise.
  • Ripple Fix. Maybe it’s better now, but what we found in the early days of Nuget is that things could easily go off the rails. One of the features I’m most proud of in Ripple was our “ripple fix” command. What this did was check the declared dependencies of every single project in your repository and make everything right. If assembly references were missing for some reason, add them. If a dependency is missing, go install it. Whatever it takes, just make things work. And no, it wasn’t perfect but it still made using Nuget more reliable for us.
  • (Possibly) Adopt the Actor Model for Nuget Internals. I think that in order to make batched Nuget operations across a complicated dependency tree more efficient, Nuget needs to do a far better job of using parallelization to improve performance. Unfortunately, all the operations you do (find the latest of package A, download package B, etc.) are interrelated. The current architecture of Ripple is roughly an old-fashioned pipes and filters architecture. We’ve long considered using some kind of Actor model to better coordinate all the parallel actions and latch the code from performing unnecessary work.

 

 

A Better Development Web Server for .Net with FubuMVC 2.0

tl;dr: FubuMVC 2.0 includes an improved command line development web server for a better development time experience

First, a quick caveat. All the code in this post is part of the forthcoming FubuMVC 2.0 release and has not yet been publicly released and might not be for a couple more months. All the source code shown and referenced here is in the master branch of FubuMVC itself.

 

What’s the Problem?

If you ask me what I think are the best software development practices to come out of the past couple of decades, I’d rattle off some things like TDD/BDD, Continuous Integration, Continuous Delivery, and Iterative Development. If you take a step back and think about these “best practices” you’ll see that there’s a common thread of attempting to create more useful and definitely more rapid feedback cycles. The cruel truth about slinging code around and envisioning new software systems out of whole cloth is that it’s so very easy to be wrong — and that’s why software professionals have spent so much time and energy finding more ways to know when their efforts aren’t working and making it easier and less risky to introduce improvements.

Now, apply this idea of rapid feedback cycles to day to day web development. What I really want to do is to shorten the time between saving changes to any part of my web application, be it C# code or CSS/LESS or any kind of JavaScript, and seeing the impact of that change. Granted, I’d strongly prefer to use quick twitch unit tests to remove most of the potential problems in either the server side or client side code in isolation, but there are still plenty of issues where you’re going to need to do some manual testing with the complete application stack.

Enter “fubu run --watched”

Several web development frameworks include development servers that can automatically reload the application and sometimes even refresh a running browser when files in the application are changed. The Play framework in Java/Scala has the Play Console, we’re starting to use Mimosa.js and its watched server mode for pure client side development, and FubuMVC has our “fubu run” command line server that was originally inspired by PlayConsole.

While we’ve had this functionality for quite a while, I just pushed some big improvements yesterday that fix our auto-refresh infrastructure. The new architecture looks like this:

[Diagram: the new fubu run --watched architecture]

Installing the fubu gem* places the fubu.exe on your box’s PATH, so you can just go straight to the directory that contains your FubuMVC application and type:

fubu run -o --watched

The “o” flag just directs the command to open your default browser to the URL where the application is going to be hosted. While the fubu run process always auto-reloads the application on recompilation, the “watched” flag adds some additional mechanics to refresh the current page in your browser whenever certain files change in your application.

So how does it work?

First off, fubu run has to create a separate AppDomain to host your FubuMVC application. If you’ve done .Net development for any length of time you know that there is no way to unload and replace assemblies within a running AppDomain. By running a separate AppDomain we’re able to tear it down and recreate a new AppDomain whenever you recompile your application. The other reason to use a separate AppDomain is to make the application run just like it would on a production server, and that means basing the new AppDomain on the directory of the application instead of wherever the fubu.exe happens to be and making the new AppDomain use the correct web.config file for the application.
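
As a rough illustration of the underlying .Net mechanics (this is not the Bottles RemoteServiceRunner, just the raw API it builds on), spinning up a second AppDomain rooted at the application directory looks something like this:

    using System;

    public static class HostDomain
    {
        public static AppDomain CreateFor(string applicationDirectory)
        {
            var setup = new AppDomainSetup
            {
                // Make the new domain behave like a real deployment of the app
                ApplicationBase = applicationDirectory,
                PrivateBinPath = "bin",
                ConfigurationFile = System.IO.Path.Combine(applicationDirectory, "web.config"),
                ShadowCopyFiles = "true"
            };

            return AppDomain.CreateDomain("fubu-hosted-application", null, setup);
        }
    }

    // Tearing down and recreating is the only way to "reload" managed code:
    //    AppDomain.Unload(domain);
    //    domain = HostDomain.CreateFor(applicationDirectory);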

The FubuMVC community has some special sauce in our Bottles framework called the RemoteServiceRunner that makes it easier to set up and coordinate multiple AppDomains in code (I’ll happily blog about it later if anyone wants me to).  As shown in this code, fubu run loads the second AppDomain based on the right location and copies over any missing assemblies to the application bin path for FubuMVC, Katana, and their dependencies.

The next step is to bootstrap your FubuMVC application in the second AppDomain. As of FubuMVC 1.0, the idiomatic way to describe your application’s bootstrapping is with an implementation of the IApplicationSource interface. If there’s only a single concrete class implementing this interface in all of your application binaries, fubu run is smart enough to use that to bootstrap your application exactly the way it would be (with some development time differences I’ll discuss below). The simplest possible implementation might look like this class:

    public class SimpleApplicationSource : IApplicationSource
    {
        public FubuApplication BuildApplication()
        {
            return FubuApplication
                .DefaultPolicies()
                .StructureMap();
        }
    }

Part of our Bottles infrastructure is an EventAggregator class that was specifically created in order to easily send messages bidirectionally between AppDomains opened by our RemoteServiceRunner class. fubu run uses this EventAggregator to send messages to the new AppDomain to start an embedded Katana web server and bootstrap a new FubuMVC application using the IApplicationSource class for your application. Likewise, fubu run waits to hear messages back from the 2nd AppDomain about whether or not the application bootstrapping was successful.

Watching for Changes

If running in the “--watched” mode, fubu run starts up a class called FubuMvcApplicationFileWatcher to watch for changes to certain files inside the application directory and call back to an observer interface when file changes trigger certain logical actions:

    public interface IApplicationObserver
    {
        // Refresh the browser
        void RefreshContent();

        // Tear down and reload the entire AppDomain
        void RecycleAppDomain();

        // Restart the FubuMVC application
        // without restarting the 
        // AppDomain
        void RecycleApplication();
    }

To make this concrete (see the sketch after this list), changes in:

  • .spark, .css, .js, or .cshtml files will trigger a refresh of the browser
  • web.config, .dll, or .exe files will cause a full recycling of the AppDomain
  • other *.config file changes will trigger an application recycle
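
A rough sketch of that dispatching logic is below. This is not the real FubuMvcApplicationFileWatcher; the FileSystemWatcher wiring and the way I’ve expressed the extension table are simplifying assumptions:

    using System;
    using System.IO;

    // Hypothetical sketch -- not the real FubuMvcApplicationFileWatcher
    public class ApplicationFileWatcherSketch : IDisposable
    {
        private readonly IApplicationObserver _observer;
        private readonly FileSystemWatcher _watcher;

        public ApplicationFileWatcherSketch(string applicationDirectory, IApplicationObserver observer)
        {
            _observer = observer;

            _watcher = new FileSystemWatcher(applicationDirectory) { IncludeSubdirectories = true };
            _watcher.Changed += (sender, e) => Dispatch(e.FullPath);
            _watcher.EnableRaisingEvents = true;
        }

        private void Dispatch(string file)
        {
            var fileName = Path.GetFileName(file).ToLowerInvariant();
            var extension = Path.GetExtension(file).ToLowerInvariant();

            if (fileName == "web.config" || extension == ".dll" || extension == ".exe")
            {
                _observer.RecycleAppDomain();   // full teardown and reload of the AppDomain
            }
            else if (extension == ".spark" || extension == ".css" ||
                     extension == ".js" || extension == ".cshtml")
            {
                _observer.RefreshContent();     // just tell the browser to refresh
            }
            else if (extension == ".config")
            {
                _observer.RecycleApplication(); // restart FubuMVC without recycling the AppDomain
            }
        }

        public void Dispose()
        {
            _watcher.Dispose();
        }
    }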

 

Automatically Refreshing the Browser with WebSockets

The last piece of the puzzle is how we’re able to refresh the browser when the server content changes. In the currently released version of the fubu.exe tool I tried to use WebDriver to launch the browser in such a way that it would be easy to control the browser from fubu run without any impact on the HTML markup and the application itself. Let’s just say that didn’t work very well at all because of how easily WebDriver gets out of sync with rapid browser updates in real life.

For the FubuMVC 2.0 work, I went down a different path and used web sockets to send messages from the original fubu run process directly to the browser. My immediate and obvious goal was to pull this off without forcing any changes whatsoever onto the application’s HTML markup to support the auto-reloading. Instead, I used this work as an opportunity to revamp FubuMVC’s support for OWIN middleware and wrote a new bit of custom middleware to squirt a little bit of HTML markup into an HTML page’s <head> element after FubuMVC had rendered the page, but before the content is sent to the browser. While I’ll leave a discussion of how and why FubuMVC exposes OWIN middleware configuration much differently than other .Net frameworks for another day, it’s enough to know that FubuMVC only adds the auto-reloading content when the application detects that it is running inside of fubu run --watched.
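
To give a flavor of what that kind of middleware involves, here’s a simplified sketch that buffers the response, injects a placeholder script tag before the closing </head> tag of HTML responses, and then writes the modified content out. It’s an illustration of the technique rather than FubuMVC’s actual middleware, and the injected markup and class names are assumptions:

    using System.IO;
    using System.Text;
    using System.Threading.Tasks;
    using Microsoft.Owin;

    public class AutoReloadInjectionMiddleware : OwinMiddleware
    {
        private const string InjectedMarkup =
            "<script src=\"/_autoreload.js\"></script>"; // hypothetical script path

        public AutoReloadInjectionMiddleware(OwinMiddleware next) : base(next) { }

        public override async Task Invoke(IOwinContext context)
        {
            var downstream = context.Response.Body;
            using (var buffer = new MemoryStream())
            {
                // Let the rest of the pipeline (including view rendering) write to a buffer
                context.Response.Body = buffer;
                await Next.Invoke(context);
                context.Response.Body = downstream;
                buffer.Position = 0;

                var contentType = context.Response.ContentType ?? "";
                if (contentType.Contains("text/html"))
                {
                    // Inject the extra markup into the rendered page
                    var html = new StreamReader(buffer).ReadToEnd();
                    html = html.Replace("</head>", InjectedMarkup + "</head>");

                    var bytes = Encoding.UTF8.GetBytes(html);
                    context.Response.ContentLength = bytes.Length;
                    await downstream.WriteAsync(bytes, 0, bytes.Length);
                }
                else
                {
                    // Pass non-HTML content through untouched
                    await buffer.CopyToAsync(downstream);
                }
            }
        }
    }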

On the client side, we just inject this wee bit of JavaScript code to listen at a supplied web sockets address (%WEB_SOCKET_ADDRESS%) for a message telling the page to refresh:

        var start = function () {
            var wsImpl = window.WebSocket || window.MozWebSocket;

            // create a new websocket and connect
            window.ws = new wsImpl('ws://localhost:%WEB_SOCKET_ADDRESS%');

            // when data is coming from the server, this method is called
            ws.onmessage = function (evt) {
                location.reload();
            };

        }

	window.addEventListener('load', function(e){
	  start();
	});

Inside the fubu run process I used the lightweight Fleck library to run a web sockets server used to send messages to the browser. The code that uses Fleck is in the BrowserDriver class.
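
The gist of the Fleck usage is simple enough to sketch. The class below is a hypothetical stand-in for that BrowserDriver class; the port handling and the message payload are assumptions for illustration:

    using System.Collections.Generic;
    using Fleck;

    // Hypothetical sketch of a browser refresher built on Fleck
    public class BrowserRefresherSketch
    {
        private readonly List<IWebSocketConnection> _sockets = new List<IWebSocketConnection>();
        private readonly WebSocketServer _server;

        public BrowserRefresherSketch(int port)
        {
            _server = new WebSocketServer("ws://0.0.0.0:" + port);
            _server.Start(socket =>
            {
                // Track each browser connection as it opens and closes
                socket.OnOpen = () => _sockets.Add(socket);
                socket.OnClose = () => _sockets.Remove(socket);
            });
        }

        // Called when the file watcher detects a content change
        public void RefreshBrowsers()
        {
            _sockets.ForEach(socket => socket.Send("refresh"));
        }
    }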

 

Roslyn for the Next Level of Awesomeness…

…or really just achieving parity with other web development platforms that already have a great auto-reload capability.

At this point, fubu run does not try to do the compilation for you based on file changes to *.cs files. I’m still not sure if I think that’s a good idea to do later or not. I’m perfectly okay with the “make changes, hit CTRL-SHIFT-B” workflow triggering a re-compilation which then in turn triggers fubu run to recycle the AppDomain and reload the application. Several folks have kicked around the idea of using the new, much faster Roslyn compiler behind the scenes to do the compilation and try to achieve a more rapid feedback cycle like you’d expect from a Node.js based solution. I think this would be a lot more appealing to me personally as my teams at work continue to break away from using Visual Studio.Net in favor of lightweight editors like Sublime.

 

Getting the Edge Nugets and Fubu.Exe

The edge nugets for FubuMVC 2 are on MyGet at https://www.myget.org/feed/Packages/fubumvc-edge. The fubu.exe gem can be manually downloaded from the artifacts of our TeamCity CI build.

 

 

* In an earlier post I discussed why we used Ruby Gems to distribute .Net executables instead of using Nuget. Long story short, Nuget by itself isn’t all that great for command line tools and Chocolatey isn’t cross platform like Gems.

FubuMVC Lessons Learned: Semantic Versioning

TL;DR: I think that all .Net Nuget components should try to adopt Semantic Versioning, but it’s not all unicorns and rainbows

This series has a new home page at FubuMVC Lessons Learned. This post continues my discussion on componentized development in .Net that I started here and here. I’m going to write at least one or two more posts next week to seal this topic off.

 

Semantic Versioning

Here’s the deal: one of the huge advantages of Nuget (or any other package manager) is the ability to more easily package up new releases as an OSS publisher and apply updates as a consumer. Great, but if things are changing and evolving a lot more frequently than when we got a new version of .Net every couple of years, how are we going to keep all of these different components working together with our application when it’s so much more common to have conflicting dependency versions that may or may not actually be compatible? I strongly believe that the .Net community should go all in and SemVer all the things!

If you’ve never heard of it, Semantic Versioning (SemVer) is a standard for versioning software libraries and tools that attempts to formalize rules for how to increment software versions based on compatibility and functionality. To summarize, a semantic version looks like MAJOR.MINOR.PATCH.BUILD, with the build number being optional. To simplify the SemVer rules:

  • Any version below 1.0 means that anything goes and you have no reason to expect any public API to be stable (making it to FubuMVC 1.0 was a very big deal to me last year).
  • The major version should be incremented any time breaking changes are introduced
  • The minor version should be incremented with any purely additive functionality
  • The patch version should be incremented for backward compatible fixes

At the time I write this post, the current released version of StructureMap is 3.0.3.116. I introduced breaking changes to the public API in the initial 3.0 release, so StructureMap had to get a full point major release version, which also resets the minor and patch versions. Since the 3.0 release I’ve had to make 3 different bug fix releases, but haven’t added any new functionality, so we’re up to 3.0.3. The last number (116) is the auto incrementing build number for traceability back to the exact revision in source control. If you adopted the original 3.0.0 version of StructureMap, you should be able to drop in any version less than some future 4.0 version and be confident that your code will still work, assuming that StructureMap is following SemVer correctly.

Bundler and Rubygems have specific support for expressing dependency constraints with SemVer, like the following from FubuMVC’s Gemfile:

gem "rake", "~>10.0"
gem "fuburake", "~>1.2"

The version constraint ~>1.2 is the equivalent in Nuget to using the dependency constraint [1.2, 2.0). I’d love to see Nuget adopt something like this operator for shorthand SemVer constraints, and then ask the .Net community to adopt this type of dependency versioning as a common idiom. I think I’d even like to see Nuget get some kind of mode where SemVer constraints are applied by default, such that Nuget warns you when you’re trying to install version 3.0 of StructureMap when one of your other dependencies declares a dependency on StructureMap 2.6.* because Nuget should be able to recognize that the latest version of StructureMap has potentially breaking changes.
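
To make the constraint semantics concrete, here’s a small illustration of the “~>” rule for the two-part form shown above. It isn’t Nuget or Bundler code, just the bounds check that the constraint implies:

    using System;

    // "~> 1.2" allows >= 1.2 and < 2.0, i.e. the Nuget range [1.2, 2.0)
    public static class PessimisticConstraint
    {
        public static bool IsSatisfiedBy(Version minimum, Version candidate)
        {
            var upperExclusive = new Version(minimum.Major + 1, 0);
            return candidate >= minimum && candidate < upperExclusive;
        }
    }

    // PessimisticConstraint.IsSatisfiedBy(new Version("1.2"), new Version("1.9.3")) == true
    // PessimisticConstraint.IsSatisfiedBy(new Version("1.2"), new Version("2.0"))   == false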

 

SemVer and Backward Compatibility

On the publishing side of things, adopting SemVer is a way to be serious about communicating compatibility with older versions to our users. Adopting SemVer also makes you much more cognizant of the breaking changes you introduce into code.

In FubuMVC, adopting SemVer made us much more cautious about making any kind of change to public APIs. The downside, I think, is that proposed improvements tend to collect for a lot longer before you bite the bullet and make a brand new major release. The 1.0 release was a huge source of stress for me because I was trying really hard to clean up all the public APIs before we locked them in, and we ended up introducing a lot of changes in a short time that, while I still think were very beneficial, made many of our users reluctant to update. It also made us think very hard about spinning out many ancillary functions that were still evolving into separate libraries and repositories so that we could lock down the core of FubuMVC to 1.0 and continue to allow the downstream libraries to evolve faster with a pre 1.0 version. We did go too far and I’m reversing some of that in FubuMVC 2.0, but I still think that was a valuable exercise overall and I’d recommend that any large sprawling framework at least consider doing that.

Looking back, I think that community participation in FubuMVC clearly tapered off after the 1.0 release when so many of our users just stopped keeping up with the latest version and I’m still not sure what we could have done to improve that situation. If I had to do it all over again, I’m not sure I would have put so many eggs into the big SemVer 1.0 basket and just spent time doing documentation and quick starts.

The 2.0 release of FubuMVC is introducing fewer changes, but I’m still endeavoring to eliminate as much technical debt as possible and simplify the usage of major subsystems like content negotiation, authorization, and framework configuration — and that inevitably means that yep, there’s going to be quite a few breaking changes all at once. I’m not entirely sure if making so many breaking changes all at one time is a great idea, but neither was dribbling out occasional breaking changes before the 1.0 version.

The easiest thing is to just be perfect upfront, but since you’ll never actually pull that off in your framework, you have to figure out some sort of strategy for how you’ll introduce breaking changes.

Another way to put this is that you can either hold the line on backward compatibility and continue to annoy your users with all the ill informed early decisions over time or really piss off your users one time by fixing your usability issues. Pick your poison I guess. Or know your users. Do they want improvements more or do they value stability over everything else?

 

Next time:

I started this series of posts about .Net componentization with the promise to the Nuget team that I’d write about what Ripple added to Nuget and my recommendations for Nuget itself, so that will be next.  I’ve also got a much shorter post coming up on doing branch per feature with Nuget across multiple repositories.

Until next time, have a great weekend one and all….

 

 

 

FubuMVC Lessons Learned — Strong Naming Woes and Workarounds

TL;DR:  .Net isn’t ready for the brave new world of componentization and smaller, more rapid updates quite yet, but I have some suggestions based on the development of FubuMVC.

To recap, I’m writing a series of posts about the FubuMVC community’s technical experiences over the past five or so years after deciding to give up on new development after the shortly forthcoming 2.0 version. So far, I’ve discussed…

  • Our usage of the Russian Doll model and where we fell in the balance between repetitive ceremonial code and using conventions for cleaner code
  • Command line bootstrapping, polyglot programming, and the value in standardizing codebase layout for the sake of tooling (I say yes, .Net community says no)
  • Lots of DevOps woes trying to develop across multiple repositories using Nuget, TeamCity, and our own Ripple tool.

…and a lot of commenters have repeatedly and quite accurately slammed the documentation (that’s shooting fish in a barrel) and got strangely upset over the fact that we used Rake for our own build automation even though FubuMVC itself had no direct coupling to Ruby.

 

Strong Naming is Hamstringing Modularity in .Net

If you follow me on Twitter, you probably know that I hate strong naming in .Net with a passion.  Some of you might be reading this and saying “it only takes a minute to set up strong naming on projects, I don’t see the big deal” and others are saying to yourselves that “gosh, I’ve never had any problem with strong naming, what’s all the teeth gnashing about?”

Consider this all too common occurrence:

  • Your application uses an OSS component called FancyLogging and you’re depending on the latest version 2.1.5.
  • You also use FancyServiceBus that depends on FancyLogging version 2.0.7.
  • You might also have a dependency on FancyIoC which in turn depends on a much older FancyLogging 2.0.0.

Assuming that the authors of FancyLogging are following Semantic Versioning (more on this in a later post), you should be able to happily use the latest version of FancyLogging because there are no semantically breaking changes between it and the versions that FancyServiceBus and FancyIoC were compiled against. If these assemblies are strong named however, you just set yourself up for a whole lot of assembly version conflicts because .Net matches on the entire version number. At this point, you’re going to be spending some quality time with the Fusion Log Viewer (you want this in your toolbox anyway).

Strong naming conflicts are a common issue when your dependencies improve or change rapidly, and you upgrade somewhat often, and especially when upstream dependencies like log4net, IoC containers, and Newtonsoft.Json are also used by your upstream dependencies. Right now I think this problem is felt much more by shops that depend more heavily on OSS tools that don’t originate in Redmond, but Microsoft itself is very clearly aiming for a world where .Net itself is much more modular and the new, smaller libraries will release more often. Unless the .Net community addresses the flaws in strong naming and adopts more effective idioms for Nuget packaging, my strong naming version conflicts are about to come to a mainstream .Net shop near you.

 

Strong Naming Woes and Workarounds

While I’ve never had any trouble whatsoever with using the library itself, at one point a couple of years ago assembly conflicts with Newtonsoft.Json were my single biggest problem in daily development.  Fortunately, I encounter very little trouble from Newtonsoft.Json’s strong naming today.  Why was this one library such a huge headache and what can we learn from how myself and the .Net community as a whole alleviated most of the pain?

First off, Newtonsoft.Json was and is strong named.  It’s very commonly used in many of the other libraries that the projects I work with depend on for daily development, chief among them WebDriver and RavenDb.  I also use Newtonsoft.Json in a couple of different fubumvc related projects (Bottles and Storyteller2). Newtonsoft.Json has historically been a very active project and releases often. Largely due to its own success, Newtonsoft.Json became the poster child for strong naming hell.

Consider this situation as it was in the summer of 2012:

  • We depended upon WebDriver and RavenDb, both of which at that time had an external dependency upon Newtonsoft.Json.
  • WebDriver and RavenDb were both strong named themselves
  • Both WebDriver and RavenDb were releasing quite often, we frequently needed to pull in new versions of these tools to address issues, and subsequent versions of these tools often changed their own dependency versions of Newtonsoft.Json
  • Our own Storyteller2 tool we used for end to end testing depended upon Newtonsoft.Json
  • Our Serenity library that we used for web testing FubuMVC applications depended upon WebDriver and Storyteller2 and, you guessed it, we were frequently improving Serenity itself as we went along
  • We would frequently get strong naming conflicts in our own code by installing the Newtonsoft.Json nuget to an additional project within the same solution and getting a more recent version than the other projects

I spent a lot of time that summer cursing how much time I was wasting just chasing down assembly version conflicts from Newtonsoft.Json and WebDriver and more recently from ManagedEsent. Things got much better by the end of that year though because:

  • WebDriver ilmerge’d Newtonsoft.Json so that it wasn’t exposed externally
  • WebDriver, partially at my urging, ditched strong naming — making it much easier for us
  • RavenDb ilmerge’d Newtonsoft.Json as well
  • We ilmerge’d Newtonsoft.Json into Storyteller2 and everywhere else we took that as a dependency after that
  • Newtonsoft.Json changed their versioning strategy so that they locked the assembly version but let the real Nuget version float within semantically versioned releases (illustrated in the sketch after this list). Even though that does a lot to eliminate binding conflicts, I still dislike that strategy because it’s a potentially confusing lie to consumers. The very fact that the Nuget team themselves recommend this as the least bad approach is a pretty good indication to me that strong naming needs to be permanently changed inside the CLR itself.
  • ManagedEsent was killing us because certain RavenDb nugets smuggle it in as an assembly reference, conflicting with our own declared dependency on ManagedEsent from within the LightningQueues library. Again, we beat this by ilmerge’ing our dependency on ManagedEsent into LightningQueues and problem solved.
  • With Ripple, we were able to enforce solution wide dependency versioning, meaning that when we installed a Nuget to a project in our solution with Ripple it would always try to first use the same Nuget version as the other projects in the solution.  That made a lot of headaches go away fast. The same solution wide versioning applied to Nuget updates with Ripple.
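
To illustrate the versioning trick mentioned in that list, the assembly-level attributes end up looking something like this (the numbers are hypothetical): the CLR-visible AssemblyVersion stays locked within a major release so strong named binding never breaks, while the file and informational versions float with each release that actually ships on Nuget.

    using System.Reflection;

    [assembly: AssemblyVersion("6.0.0.0")]              // locked for the whole 6.x series
    [assembly: AssemblyFileVersion("6.0.3.0")]          // changes with every release
    [assembly: AssemblyInformationalVersion("6.0.3")]   // the "real" SemVer number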

 

My advice for library publishers and consumers

I definitely feel that much of the following list is a series of compromises and workarounds, but such is life:

  • Be cautious consuming any strong named library that revs often
  • Don’t apply strong naming at all to your published assemblies unless you have to
  • Don’t bundle in secondary assemblies into your Nuget packages that you don’t control — i.e., the loose ManagedEsent assembly in RavenDb packages problem or this issue in GitHub for Ripple
  • Prefer libraries that aren’t strong named if possible (e.g., why we choose NLog over log4net now)
  • Privately ilmerge your dependencies into your libraries when possible, which I’ll freely admit is a compromise that can easily cause you other problems later and some clumsiness in your build scripts.  Do make sure that your unit and integration tests run against the ilmerge’d copy of your assembly in continuous integration builds for best results
  • Do the Newtonsoft.Json versioning trick where the assembly version doesn’t change across releases — even though I hate this idea on principle

 

Rip out the Strong Naming?

We never actually did this (yet), but it’s apparently very possible to rip strong naming out of .Net assemblies using a tool like Mono.Cecil. We wanted to steal an idea from Sebastien Lambla and build a feature into Ripple that would remove the signing from assemblies as part of the Ripple package restore feature. If I do stay involved in .Net development and the fine folks in Redmond don’t fix strong naming in the next release of .Net, I’ll go back and finally build that feature into Ripple.

My Approach to Strong Naming 

We never signed any of the FubuMVC related assemblies. FubuMVC itself was never a problem because it’s a high level dependency that was only used by the kind of OSS friendly shops that generally don’t care about strong naming. StructureMap on the other hand is a foundational type of library that’s much more frequently used in a larger variety of shops and it had been signed in the releases from (I think) 2.0 in 2007 until the previous 2.6.4 release in 2012. I still decided to tear out the strong naming as part of the big StructureMap 3.0 release with the thinking that I’d support a parallel signed release at some point if there was any demand for strong naming — preferably if the people making those demands for strong naming would be kind enough to submit the pull request for the fancier build automation to support that. I can’t tell you yet if this will work out, and judging from other projects, it won’t.

 

What about Security!  Surely you need Strong Naming!

For all of you saying “I need strong naming for security!”, just remember that many OSS projects commit their keys into source control.  I think that if you really want to certify that a signed assembly represents exactly the code you think it is, you probably need to compile it yourself from a forked and tagged repository that you control.  I think that the signed assemblies as security feature of .Net is very analogous to the checked exceptions feature in the Java language. I.e., something that its proponents think is very important, a source of extra work on the part of users, and a feature that isn’t felt to be important by any other development community.

To balance out my comments here, last week there was a thread on a GitHub issue for OctoKit about whether or not they should sign their released assemblies that generated a lot more pro-assembly signing sentiment than you’ll find here.  Most of the pro-strong naming comments seem to be more about giving users what they want rather than a discussion of whether or not strong naming adds any real value but hey, you’re supposed to make your users happy.

 

What about automatic redirects?

But Jeremy, doesn’t Nuget write the assembly redirects for you?  Our experience was that Nuget got the assembly redirects wrong about as often as not and it still required manual intervention — especially in cases where we were using config files that varied from the App.config/Web.config norm.  There is some new automatic redirect functionality in .Net 4.5.1 that should help, but I still think that plenty of issues will leak through and this is just a temporary bandaid until the CLR team makes a more permanent fix. I think what I’m saying about the automatic redirects is that I grew up in Missouri and you’ll just have to show me that it’s going to work.

 

My wish for .Net vNext 

I would like to see the CLR team build Semantic Versioning directly into CLR assembly binding so that strong named binding isn’t quite so finicky: the CLR should happily load version 3.0.3 when that’s the version present, even though the declared version in other assemblies is 3.0.1, rather than matching on exactly version 3.0.1 only. I’d like to see assembly redirect declarations in config files go away entirely. I think the attempts to build automatic redirects into VS2013 are a decent temporary patch, but the real answer is going to have to come at the CLR level.

 

 

Next time…. 

So the DevOps topics of strong naming, versioning, and continuous integration across multiple repositories are taking a lot more verbiage to cover than I anticipated, and long time readers of mine know that I don’t really do “short” very well.  In following posts I’ll talk about why I think Semantic Versioning is so important, more about how Ripple solved some of our Nuget problems, recommendations for how to improve Nuget, and a specific post on doing branch per feature across multiple repositories.

FubuMVC Lessons Learned — Misadventures in DevOps with Ripple, Nuget, TeamCity, and Gems

tl;dr: Large .Net codebases are a challenge, strong naming in .Net is awful, Nuget out of the box breaks down when things are changing rapidly, and the csproj file format is problematic — but we remedied some, but certainly not all, of these issues with a tool we built called Ripple.

At this point, we have collectively decided that FubuMVC did absolutely nothing right in regards to documentation, samples, getting started tutorials, and generally making it easier for new users to get going, so I’m going to automatically delete any comment deriding us yet again on these topics.  We did, however, do a number of things in the purely technical realm that might still be of use to other folks and that’s what I’m concentrating on throughout the rest of these posts.

I got to speak at the Monkeyspace conference last year (had a blast, highly recommend it this year in Dublin, I’m hoping to go again) and one of my talks was about our experiences with dependency management across the FubuMVC projects.  Just a sampling of the slides included “A Comedy of Errors”, “Merge Hell”, and “Strong Naming Woes.”  To put it mildly, we’ve had some technical difficulties with Nuget, TeamCity, Git, our own Ripple tool, and .Net CLR mechanics in general that have caused me to use language that my Grandmother would not approve of.

 

.Net Codebases Do Not Scale

There’s a line of thinking out there that says that you can happily use dynamic languages for smaller applications but when you get into bigger codebases you have to graduate to a grown up static typed language. While I don’t have any first hand knowledge about how well a Ruby on Rails or a Python Django codebase will scale in size, I can tell you that a .Net codebase can become a severe productivity problem as it becomes much larger.  Visual Studio.Net grinds down, ReSharper gets slower, your build script gets slower, compile times take longer, and basically every single feedback mechanism that a good development team wants to use gets slower.

The FubuMVC ecosystem became very large within just a year of constant development.  Take a brief glance at the sheer number of active projects we have going at the moment and the test counts.*  The size of the codebase became a problem for us and I felt like the slower build times were slowing down development.

 

Split up Large Codebases into Separate Cohesive Repositories

The main FubuMVC GitHub repository quickly became quite large and sluggish. If you’ll take a look at a very old tagged branch from that time, you can probably see why.  The main FubuMVC.Core library was already getting big just by itself and the repository also included several related libraries and their associated testing libraries — and my consistent experience over the years has been that the number of projects to compile in a solution seems to make more difference in compile times than the raw number of lines of code.

The very obvious thing to do was to split off the ancillary libraries like FubuCore, FubuLocalization, and FubuValidation into their own git repositories. Great and all, but the next issue was that FubuMVC was dependent upon the now upstream build products from FubuCore and FubuLocalization.  So what to do?  The old way was to just check the FubuCore and FubuLocalization assemblies into the FubuMVC repository, but as I’ll discuss later, we found that to be problematic with git.  More importantly, even though in a perfect world the upstream projects were stable and would never introduce breaking changes, we would absolutely need a quick way to migrate changes from upstream to test against the downstream FubuMVC codebase.

 

Enter Ripple

As part of the original effort to break up the codebases, Joshua Flanagan and I worked on a set of tooling that we eventually named “Ripple”** to automate the flow of build products from upstream to downstream in cascading automated builds (by “cascading” I mean that a successful build of FubuCore would trigger a new CI build of FubuMVC using the latest FubuCore build products).  Ripple originally worked in two modes.  First, a “local” ripple mode that acted as a little bit of glue to build locally on your box, copy the build products to the right place in the downstream code, and run the downstream build to check for any breaking changes without having to push any code changes to GitHub.  Second, a Nuget-based workflow that allowed us to consume the very latest Nuget version from the upstream builds in the downstream builds.  More on Ripple below.

Once this infrastructure was in place we were able to break the codebase into smaller, more cohesive codebases and reap the rewards of faster build times and smaller codebases — or we would have been if that new tooling hadn’t been so damn problematic as I’ll describe in sections below.

 

Thoughts on breaking up a codebase:

The following is a mixed bag of my thoughts on when and whether you should break up a large codebase.  Unfortunately, there is no black and white answer.  I’m still glad that we went through the effort of breaking up the main FubuMVC codebase, but in retrospect I would not have gone as far as we did in splitting up the main FubuMVC.Core library and I’ve actually partially reversed that trend for the 2.0 release.

  • Don’t break things up until what would become the upstream library or package has a stable API
  • Things that are tightly coupled and often need to change together should be built and released together
  • You better have a good way to automate cascading builds in continuous integration across related codebases before you even attempt to split up a codebase
  • It was helpful to pull out parts of the codebase that were relatively stable while isolating subsystems that were evolving much more quickly
  • Sometimes breaking up a larger library into smaller, more cohesive libraries makes the functionality more discoverable.  The FubuCore library for instance has support for command line tools, model binding, reflection helpers, and an implementation of a dependency graph.  We theorized over the years that we should have broken up FubuCore to make it more obvious what the various functions were.
  • Many .Net developers seem to be almost allergic to having more than a couple dependencies and we got feedback over the years that some folks really didn’t like how starting a new FubuMVC application required so many different libraries.  The fact that Nuget put it all together for you was irrelevant.  Unfortunately, I think this issue militates against getting too slap happy with dividing up your code repository and assemblies.
  • It was a little challenging to do release management across so many different repositories.  Even though we could conceivably release packages separately for upstream and downstream products, I usually ended up doing the releases together.  I don’t have a great answer for this problem, and now I don’t have to, now that we’re shutting things down ;)
  • Don’t attempt to build a very large codebase with a very small team, no matter how good or passionate said team is

 

Don’t put binaries in Git

Git does NOT like it when you check your binaries into source control the way that we used to in the pre-Nuget/Subversion days.  We found this out the hard way when I was at Dovetail and we were rev’ing FubuMVC very hard at the same time and committing the FubuMVC binaries into our application’s git repository.  The end result was that the Java (and yes, I see where the problem might have been) client that TeamCity used for Git just absolutely choked and flopped over with out of memory exceptions.  It turned out that Jenkins *could* handle our git repository, but there’s still a very noticeable performance lag with git repos that have way too many revisions of binary dependencies.

Other git clients can handle the binaries, but there’s a very noticeable hit to Git’s near legendary performance when you move from a repository of almost all text files to a codebase that commits its binaries *cough* MassTransit (at the time that I wrote this draft) *cough*.

 

Enter ripple restore for cascading builds

In the end, we built the “ripple restore” feature (Ripple’s analogue to Nuget Package Restore, but for the record, we built our feature before the Nuget team did and Ripple is significantly faster than Nuget’s ;) to find and explode out the Nuget dependencies declared inside the codebase at build time, as a precursor to compilation in our rake scripts on either the CI server or a local developer box.  We no longer have to commit any Nuget-delivered binaries to the repository, and the impact on Git repository performance, especially for fresh clones, is very noticeable.

Ripple treats Nuget dependencies as either a “Fixed” dependency that is locked to a specific version or a “Float” dependency that is always going to resolve to the very latest published version.  In the case of FubuMVC’s dependencies today, the internal FubuCore dependency is a “Float”, while the external dependencies like NUnit, Katana, and WebDriver are “Fixed.”  When the FubuMVC build on our TeamCity server runs, it always runs against the very latest version of FubuCore.  Moreover, we use the cascading build feature of TeamCity to trigger FubuMVC builds whenever a FubuCore build succeeds.  This way we have very rapid feedback whenever an upstream change in FubuCore breaks something downstream in FubuMVC — and while I wish I could say that I was so good that that never happens, it certainly does.

Awesome, we’ve got cascading builds and a relatively quick feedback loop between our upstream and downstream builds.  Except then we ran into some new problems.

 

Nuget and CsProj Merge Hell

Ironically, Phil Haack has a blog post out this week on how bad merge conflicts are in csproj files.  The last time I saw Phil, I was trying to argue with him that the way Nuget embeds the package version number in the exploded file paths was a big mistake, specifically because it caused us no end of horrendous merge conflicts when we were updating Nugets rapidly.  When we were doing development across repositories it wasn’t uncommon for the same Nuget dependency to get updated in different feature branches, causing some of the worst merge conflicts you can possibly imagine with csproj files and the Nuget Packages.config files.

Josh Arnold beat this issue permanently in Ripple 2.0 by using a very different workflow than Nuget out of the box.  The first step was to eliminate the !@#$%ing version number in our Nuget /packages folder by moving the version requirement to the level of the codebase instead of project by project (another flaw in OOTB Nuget in my opinion).  Doing that meant that the csproj files would only change on Nuget package updates if the Nuget packages in question changed their own structure.  Bang, a whole bunch of merge issues went away just like that.
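To make that concrete, here’s roughly what the difference looks like inside a csproj file.  The paths and version number below are illustrative rather than copied from a real fubu project, but the idea is that stock Nuget bakes the package version into every assembly reference’s HintPath, so every package update rewrites every csproj that touches it, while a version-free /packages folder leaves those references alone:

<!-- Stock Nuget layout: the version number is part of the path, so every
     update of FubuCore edits this line in every project that references it -->
<Reference Include="FubuCore">
  <HintPath>..\packages\FubuCore.1.1.0.255\lib\FubuCore.dll</HintPath>
</Reference>

<!-- Ripple 2.0 style layout: the version requirement lives at the codebase
     level, so the reference stays stable across package updates -->
<Reference Include="FubuCore">
  <HintPath>..\packages\FubuCore\lib\FubuCore.dll</HintPath>
</Reference>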

The second thing we did was to eliminate the Packages.config Xml files in each project folder and replace them with a simple flat file analogue that listed each project’s dependencies in alphabetical order.  That change also helped reduce the number of merge conflicts.

The end result was that we were able to move faster and rev more effectively across multiple code repositories.  I still think that was a very big win.

Let’s just say this bluntly: it’s a big anti-pattern to have any kind of central file that has to be changed frequently and simultaneously by multiple people doing what should be parallel work.  Whether it’s ORM mapping, IoC container configuration, the blasted csproj files, or some kind of routing table, it’s a hurtful design that causes project friction, and Xml makes it so much worse.  I think Microsoft tools to this day do not take adequate precautions to avoid merge conflict problems (EF configuration, *.csproj files, Web.config).

 

TeamCity as a Nuget Server

It’s easy to use and quick to set up, but I’m recommending that you don’t use it for much.  I feel like it didn’t hold up very well performance-wise as the feed got bigger, and it would often “lose its Nugets,” forcing you to rebuild the Nuget index before the feed would work again.  The rough thing was that the feed wouldn’t fail outright; it would just return very old results and cause plenty of havoc for our users who depended on the edge feed.  To keep the performance at a decent level, we had to set up archive rules to delete all but, say, 10 versions of each Nuget.  Deleting the old versions caused obvious trouble.

If I had to do it again, I would have opted for many fewer builds and made a point of treating builds triggered by cascading builds differently from builds triggered by commits to source control.  As it is, we publish new Nuget packages every single time a CI build succeeds.  In a better world we would have only published new Nugets on builds caused by changes to the source code repository.

 

Reproducibility of Builds with Floating Dependencies

The single worst thing we did, and the one I’ve always regretted, was not creating perfectly reproducible builds with our floating ripple dependencies.  To make things fully reproducible, we would have needed to be able to build a specific version of a codebase using the exact same versions of all of its dependencies that were used at the time of the CI build.  I.e., when I try to work with FubuMVC #1200, we need it to be using the exact same version of FubuCore that was used inside the TeamCity CI build.  We got somewhat close.  Ripple would build a history digest of its dependencies and publish it to TeamCity’s artifacts — but we were archiving the artifacts to keep the build server running faster.  We also set up tagging on successful builds to add the build number to the GitHub repositories (that’s an old part of the Extreme Programming playbook too, but we didn’t get that going upfront and I really wish we had).  What we probably needed was some additional functionality in Ripple to take our published dependency history and completely rebuild everything back to the way we needed it.  I’m still not exactly sure what we should have done to alleviate this issue.

This was mostly an issue for teams that wanted to reproduce problems with older versions of FubuMVC that were well behind the current edge version.  One of the things that I think helped sink FubuMVC was that so many of the teams that were very active in our community early on stopped contributing and being involved, and we were stuck trying to support lots of old versions even while we were trying to push to the magic SemVer 1.0 release.

 

Nuget vs. gems vs. git submodules for build time dependencies

In my last post in this series I got plenty of abuse from people who thought that having to install Ruby and learn how to type “rake” at the command prompt was too big a barrier for .Net developers (that was sarcasm, by the way).  Some commenters thought that we should have been using absolutely nothing but Nuget to fulfill build time dependencies on tools that we used within the automated builds themselves.  There’s just one little problem with that understandable ideal: Nuget was a terrible fit for command line executables within an automated build.

We use a couple different command line tools from within our Rake scripts:

  • Ripple for our equivalent of Nuget package restore and publishing build products as Nuget packages
  • The Bottles executable for packing web content like views and Javascript files into assemblies in pre-compile steps (think of an area in Rails or a superset of ASP.Net MVC portable areas)
  • FubuDocs for publishing documentation (and yes, it’s occurred to me many times that I spent much more time creating a new tool for publishing technical documentation than I did writing docs but we all know which activity was much more fun to do)

Yet again, having the package version number as part of the Nuget package folder made using command line tools resolved by Nuget a minor nightmare.  We had an awkward Ruby function that would magically determine the newest version of the Nuget package in order to find the right path to the bottles.exe/ripple.exe/fubudocs.exe tools.  In retrospect, we could have used the Nugets as-is to continue distributing the executables after Ripple 2.0 fixed the predictable Nuget path problem, but we also wanted to be able to use these tools from the command line as well.

As it turned out, using Ruby gems to distribute and install our .Net executables was much more effective than Nuget.  For one thing, gems integrate well with Rake, which we were already using.  Gems can also place a shim for an executable onto your Windows PATH, making our custom tools easier to use at the command line.

And yes, we could have used Chocolatey to distribute our executables, but at one point we were much more invested in making our ecosystem cross platform, and Chocolatey is strictly Windows only where gems are happily cross-platform.  Because Rob Reynolds is just that awesome, you can actually use Chocolatey to install our gems and it’ll even install Ruby for you if it’s not already there.

And yeah, in the very early days because FubuMVC actually predates Nuget, we tried to distribute shared build utilities via Git submodules.  The less said about this approach, the better.  I’ll never willingly do that again.

Topics for Another Day:

I’m trying to keep my newfound blogging resurgence going, but we’ll see how it goes.  The stuff below got cut from this post for length:

  • Why semantic versioning is so important
  • My recommendations for improving Nuget (the Nuget team is asking the Ripple team for input and I’m trying to oblige)
  • One really long rant about how strong naming is completely broken
  • Why and how we think Ripple improves upon Nuget
  • Branch by feature across multiple repositories with Ripple

 

* For my money, the number of unit tests is my favorite metric for judging how large and complicated a codebase is, but only measured in the large.  Like all other metrics, I’d take this one with a grain of salt, and it’s probably also useless once developers know the unit test count is being used as a metric.

** Get it?  Changes “ripple” from one repo to another.  I love the song “Ripple” by the Grateful Dead, and it’s also pretty likely that I was listening to that song the day we came up with the name.

 

 

OSS Bugs and the Participatory Community

I pushed the official StructureMap 3.0 release a couple weeks ago.  Since this is a full point release and comes somewhat close to being a full rewrite of the internals, there’s inevitably been a handful of bugs reported already.  While I’d love to say there were no bugs at all, I’d like to highlight a trend that I hope continues and that’s quite different from what supporting StructureMap was like just a few years ago.  Namely, I’ve been getting failing unit tests on GitHub or messages on the list from users that demonstrate exactly what they’re trying to do and how things are failing for them — and I’d love to see this trend continue (as long as there really are bugs).

One of the issues that was reported a couple times early on was a problem with setter injection policies.  After the initial reports, I looked at my unit tests for the functionality and they were all passing, so I was largely shrugging my shoulders — until a user posted a failing test on the GitHub issue showing me the exact combination of inputs and usage steps that brought out the problem.  Once I had that failing test in my grubby little hands, the fix turned out to be a single line of code.  I guess my point here is that I’m seeing more and more StructureMap users jumping in and participating in how issues get reported, diagnosed, and fixed.  That’s making StructureMap work a lot less stressful and more enjoyable for me, and bugs are getting addressed much faster than in the past.

 

Pin down problems with new tests

An old piece of lore from Extreme Programming is to always add a failing test to your test library before you fix a reported bug so that the bug stays fixed.  Regression bugs are a serious source of overhead and waste in software development, and anything that prevents them from recurring is probably worth doing in my book.  If you look at the StructureMap codebase, you’ll see a namespace for bug regression tests that we’ve collected over the years.
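As a purely illustrative sketch of that mechanic (the class, the bug number, and the scenario below are all invented, not taken from StructureMap), you write the reported failure down as a test first, watch it go red, and then make it green with the fix so it lives on in the regression suite:

    using System.Collections.Generic;
    using System.Linq;
    using NUnit.Framework;

    // Invented code under test: pretend a user reported that Average()
    // used to blow up on an empty list before the guard clause was added
    public static class Stats
    {
        public static double Average(IReadOnlyList<double> values)
        {
            return values.Count == 0 ? 0 : values.Sum() / values.Count;
        }
    }

    [TestFixture]
    public class Bug_101_average_of_an_empty_list   // the bug number is made up
    {
        [Test]
        public void the_exact_case_from_the_original_report()
        {
            // Written (and failing) before the fix went in, then kept around
            // so this particular bug can never quietly come back
            Assert.AreEqual(0d, Stats.Average(new double[0]));
        }
    }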

 

The Participatory Community and .Net

In a way, now might be the golden age of open source development.  GitHub in particular supports a much more collaborative workflow than the older hosting options ever did.  Nuget, for all the flaws that I complain about, makes it so much easier to release and distribute new releases.

In the same vein, even Microsoft of all people is trying to encourage an OSS workflow by opening up their own tools and allowing .Net community members to jump in and contribute.  I think that’s great, but it only matters if more folks in the greater .Net community will participate in the OSS projects.  Today I think there’s a little too much passivity overall in the .Net community.  After all, our tools are largely written by the official .Net teams inside of Redmond, with OSS tools largely consigned to the fringes.  Most of us probably don’t feel like we can exert any control over our tooling, but every indication I see is that Microsoft itself actually wants to change that with more open development processes, as they host more of their tools on GitHub or CodePlex and adopt out of band release cycles that happen more often than once every couple of years.

My case in point is an issue about Nuget usage that came up on a user list a couple weeks back that I think is emblematic of how the .Net community needs to change to make OSS matter.  The author was asking the Nuget team to do something to Nuget’s package restore feature to fix the accumulation of outdated Nuget packages in a codebase.  No specific recommendations, just asking the Nuget team to fix it.  While I think the Nuget team is perfectly receptive to addressing that down the road, the fubu community has already solved that very technical issue with our own open source Ripple tool that we use as a Nuget workflow tool.  Moreover, the author of that post could get a solution to his problem a lot faster, even if he didn’t want to use our Ripple tool, by contributing a pull request to fix Nuget’s package restore himself rather than waiting for Microsoft to get around to doing it for him.  My point here is that the .Net community isn’t fully using its potential because we’re collectively sitting back and waiting for a finite number of folks on the Gu’s teams to fix too many of our problems instead of jumping in and collectively doing it ourselves.

Participating doesn’t have to mean taking on big features and issuing pull requests of outstanding technical merit; it can also mean commenting on GitHub issues, providing specific feedback to the authors, and doing what I think of as “sandpaper pull requests” — small contributions that clear up little usability issues in a tool.  It’s not a huge thing, but I really appreciate how some of my coworkers have been improving exception messages and logging when they find little issues in their own usage of some of the FubuMVC related projects.  That kind of thing helps a great deal because I know it’s almost impossible for me to foresee every potential usage or source of confusion.

We obviously use a lot of the FubuMVC family of tools at my workplace, and something I’ve tried to communicate to my colleagues is that they never have to live with usability problems in any of those frameworks or libraries because we can happily change those tools to improve their development experience (whenever it’s worth the effort to do so of course).  That’s a huge shift in how many developers think about their tools, but given the choice to be empowered and envision a better experience versus just accepting what you get, wouldn’t you like to have that control?

 

I wish MS would do even more in the open

Of course, I also think it would help if Microsoft could do even more of their own development in public. Case in point, I’m actually pretty positive about the technical work the ASP.Net team for one is talking about for forthcoming versions, but only the chosen ASP Insiders and MVP types are seeing any of that work (I’m not going to get myself yelled at for NDA violations, so don’t even ask for specifics).  They might just get a lot more useful involvement from the community if they were able to do that work in the open before the basic approach was completely baked in.

 

 

 

FubuMVC Lessons Learned — Magic Conventions, Russian Dolls, and Ceremonial Code

tl;dr: FubuMVC stressed concise code and conventions over writing explicit code and that turns out to be polarizing

The typical way to be successful in OSS is to promote the hell out of your work before you give up on it, but I’m under a lot less pressure after giving up on FubuMVC and I feel like blogging again.  Over the next couple months I’m going to write about the technical approach we took on FubuMVC to share the things I think went well, the stuff I regret, how I wish we’d done it instead, discarded plans for 2.0, and how I’d do things differently if I’m ever stupid enough to try this again on a different platform.

 

Some Sample Code

So let’s say that you start a new FubuMVC project and solution from scratch (a topic for another blog post) by running:

fubu new Demo --options spark

You’ll get this (largely placeholder) code for the MVC controller part of the main home page of your new application:

namespace Demo
{
    // You'd generally do *something* in this method, otherwise
    // it's just some junk code to make it easier for FubuMVC
    // to hang a Spark or Razor view off the "/" route.
    // For 2.0, we wanted to introduce an optional convention
    // to use an "action less view" for the home page.
    public class HomeEndpoint
    {
        public HomeModel Index(HomeModel model)
        {
            return model;
        }
    }
}

To make things a little more clear, fubu new also generates a matching Spark view called Home.spark to render a HomeModel resource:

<viewdata model="Demo.HomeModel" />

<content:header></content:header>

<content:main>
Your content would go here
</content:main>

<content:footer></content:footer>

The code above demonstrates a built in naming convention in FubuMVC 1.0+ such that the home “/” route will point to the action “HomeEndpoint.Index()” if that class and method exist in the main application assembly.

Some additional endpoints (FubuMVC’s analogue to Controllers in MVC frameworks) might look like the following:

    public class NameInput
    {
        public string Name { get; set; }
    }

    public class Query
    {
        public int From { get; set; }
        public int To { get; set; }
    }

    public class Results { }

    public class MoreEndpoints
    {
        // GET: name/{Name}
        public string get_name_Name(NameInput input)
        {
            return "My name is " + input.Name;
        }

        // POST: query/{From}/to/{To}
        public Results post_query_From_to_To(Query query)
        {
            return new Results();
        }
    }

 

What you’re not seeing in the code above:

  • No reference whatsoever to the FubuMVC.Core namespace.
  • No model binding invocation
  • No code to render views, write output, set HTTP headers
  • No code for authentication, authorization, or validation
  • No “BaseController” or “ApiController” or “CoupleMyCodeVeryTightlyToTheFrameworkGuts” base class
  • No attributes
  • No marker interfaces
  • No custom fluent interfaces that render your application code almost completely useless outside of the context of the web framework you’re using

What you are seeing in the code above:

  • Concrete classes that are suffixed with “Endpoint” or “Endpoints.”  This is an out of the box naming convention in FubuMVC that marks these classes as Actions.
  • Public methods that take in 0 or 1 inputs and return a single “resource” model (they can be void methods too).
  • Route patterns are derived from the method names and properties of the input model — more on this in a later post because this one’s already too long.

 

One Model In, One Model Out and Automatic Content Negotiation

As a direct reaction to ASP.Net MVC, the overriding philosophy from the very beginning was to make the code we wrote be as clean and terse as possible with as little coupling from the application to the framework code as possible.  We also believed very strongly in object composition in contrast to most of the frameworks of the time that required inheritance models.

To meet this goal, our core design idea from the beginning was the one model in, one model out principle.  By and large, most endpoints should be built by declaring an input model of everything the action needs to perform its work and returning the resource or response model object.  The framework itself would do most of the repetitive work of reading the HTTP request and writing things out to the HTTP response for you so that you can concentrate on only the responsibilities that are really different between actions.

At runtime, FubuMVC executes content negotiation (conneg) to read the declared inputs (see the NameInput class above and how it’s used) from the HTTP request with a typical combination of model binding or deserialization, calls the action methods with the right input, and then renders the resource (like HomeModel in the HomeEndpoint.Index() method), again with content negotiation.  As of FubuMVC 1.0, view rendering is integrated into the normal content negotiation infrastructure (and that, ladies and gentlemen, was a huge win for our internals).  Exactly what content negotiation can read and write is largely determined by OOTB conventions.  For example:

  • If a method returns a string, then we write that string with the content-type of “text/plain”
  • If an action method returns a resource model, we try to “attach” a view that renders that very resource model type
  • In the absence of any other reader/writer policies, FubuMVC attaches Json and Xml support automatically with model binding for content-type=”application/x-www-form-urlencoded” requests

The automatic content negotiation conventions largely mean that FubuMVC action methods just don’t have to be concerned with the details of how the response gets written out.
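As a quick illustration of those defaults (this is sample code in the same spirit as the endpoints above, not something generated by fubu new), neither of the methods below mentions content types, views, or serializers anywhere, yet they get very different output handling by convention:

    public class CustomerRequest
    {
        public int Id { get; set; }
    }

    public class CustomerModel
    {
        public string Name { get; set; }
    }

    public class CustomerEndpoint
    {
        // GET: customer/{Id}/name
        // Returns a string, so by convention the raw text is written to the
        // response body with the content-type "text/plain"
        public string get_customer_Id_name(CustomerRequest request)
        {
            return "Customer #" + request.Id;
        }

        // GET: customer/{Id}
        // Returns a resource model, so conneg tries to "attach" a strongly
        // typed view that renders CustomerModel; with no view and no custom
        // reader/writer policies, Json and Xml support is attached instead
        public CustomerModel get_customer_Id(CustomerRequest request)
        {
            return new CustomerModel { Name = "Customer #" + request.Id };
        }
    }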

View resolution is done conventionally as well.  The simplest thing to do is to make your strongly typed Spark or Razor view render the resource model type (the return type) of an action method, and FubuMVC will automatically apply that view to the matching action.  I definitely believe this was an improvement over the ASP.Net MVC ViewResult mechanism, and some other frameworks *cough* NancyFx *cough* adopted this idea after us.

The huge advantage of one model in, one model out was that your action methods became very clean and completely decoupled from the framework.  The pattern was specifically designed to make unit testing action methods easy, and by and large I feel like we met that goal.  It’s also been possible to reuse FubuMVC endpoint code in contexts outside of a web request because there is no coupling to FubuMVC itself in most of the action methods, and I think that’s been a big win from time to time.  Try to do that with Web API, ASP.Net MVC, or a Sinatra-flavored framework like NancyFx!
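To make the unit testing claim concrete, here’s the sort of test you can write against the MoreEndpoints sample from earlier (assuming NUnit, which the fubu codebases already use).  Nothing from FubuMVC.Core has to be referenced, stubbed, or faked:

    [TestFixture]
    public class MoreEndpointsTester
    {
        [Test]
        public void get_name_Name_builds_the_greeting_from_the_input_model()
        {
            // Plain old object construction -- no HttpContext, no mocks,
            // no framework bootstrapping, just the input model and the method
            var endpoint = new MoreEndpoints();

            var text = endpoint.get_name_Name(new NameInput { Name = "Shiner" });

            Assert.AreEqual("My name is Shiner", text);
        }
    }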

The downside was the times when you really did need to exert more fine grained control over HTTP requests and responses.  While you could always happily take in constructor dependencies to read and write to the raw HTTP request/response, this wasn’t all that obvious.

 

The Russian Doll Behavioral Model

I’ve regretted the name “FubuMVC” almost from the beginning because we weren’t really a Model 2 MVC framework.  Our “Action” methods just perform some work within an HTTP request, but don’t really act as logical “Controllers.”  It was also perfectly possible to build endpoints without Action methods, and other endpoints that used multiple Action methods.

The core of FubuMVC’s runtime was the Russian Doll “Behavior” model I described way back in 2011 — in which action methods are called inside of a pre-built Behavior in the middle of a chain.  For example, our HomeEndpoint.Index() action above really runs inside a chain of nested behaviors something like:

  1. AuthenticationBehavior
  2. InputBehavior — does content negotiation on the request to build the HomeModel input
  3. ActionCall -> HomeEndpoint.Index() — executes the action method with the input read by conneg and stores the output resource for later
  4. OutputBehavior — does conneg on the resource and “accepts” header to write the HTTP response accordingly

FubuMVC heavily uses additional Behaviors for cross cutting concerns like authorization, validation, caching, and instrumentation that can be added into a chain of behaviors to compose an HTTP pipeline.  Every web framework worth its salt has some kind of model like this, but FubuMVC took it farther by standardizing on a single abstraction for behaviors (everything is just a behavior) and exposing a model that allows you to customize the chain of behaviors, by convention or explicitly, on a single endpoint/route.
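The interface below is not FubuMVC’s actual Behavior contract; it’s a deliberately stripped down sketch of the Russian Doll idea, where every step in the pipeline has the same shape, holds the next behavior in the chain, and decides whether and when to call it:

    // Simplified sketch only -- the real FubuMVC abstraction has more to it
    public interface IBehavior
    {
        void Invoke();
    }

    public class AuthenticationBehavior : IBehavior
    {
        private readonly IBehavior _inner;

        public AuthenticationBehavior(IBehavior inner)
        {
            _inner = inner;
        }

        public void Invoke()
        {
            // Do the cross cutting work on the way "in"...
            if (!CurrentRequestIsAuthenticated())
            {
                // ...and short circuit the rest of the chain when appropriate
                return;
            }

            // ...otherwise hand off to the next doll in the nesting, e.g.
            // InputBehavior -> ActionCall -> OutputBehavior from the list above
            _inner.Invoke();
        }

        private static bool CurrentRequestIsAuthenticated()
        {
            // Stand-in for a real check against the current request
            return true;
        }
    }

Composing a chain is then just nesting one behavior inside another, which is exactly what the convention-driven chain construction does for you.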

In my opinion, the Behavior model gave FubuMVC far more modularity, extensibility, and composability than our peers.  I would go so far as to say that this concept has been validated by the sheer number of other frameworks like Microsoft’s WebAPI that have adopted some form of this pattern.

 

Clean, terse “magical” code versus explicit code

The downside to the behavior model, and especially to FubuMVC’s conventional construction of the nested Behaviors, is the “magical” aspect.  Because the framework itself is doing so much more work for you, there isn’t a blob of explicit code in one place that tells a developer everything that’s happening in an HTTP request.  In retrospect, even though I personally want to write the tightest, most concise code possible and avoid repetitive code, other developers are much happier writing and reading code that’s much more explicit — even when that requires them to write much more repetitive code.  It turns out that repetitive code ceremony is not a bad thing to a large number of developers.

Other developers hated the way that FubuMVC doesn’t do much to lead you to what to do next or make the framework’s capabilities discoverable, because so much of the behavior was meant to be driven by FubuMVC conventions based on what your code looks like rather than by you writing explicit code against FubuMVC APIs with Intellisense there to guide you along the way.  And yes, I’m fully cognizant that I just made an argument in favor of a Sinatra style fluent interface like NancyFx’s.  I know full well that many developers considered NancyFx much easier to learn than FubuMVC because of Nancy’s better discoverability.

We did offset the “magical” problem with diagnostics that I’ll discuss at a later time, but I think that the “magical” aspect of FubuMVC scared a lot of potential users away in retrospect.  If I had it to do over again, I think I would have pushed to standardize and describe our built in conventions much earlier than we did — but that’s another blog post altogether.

 

What I wanted to do in FubuMVC 2.0 

I had no intention of adopting a programming model more like Sinatra or Web API where you write more explicit code.  My feeling is that there is room in the world for more than one basic approach, so for 2.0, I wanted to double down on the “one model in, one model out” approach by extending more conventions for finer grained control over the HTTP request & response without losing the benefits.  Things like:

  • More built in model binding conventions to attach cookie values, system clock values, IPrincipals, and whatever else I could think of into the OOTB model binding.
  • Built in conventions for writing headers, response codes, and cookie values from resource model values to maintain the one model in, one model out motif while still allowing for more powerful HTTP API capabilities
  • We did change the content negotiation defaults to make views “additive” to Json/Xml endpoints, so that any endpoint that renders a view for accept=”text/html” can also return Json or Xml by default
  • We made it a little easier to replace the built in Json/Xml serialization
  • We did streamline the content negotiation internals to make customization smoother
  • New built in conventions to attach custom content negotiation readers and writers to the appropriate input and resource types

 

I’m throwing in the towel on FubuMVC

tl;dr:  I’m giving up on fubu and effectively retiring from OSS on .Net

 

Some things came to a head last week and I announced that I’m planning to cease work on FubuMVC after finishing some performance and startup time optimization work as a 2.0 release — which right now means that FubuMVC and its very large ecosystem of surrounding projects are effectively kaput.  Even just a couple weeks ago I was still excited about our 2.0 release and making big plans, but the writing has been on the wall for quite some time that FubuMVC no longer has enough community support to be viable, and that the effort I think it would take to change that situation probably isn’t worth it.

For me personally, FubuMVC has turned out to be a fantastic experience in my own learning and problem solving growth.  My current position is directly attributable to FubuMVC and I’ve generally enjoyed much of the technical work I’ve gotten to do over the past 2-3 years.  I’ve forged new relationships with the folks who I met through my work on FubuMVC.

On the downside, it’s also been a massive opportunity cost because of all the things I haven’t learned or done in the meantime while FubuMVC took up so much of my time, and that’s the main reason it has to stop now.

Some History

 

Rewind to the summer of 2008.  I had just started a new job where we were going to do a big rewrite of my new company’s website application.  My little team had a choice in front of us: we could either choose Ruby on Rails (the hot thing of the day) or continue with .Net and use the brand new ASP.Net MVC framework coming down the pike.  If I had it all to do over again, I would choose RoR in a heartbeat and just get on with getting the project done.  At the time though, the team before us had completely flopped with RoR (not for technical reasons) and the company was understandably leery of using Rails again.  At the same time, this was the tail end of the ALT.Net movement and I felt very optimistic about .Net.  Plus, I had a large personal investment in StructureMap and Fluent NHibernate (yes Virginia, there was a time when we thought ORMs were a good idea).

We opted for .Net and started working with early versions of ASP.Net MVC.  For various reasons, I disliked MVC almost immediately and started to envision a different way of expressing HTTP endpoints that would require less coupling to the framework and less cruft code, with better testability at the unit level.  For a little while we tried to work within ASP.Net MVC by customizing it very heavily to make it work the way we wanted, but MVC then and now isn’t a very modular codebase and we weren’t completely happy with the results.

From there we got cocky, and in December 2009 we embarked on our own framework that we called FubuMVC, based on the “for us, by us” attitude.  We believed that, after all the bellyaching we had done for years about how bad WebForms was (and it was), Microsoft had given us yet another heavily flawed framework with little input from the community that was inevitably going to be the standard in .Net.

Fast forward to today and FubuMVC is up to v1.3, has arguably spawned a healthy ecosystem of extensions, and is used in several very large applications (fubu’s sweet spot was always larger applications).  It has also failed miserably to attract or generate much usage or awareness in the greater .Net community — and after this long it’s time for me to admit that the gig is up.

 

Why I think it failed

Setting aside the very real question of whether or not OSS in .Net is a viable proposition (it’s largely not, no matter how hoarse Scott Hanselman makes himself trying to say otherwise), FubuMVC failed because we — and probably mostly me, because I had the most visibility by far — did not do enough to market ourselves and build community through blog posts, documentation, and conference speaking.  At one point I think I went almost 2 years without writing any blog posts about fubu, and I only gave 3-4 conference talks on FubuMVC total over the past 5 years.  I believe that if we’d just tried to get FubuMVC in front of many more people earlier and generated more interest, we might have had enough community to do more, document more, and grind away the friction in FubuMVC faster through increased feedback.

We also didn’t focus hard enough on creating a good, frictionless getting started story to make FubuMVC approachable for newbies.  FubuMVC was largely written for and used on very large, multi-year projects, so it’s somewhat understandable that we didn’t focus a lot on a task that we ourselves only did once or twice a year, but that still killed off our growth as a community.  At this point, I feel good about our Rails-esque “fubu new” story now, but we didn’t have that in the early days and even now that freaks out most .Net developers who don’t believe anything is real until there’s a Visual Studio plugin.

I’ll leave a technical retrospective of what did and did not work well for a later time.

What I’m doing next

I turned 40 this January, but I feel like I’m a better developer than ever and I’m not really burnt out.  I tease my wife that she’s only happy when she’s planning something new for us, but I know that I’m happiest when I’ve got some kind of technical project going on the side that lets me scratch the creative itch.

I’d like to start blogging again because I used to enjoy it way back when, but I wouldn’t hold your breath on that one.

We’re kicking the tires on Golang at work for server side development (I’m dubious about the productivity of the language, but the feedback cycle and performance are eye popping) and the entire Scala/TypeSafe stack.  I’m a little tempted to rewrite StructureMap in Scala as a learning experience (and because I think the existing Scala/Java IoC containers blow chunks).  Mostly though, Josh Arnold and I are talking about trying to rebuild Storyteller on a Node.js/Angular.js/Require.js/Mimosa.js/Bower stack so I can level up on my JavaScript skills because, you know, I’ve got a family to support and JavaScript quietly became the most important programming language on the planet all of a sudden.

I’ll get around to blogging a strictly technical retrospective later.  Now that I’m not under any real pressure to deliver new code with fubu, I might just manage to blog about some of the technical highlights.  And if I can do it without coming off as pissed and bitter, I’ve got a draft on my thoughts about .Net and the .Net community in general that I’ll publish.

Introducing FubuCsProjFile for Project & Solution File Manipulation

tl;dr:  FubuCsProjFile is a new library from the fubu community for manipulating Visual Studio.Net project files and a new composable templating engine.

The FubuMVC community was busy last year building all new functionality for build automation, documentation generation, and project templating.  What we haven’t done yet is actually talk about what we were doing in any kind of way that might make it possible for other folks to kick the tires on all that stuff.  This blog post and the heavily under construction website at http://fubuworld.com are an attempt to change that.

For a couple years we’ve had a couple of one-off pieces of code to manipulate csproj files with raw Xml manipulation copied across some of our tooling.  When we started to get serious about rebuilding the “fubu new” functionality, we knew that we first needed a more serious way to add, query, remove, and modify items in .csproj files and .sln files.  I looked around for prior art, but found little besides the MSBuild libraries themselves, which — shockingly! — did not work in Mono (they wouldn’t even compile, as I recall).  Fortunately, Monodevelop has very robust MSBuild manipulation code with all kinds of care taken to avoid unnecessary merge problems by maintaining line breaks and file formatting.  Because it has a permissive license, I mostly copied the csproj manipulation code out of Monodevelop and wrapped a somewhat prettier object model around their very low level API.

On top of the csproj file manipulation, I added a hack-y class for adding and removing projects from Visual Studio.Net solution files and a from scratch templating engine we use heavily from our “fubu new” functionality.

We certainly don’t yet support every single thing you can do in a csproj file, but we’re already using FubuCsProjFile within Bottles to attach embedded resources, inside the forthcoming Ripple 3.0 release for querying and manipulating assembly references, and as part of the prototype functionality inside of the fubu.exe tool for generating Spark or Razor views.

FubuCsProjFile is available on Nuget under the permissive Apache 2.0 license.  We have received some reports that FubuCsProjFile has some unit tests that break on Mono (“\” instead of “/”, Unix vs. Windows line breaks, the normal stuff).  That’ll get resolved soon-ish, but that just means that I can’t claim that it will work flawlessly on Mono/*nix right now.