Let’s try this again: StructureMap 3.0 is en route as of now

I’ll be honest, I haven’t worked much on StructureMap since I shelved my original 3.0/rewrite work in the summer of 2010 — and yes, the documentation is almost worthless.  Now that FubuMVC has reached that magic 1.0 mark I’m turning my attention back to StructureMap for a bit, but I think I want some feedback about what I’m thinking right now.

For background, read:

  1. Kicking off StructureMap 3.0 — I just re-read this, and I’m still thinking all of the same things here and all the feedback is still valid.
  2. Proposed StructureMap 2.7 Release

A month ago my plan was to do a small 2.7 release on the existing codebase to remove all the [Obsolete] API calls and grab some pull requests along the way.  Having done that, I would then turn my attention back to the 3.0 codebase where I planned to essentially rewrite the core of StructureMap and retrofit the existing API on top of the new, cleaner core.  A week or so into the work for the 2.7 release and I’ve changed my mind.  First off, by the rules of semantic versioning, I should bump the major version to 3.0.0 when I make the breaking API changes.  Secondly, I’m coming around to the idea of restructuring the existing code in place instead of a full rewrite.

To reiterate the major points, the 3.0 release means:

  1. All [Obsolete] API calls are going away
  2. Removing the strong naming — if you absolutely *have* to have this, maybe we can make separate nuget packages.  I suggest we name that “structuremap.masochistic.”
  3. Move to .Net 4.0.  I don’t think it’s time to go to 4.5 yet and I don’t really want to mess with that anyway
  4. Taking a dependency on FubuCore — if that causes pushback we’ll ilmerge it
  5. Streamlining the Xml support
  6. Rewrite the “Profile” feature completely
  7. Make nested containers not be a crime against computer science
  8. NOT adding every random brainfart “feature” that Windsor has
  9. Make it faster
  10. Make the diagnostics much better
  11. Removing some obscure, clumsy features I never use and really wish you wouldn’t either

Additionally, we have a new “living documentation” infrastructure baking for the Fubu projects.  I know some work already happened to transfer the StructureMap docs to Jekyll, but I’d far prefer to publish on the new fubu world website whenever that happens.

For right now, the 3.0 branch is in the original StructureMap repository at https://github.com/structuremap/structuremap/tree/three.

My Opinions on Data Setup for Functional Tests

I read Jim Holmes’s post Data Driven Testing: What’s a Good Dataset Size? with some interest because it’s very relevant to my work.  I’ve been heavily involved in test automation efforts over the past decade, and I’ve developed a few opinions about how best to handle test data input for functional tests (as opposed to load/scalability/performance tests).  First though, here’s a couple concerns I have:

  • Automated tests need to be reliable.  Tests that depend on external, environmental state can be brittle in unexpected ways.  I hate that.
  • Your tests will fail from time to time with regression bugs.  It’s important that your tests are expressed in a way that makes it easy to understand the cause and effect relationship between the “known inputs” and the “expected outcomes.”  I can’t tell you how many times I’ve struggled to fix a failing test because I couldn’t even understand what exactly it was supposed to be testing.
  • My experience says loudly that smaller, more focused automated tests are far easier to diagnose and fix when they fail than very large, multi-step automated tests.  Moreover, large tests that drive user interfaces are much more likely to be unstable and unreliable.  Regardless of platform, problem domain, and team, I know that I’m far more productive when I’m working with quicker feedback cycles.  If any of my colleagues are reading this, now you know why I’m so adamant about having smaller, focused tests rather than large scripted scenarios.
  • Automated tests should enable your team to change or evolve the internals of your system with more confidence.

Be Very Cautious with Shared Test Data

If I had my druthers, I would not share any test setup data between automated tests except for very fundamental things like the inevitable lookup data and maybe some default user credentials or client information that can safely be considered static.  Unlike “real” production coding where “Don’t Repeat Yourself” is crucial for maintainability, in testing code I’m much more concerned with making the tests as self-explanatory as possible and completely isolated from one another.  If I share test setup data between tests, there’s frequently going to be a reason why you’ll want to add a little bit more data for a new test, which ends up breaking the assertions in the existing tests.  Besides that, using a shared test data set means that you probably have more data than any single test really needs — making the diagnosis of test failures harder.  For all of those reasons and more, I strongly prefer that my teams copy and paste bits of test data sets to keep them isolated by test rather than shared.

Self-Contained Tests are Best

I’ve been interested in the idea of executable specifications for a long time.  In order to make the tests have some value as living documentation about the desired behavior of the system, I think it needs to be as clear as possible what the relationship is between the germane data inputs and the observed behavior of the system.  Plus, automated tests are completely useless if you cannot reliably run them on demand or inside a Continuous Integration build.  In the past I’ve also found that understanding or fixing a test is much harder if I have to constantly ALT-TAB between windows or even just swivel my head between a SQL script or some other external file and the body of the test script itself.

I’ve found that both the comprehensibility and reliability of an automated test are improved by making each automated test self-contained.  What I mean by that is that every part of the test is expressed in one readable document including the data setup, exercising the system, and verifying the expected outcome.  That way the test can be executed at almost any time because it takes care of its own test data setup rather than being dependent on some sort of external action.  To pull that off you need to be able to very concisely describe the initial state of the system for the test, and shared data sets and/or raw SQL scripts, Xml, Json, or raw calls to your system’s API can easily be noisy.  Which leads me to say that I think you should…
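To make that concrete, here’s a minimal sketch of what I mean by a self-contained test; the `theSystem` helper and all of the method names below are invented purely for illustration, not taken from any real project:

```csharp
// A hypothetical self-contained test: data setup, exercising the system,
// and verification all live in one readable method, so the test can run
// at any time without depending on an external setup action.
[Test]
public void assigning_a_case_notifies_the_new_owner()
{
    // Arrange: the test creates exactly the data it needs
    var user = theSystem.CreateUser("annie");
    var issue = theSystem.CreateCase(title: "Broken login");

    // Act: exercise the system through its public services
    theSystem.AssignCase(issue, assignTo: user);

    // Assert: the expected outcome sits right next to the known inputs
    theSystem.NotificationsFor(user).ShouldContain("Broken login");
}
```

The point isn’t the specific API — it’s that a reader can see the cause and effect relationship without leaving the test body.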

Decouple Test Input from the Implementation Details

I’m a very large believer in the importance of reversibility to the long term success of a system.  With that in mind, we write automated tests to pin down the desired behavior of the system and spend a lot of energy towards designing the structure of our code to more readily accept changes later.  All too frequently, I’ve seen systems become harder to change over time specifically because of tight coupling between the automated tests and the implementation details of a system.  In this case, the automated test suite will actually retard or flat out prevent changes to the system instead of enabling you to more confidently change the system.  Maybe even worse, that tight coupling means that the team will have to eliminate or rewrite the automated tests in order to make a desired change to the system.

With that in mind, I somewhat strongly recommend expressing your test data input in some form of interpreted format rather than as raw SQL statements or direct API calls.  My team uses Storyteller2 where all test input is expressed in logical tables or “sentences” that are not tightly coupled to the structure of our persisted documents.  I think that simple textual formats or interpreted Domain Specific Languages are also viable alternatives.  Despite the extra work to write and maintain a testing DSL, I think there are some big advantages to doing it this way:

  • You’re much more able to make additions to the underlying data storage without having to change the tests.  With an interpreted data approach, you can simply add fake data defaults for new columns or fields
  • You can express only the data that is germane to the functionality that your test is targeting.  More on this below when I talk about my current project.
  • You can frequently make the test data setup be much more mechanically cheaper per test by simply reducing the amount of data the test author will have to write per test with sensible default values behind the scenes.  I think this topic is probably worth a blog post on its own someday.
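As a rough illustration of that last point, here’s a hypothetical input model with sensible defaults baked in — none of these class or member names come from a real project:

```csharp
// The test author only states the data that is germane to the test;
// everything else falls back to the defaults supplied here.
public class UserInput
{
    public string Username = "DefaultUser";
    public string Password = "DefaultPassword";
    public string[] Clients = { "ClientA" };
}

// A fixture translates the interpreted input into the persisted model in
// exactly one place, so a new field in storage means one new default
// here rather than edits to every existing test.
public User BuildUser(UserInput input)
{
    return new User
    {
        Username = input.Username,
        PasswordHash = HashPassword(input.Password),
        Clients = input.Clients,
        CreatedDate = DateTime.UtcNow // fake default for a newer field
    };
}
```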

This goes far beyond just the test data setup.  I think it’s very advantageous in general to express your functional tests in a way that is independent of implementation details of your application — especially if you’re going to drive a user interface in your testing.

Go in through the Front Door

Very closely related to my concerns about decoupling tests from the implementation details is avoiding “backdoor” ways of setting up test scenarios.  My opinion is that you should set up test scenarios by using the real services your application itself uses to persist data.  While this does risk making the tests run slower by going through extra runtime hoops, I think it has a couple advantages:

  • It’s easier to keep your test automation code synchronized with your production code as you refactor or evolve the production code and data storage
  • It should result in writing less code period
  • It reduces logical duplication between the testing code and the production code — think database schema changes
  • When you write raw data to the underlying storage mechanisms you can very easily get the application into an invalid state that doesn’t happen in production
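A front-door setup step might look something like the sketch below; the repository interface and helper names are hypothetical, but the idea is that scenario data travels through the same persistence services the application uses:

```csharp
// Hypothetical "arrange" step: resolve the application's own repository
// from the IoC container and persist scenario data through it, rather
// than writing rows or documents directly to the storage engine.
public void TheUsersAre(IEnumerable<UserRow> rows)
{
    var repository = _container.GetInstance<IUserRepository>();
    foreach (var row in rows)
    {
        // The production code path runs its own mapping and validation,
        // so the test cannot sneak the system into an invalid state
        repository.Save(MapToUser(row));
    }
}
```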

Case in point, I met with another shop a couple years ago that was struggling with their test automation efforts.  They were writing a Silverlight client with a .Net backend, but using Ruby scripts with ActiveRecord to setup the initial data sets for automated tests.  I know from all of my ex-.Net/now Ruby friends that everything in Ruby is perfect, but in this case, it caused the team a lot of headaches because the tests were very brittle anytime the database schema changed with all the duplication between their C# production code and the Ruby test automation code.

Topics for later…

It’s Saturday and my son and I need to go to the park while the weather’s nice, so I’m cutting this short.  In a later post I’ll try to get more concrete with examples and maybe an additional post that applies all this theory to the project I’m doing at work.

Proposed StructureMap 2.7 Release

So StructureMap 3 hasn’t really gotten going again, but I still have intentions of doing so this year.  In the meantime, I’ve got a batch of pull requests stacked up in the StructureMap 2.6 codebase and it’s time for a new intermediate release.  At this time, what I’d like to do is rev up to StructureMap 2.7 and do this:

  1. Take in all the outstanding pull requests
  2. Remove all [Obsolete] API members
  3. Mark as [Obsolete] various parts of the registration API that I know I will not support in 3.0 (conditional construction comes to mind)
  4. Mark all StructureMap attributes except for [DefaultConstructor] as [Obsolete], as I think we will dump all the circa-2003 attributes that you used to need.
  5. Remove the strong naming because it’s death in combination with Nuget.  If this is an issue for you, I will happily take a pull request to make a separate nuget package for a signed version of StructureMap
  6. Ideally, I’d like to clean up the more coarse grained unit tests in a new namespace called “Acceptance” in order to get these ready for usage in StructureMap 3 and maybe provide a level of living documentation for later.
  7. MAYBE — take a look at cleaning up the exception stack traces to give you more contextual information about where StructureMap caught an exception.  We lost a lot of contextual information when I eliminated the Reflection.Emit usage in favor of compiling Expressions.


Thoughts?

Clean Database per Automated Test Run? Yes, please.

TL;DR We’re able to utilize RavenDb’s support for embedded databases, some IoC trickery, and our FubuMVC.RavenDb library to make automated testing far simpler by quickly spinning up a brand new database for each individual automated test to have complete control over the state of our system.  Oh, and taking ASP.Net and relational databases out of the equation makes automated functional testing far easier too.

Known inputs and expected outcomes is the mantra of successful automated testing.  This is generally pretty simple with unit tests and more granular integration tests, but sooner or later you’re going to want to exercise your application stack with a persistent database.  You cannot sustain your sanity, much less be successful, while doing automated testing if you cannot easily put your system in a known state before you try to exercise the system.  Stateful elements of your application architecture include things like queues, the file system, and in-memory caches, but for this post I’m only concerned with controlling the state of the application database.

On my last several projects we’ve used some sort of common test setup action to roll back our database to a near empty state before a test adds the exact data to the database that it needs as part of the test execution (the “arrange” part of arrange, act, and assert). You can read more about the ugly stuff I’ve tried in the past at the bottom of this post, but I think we’ve finally arrived at a solution for this problem that is succeeding.

Our Solution

First, we’re using RavenDb as a schema-less document database.  We also use StructureMap to compose the services in our system, and RavenDb’s IDocumentStore is built and scoped as a singleton.  In functional testing scenarios, we run our entire application (FubuMVC website hosted with an embedded web server, RavenDb, our backend service) in the same AppDomain as our testing harness, so it’s very simple for us to directly alter the state of the application.  Before each test, we:

  1. Eject and dispose any preexisting instance of IDocumentStore from our main StructureMap container
  2. Replace the default registration of IDocumentStore with a new, completely empty instance of RavenDb’s EmbeddedDocumentStore
  3. Write a little bit of initial state into the new database (a couple pre-canned logins and tenants).
  4. Continue to the rest of the test that will generally start by adding test specific data using our normal repository classes helpfully composed by StructureMap to use the new embedded database

I’m very happy with this solution for a couple different reasons.  First, it’s lightning fast compared with other mechanics I’ve used and describe at the bottom of this post.  Secondly, using a schema-less database means that we don’t have much maintenance work to do to keep this database cleansing mechanism up to date with new additions to our persistent domain model and event store — and I think this is a significant source of friction when testing against relational databases.

Show me some code!

I won’t get into too much detail, but we use StoryTeller2 as our test harness for functional testing.  The “arrange” part of any of our functional tests gets expressed like this example, taken from one of our tests for our multi-tenancy support:

----------------------------------------------
|If the system state is |

|The users are                                |
|Username |Password |Clients                  |
|User1    |Password1|ClientA, ClientB, ClientC|
|User2    |Password2|ClientA, ClientB         |
----------------------------------------------

In the test expressed above, the only state in the system is exactly what I put into the “arrange” section of the test itself.  The “If the system state is” DSL is implemented by a Fixture class that runs this little bit of code in its setup:

Code Snippet

        public override void SetUp(ITestContext context)
        {
            // There's a bit more than this going on here, but the service below
            // is part of our FubuPersistence library as a testing hook to
            // wipe the slate clean in a running application
            _reset = Retrieve<ICompleteReset>();
            _reset.ResetState();
        }

As long as my team is using our “If the system state is” fixture to setup the testing state, the application database will be set back to a known state before every single test run — making the automated tests far more reliable than other mechanisms I’ve used in the past.

The ICompleteReset interface originates from the FubuPersistence project that was designed in no small part to make it simpler to completely wipe out the state of your running application.  The ResetState() method looks like this:

Code Snippet

        public void ResetState()
        {
            // Shut down any type of background process in the application
            // that is stateful or polling before resetting the database
            _serviceResets.Each(x => {
                trace("Stopping services with {0}", x.GetType().Name);
                x.Stop();
            });

            // The call to replace the database
            trace("Clearing persisted state");
            _persistence.ClearPersistedState();

            // Load any basic state that has to exist for all tests.
            // I'm thinking that this is nothing but a couple default
            // login credentials and maybe some static lookup list data
            trace("Loading initial data");
            _initialState.Load();

            // Restart any and all background processes to run against the
            // newly created database
            _serviceResets.Each(x => {
                trace("Starting services with {0}", x.GetType().Name);
                x.Start();
            });
        }

The method _persistence.ClearPersistedState() called above to roll back all persistence is implemented by our RavenDbPersistedState class.  That method does this:

Code Snippet

        public void ClearPersistedState()
        {
            // _container is the main StructureMap IoC container for the
            // running application.  The line below will eject any existing
            // IDocumentStore from the container and dispose it
            _container.Model.For<IDocumentStore>().Default.EjectObject();

            // RavenDbSettings is another class from FubuPersistence that
            // controls the initial creation of a RavenDb IDocumentStore
            // object.  In this case, we're overriding the normal project
            // configuration from the App.config with instructions to use
            // an EmbeddedDocumentStore running completely in memory.
            _container.Inject(new RavenDbSettings
            {
                RunInMemory = true
            });
        }

The code above doesn’t necessarily create a new database, but we’ve set ourselves up to use a brand new embedded, in memory database whenever something does request a running database from the StructureMap container.  I’m not going to show this code for the sake of brevity, but I think it’s important to note that the RavenDb database construction will use your normal mechanisms for bootstrapping and configuring an IDocumentStore including all the hundred RavenDb switches and pre-canned indices.
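For reference, the deferred construction boils down to something like this when a service next asks the container for a document store.  EmbeddableDocumentStore and its RunInMemory property are real parts of the RavenDb embedded client API; the surrounding wiring shown here is simplified:

```csharp
// Build a brand new, completely empty RavenDb database that lives only
// in memory and disappears with the AppDomain.
var store = new EmbeddableDocumentStore
{
    RunInMemory = true
};

// Initialize() runs the normal bootstrapping, so any pre-canned indices
// and configuration get applied to the fresh database as well.
store.Initialize();
```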

All the code shown here is from the FubuPersistence repository on GitHub.

Conclusion

I’m generally happy with this solution.  So far, it’s quick in execution and we haven’t required much maintenance as we’ve progressed, other than adding more default data.  Hopefully, this solution will be applicable and reusable in future projects out of the box.  I would happily recommend a similar approach to other teams.

But, but, but…

If you did read this carefully, I think you’ll find some things to take exception with:

  1. I’m assuming that you really are able to test functionality with bare minimum data sets to keep the setup work to a minimum and the performance at an acceptable level.  This technique isn’t going to be useful for anything involving performance or load testing — but are you really all that concerned about functionality testing when you do that type of testing?
  2. We’re not running our application in its deployed configuration when we collapse everything down to the same AppDomain.  Why I think this is a good idea, the benefits, and how we do it are a topic for another blog post.  Promise.
  3. RavenDb is schema-less and that turns out to make a huge difference in how long it takes to spin up a new database from scratch compared to relational databases.  Yes, there may be some pre-canned indices that need to get built up when you spin up the new embedded database, but with an empty database I don’t see that as a show stopper.

Other, less successful ways of controlling state I’ve used in the past

Over the years I’ve done automated testing against persisted databases with varying degrees of frustration.  The worst possible thing you can do is to have everybody testing against a shared relational database in the development and testing environments.   You either expect the database to be in a certain state at the start of the test, or you run a stored procedure to set up the tables you want to test against.  I can’t even begin to tell you how unreliable this turns out to be when more than one person is running tests at the same time and fouling up the test runs.  Unfortunately, many shops still try to do this and it’s a significant hurdle to clear when doing automated testing.  Yes, you can try to play tricks with transactions to isolate the test data or try to use randomized data, but I’m not a believer in either approach.

Having an isolated relational database per developer, preferably on their own development box, was a marked improvement, but it adds a great deal of overhead to your project automation.  Realistically, you need a developer to be able to build out the latest database on the fly from the latest source on their own box.  That’s not a deal breaker with modern database migration tools, but it’s still a significant amount of work for your team.  The bigger problem to me is how you tear down the existing state in a relational database to put it into a known state before running an automated test.  You’ve got a couple choices:

  1. Destroy the schema completely and rebuild it from scratch.  Don’t laugh, I’ve seen people do this and the tests were as painfully slow as you can probably imagine.  I suppose you could also script the database to rollback to a checkpoint or reattach a backed up copy of the database, but again, I’m never going to recommend that if you have other options.
  2. Execute a set of commands that wipes most if not all of the data in a database before each test.  I’ve done this before, and while it definitely helped create a known state in the system, this strategy performed very poorly and it took quite a bit of work to maintain the “clean database” script as the project progressed.  As a project grows, the runtime of your automated test runs becomes very important to keep the feedback loop useful.  Slow tests hamper the usefulness of automated testing.
  3. Selectively clean out and write data to only the tables affected by a test.  This is probably much faster performance wise, but I think it will require more coding inside of the testing code to do the one off, set up the state code.

* As an aside, I really suggest keeping the project database data definition language scripts and/or migrations in the same source control system as the code so that it’s very easy to trace the version of the code running against which version of the database schema.  The harsh reality in my experience is that the same software engineering rigor we generally use for our application code (source control, unit testing, TDD, continuous integration) is very often missing in regards to the relational database DDL and environment. If you’re a database guy talking to me at a conference, you better have your stuff together on this front before you dare tell me that “my developers can’t be trusted to access my database.”

FubuMVC Turns 1.0

The FubuMVC team has published a 1.0 version of the main libraries (FubuMVC.Core, FubuMVC.StructureMap, FubuMVC.AutoFac, FubuMVC.Core.UI, and the view engines) to the public nuget feed.  We’re certainly not “done,” and we’re severely lacking in some areas (*cough* documentation *cough*), but I’m happy with our technical core and feel like we’re able to make that all important, symbolic declaration of “SemVer 1/the major public API signatures are stable.”

It’s been a long journey from the talk Chad Myers and I gave at KaizenConf all the way back in 2008 to CodeMash in 2013, and in this highly collaborative OSS on GitHub world, we’ve had a lot of collaborators.  In particular, I’d like to thank Chad Myers and Josh Flanagan for their help at the beginning, Josh Arnold for being my coding partner the past couple years, Corey Kaylor for being the grown up in the room, and Alex Johannessen for his boundless enthusiasm.  I’ve genuinely enjoyed my interactions with the FubuMVC community and I look forward to seeing us grow in the new year.

There’s plenty more to do, but for a week or so, my only priority is rest (and finishing the last couple hundred pages of A Memory of Light) — and that doesn’t have anything to do with HATEOAS or hypermedia.


What’s not there yet…

I saw somebody on Twitter last week saying that the “U” in FubuMVC stands for “undocumented,” and that it’s so bad that we had to use two “U’s.”  I’m very painfully aware of this, and I think we’re ready to start addressing the issue permanently.

  • A good “quick start” nuget and guide.  The FubuMVC team made a heroic effort over the past couple months to make the FubuMVC 1.0 release just before our CodeMash workshop this week, and I dropped the ball on updating the old “FubuMVC” nuspec file to be relevant to the streamlined API’s as they are now.  
  • The new “FubuWorld” website with documentation on all of the major and hopefully most of the minor FubuMVC projects (including StructureMap and StoryTeller as well).  We effectively wrote our own FubuMVC-hosted version of readthedocs, but we haven’t yet exploited this capability and gotten a new website with updated documentation online.  I’m deeply scarred by my experiences with documenting StructureMap and how utterly useless it has been.   This time the projects will have strong support for living documentation.
  • Lots of Camtasia videos
  • Lots of google-able blog posts

Continuous Design and Reversibility at Agile Vancouver (video)

In November I got to come out of speaking retirement at Agile Vancouver.  Over a couple days I finally got to meet Mike Stockdale in person, have some fun arguments with Adam Dymitruk, see some beautiful scenery, and generally annoy the crap out of folks who are hoarding way too much relational database cheese in my talk called Continuous Design and Reversibility (video via the link).

I think the quality of reversibility in your architecture is a very big deal, especially if you have the slightest interest in effectively doing continuous design.  Roughly defined, reversibility is your ability to alter or delay elements of your software architecture.  Low reversibility means that you’re more or less forced to get things right upfront and it’s expensive to be wrong — and sorry, but you will be wrong about many things in your architecture on any non-trivial project.  By contrast, using techniques and technologies that have higher reversibility qualities vastly improves my ability to delay technical decisions so that I can focus on one thing at a time like say, building out the user interface for a feature to get vital user feedback quickly without having to first lay down every single bit of my architecture for data access, security or logging first.  In the talk, I gave several concrete examples from my project work including the usage of document databases instead of relational databases.

Last Responsible Moment

I think we can all conceptually agree with the idea of the “Last Responsible Moment,” meaning that the best time to make a decision is as late in the project as possible when you have the most information about your real needs.  How “late” your last responsible moment is for any given architectural decision is largely a matter of reversibility.

For the old timers reading this, consider the move from VB6 with COM to .Net a decade and change ago.  With COM, adding a new public method to an existing class or changing the signature of an existing public method could easily break the binary compatibility, meaning that you’d have to recompile any downstream COM components that used the first COM component.  In that scenario, it behooved you to get the public signatures locked down and stable as fast as possible to avoid the clumsiness and instability with downstream components — and let me tell you youngsters, that’s a brittle situation because you always find reasons to change the API’s when you get deep into your requirements and start stumbling into edge cases that weren’t obvious upfront.  Knowing that you can happily add new public members to .Net classes without breaking downstream compatibility, my last responsible moment for locking down public API’s in upstream components is much later than it was in the VB6 days.

The original abstract:

From a purely technical perspective, you can almost say that Extreme Programming was a rebellion against the traditional concept of “Big Design Upfront.” We spent so much time explaining why BDUF was bad that we might have missed a better conversation on just how to responsibly and reliably design and architect applications and systems in an evolutionary way.

I believe that the key to successful continuous or evolutionary design is architectural “reversibility,” the ability to reverse or change technical decisions in the code. Designing for reversibility helps a team push back the “Last Responsible Moment” to make more informed technical decisions.

I work on a very small technical team building a large system with quite a bit of technical complexity. In this talk I’ll elaborate on how we’ve purposely exploited the concept of reversibility to minimize the complexity we have to deal with at any given time. More importantly, I’ll talk about how reversibility led us to choose technologies like document databases, how we heavily exploit conventions in the user interface, and the testing process that made it all possible. And finally, just to make the talk more interesting, I’ll share the times when delaying technical decisions blew up in our faces.


Building a Simple Bottle to Extend FubuMVC

WARNING:  I dashed out this code *fast* just for the blog post.  All the same, the code referenced here is on GitHub.

If you follow FubuMVC at all you’ve surely seen the term “Bottles” being tossed around in regards to extensibility and modularity in FubuMVC applications, but not a lot of detail about what it is or how to use it.  While I think that most people focus on using Bottles as a way to split functional areas of your application into separate projects or even code repositories, you can also use Bottles to create shared infrastructure extensions to FubuMVC — and that’s the topic of this blog post.

I wanna use a DatePicker control for all Date fields!

The easiest example I can think of is to build a Bottle that “drops in” an html convention to make every DateTime property whose name ends in “Date” be edited by the jquery.ui datepicker control, implemented by the new FubuMVC.DatePicker project I threw together specifically for this blog post (as in, just a demonstrator, not tested very hard).  We really do want to make this completely “drop in” with no other configuration necessary, so the bottle is going to have to contain its own JavaScript assets and have a way to inject both the html convention policies and assets into the base FubuMVC project it gets added to.  So, how do we do that?  Well first, let’s…

Grab Some Dependencies

The FubuMVC.DatePicker project depends on a couple of other FubuMVC-related nugets:

  1. FubuMVC.Core – duh.
  2. FubuMVC.Core.UI – the html convention support for FubuMVC applications among other things
  3. FubuMVC.Core.Assets — the asset pipeline for FubuMVC.  Conveniently enough, this Bottle comes with *a* fairly recent version of jquery just in case the application doesn’t already contain jquery.
  4. FubuMVC.JQueryUI — integrates jquery.ui into a FubuMVC application and also contains a default version of jquery.ui and a jquery.ui theme.  All of the assets can be happily overridden in the application, so don’t think you’re stuck with whatever comes in this Bottle.

Now, we’ll add the…

Client Side Script Support

We’re using the jquery.ui datepicker plugin for the client side date picker support because it’s simple to use.  I’ll do the client side activation with a new file called “DatePickerActivator.js” added at /content/scripts under the root of our FubuMVC project:

DatePickerActivator.js

$(document).ready(function () {
    $('.datepicker').datepicker();
});

The code above isn’t going to help when you’re using client side templates in a “Single Page Application” type approach, but that’s beyond the scope of this post, so we’ll call it good enough for now.  Now, DatePickerActivator.js does have some dependencies, so let’s declare those to the asset pipeline with a file called “datepicker.asset.config”:

DatePickerActivator.js requires jquery, jqueryui, jqueryuicss

A couple of things to note above:

  1. Any file named “*.asset.config” is picked up by FubuMVC’s asset pipeline and interpreted as asset configuration
  2. The asset pipeline supports a small textual DSL to express dependencies, asset groups, and other asset rules
  3. “jquery” and “jqueryui” are aliases that the asset pipeline will resolve into an actual asset name.  In the asset pipeline library, there’s another declaration:  “jquery is jquery-1.8.2.min.js” so that asking for “jquery” resolves to “jquery-1.8.2.min.js”  at runtime.  This was done purposely to make it easier to upgrade javascript libraries that embed versions into the file name without breaking the rest of the application.
  4. Anytime we request “DatePickerActivator.js” in a view, the asset pipeline will ensure that jquery and jquery.ui libraries are also added into the page.
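Putting those pieces together, the two kinds of declarations read side by side like the sketch below.  The alias line is the one the post says lives in the asset pipeline library itself; it’s repeated here only to illustrate the DSL syntax, not because this Bottle needs to declare it:

```
jquery is jquery-1.8.2.min.js
DatePickerActivator.js requires jquery, jqueryui, jqueryuicss
```

Swapping in a newer jquery later would then be a one line change to the alias declaration rather than a hunt through every “requires” line in the application.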

Html Convention Support

Next we need to build out the actual html convention policy in the following code:

DatePickerBuilder

public class DatePickerBuilder : ElementTagBuilder
{
    public override bool Matches(ElementRequest token)
    {
        return token.Accessor.PropertyType.IsDateTime() && token.Accessor.Name.EndsWith("Date");
    }

    public override HtmlTag Build(ElementRequest request)
    {
        // Add the DatePickerActivator.js into the asset pipeline for this page
        request.Get<IAssetRequirements>().Require("DatePickerActivator.js");

        string text = null == request.RawValue || DateTime.MinValue.Equals(request.RawValue)
                          ? ""
                          : request.RawValue.As<DateTime>().ToShortDateString();

        return new HtmlTag("input").Text(text).Attr("type", "text").AddClass("datepicker");
    }
}

It’s not a lot of code, but it’s enough to declare when the new html convention applies (properties of type DateTime whose names end in “Date”) and how to build the html tag for the editor.  The only fancy thing is that this code can inject its script requirements into the asset pipeline.  It’s important to note that the call to IAssetRequirements.Require(asset name) above does not write out a script tag right then and there; it simply tells the asset pipeline that “DatePickerActivator.js” and all of its dependencies (and their dependencies) are required on this page.  Somewhere in the view (typically in the footer) there’s a single call that emits script tags for all of the pending script dependencies (the asset pipeline can optionally do script compression and combination with the assets injected by html conventions to avoid making extra http requests).
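To make that deferred write concrete, the single call in the layout footer might look something like the sketch below in a Spark layout (a sketch only; the surrounding markup is hypothetical, and WriteScriptTags() is the helper referenced in the consumption steps later in this post):

```
<!-- Application.spark, near the bottom of the master layout -->
!{this.WriteScriptTags()}
```

Because the write happens in one place at the end of the page, every Require() call made earlier in the request, including the ones made by html conventions, gets folded into that single batch of script tags.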

You do not have to use FubuMVC’s asset pipeline with a FubuMVC application, but it is necessary to make these types of Bottle extensibility mechanisms work.

Registering the new Html Convention

When a FubuMVC application spins up, it searches through all the assemblies loaded by Bottles for any concrete class implementing the IFubuRegistryExtension interface, creates an instance of each of those types, and applies their configuration to the application.  In order to apply the html convention class that we built above, we need to add a new IFubuRegistryExtension class to our new assembly:

DatePickerRegistryExtension

public class DatePickerRegistryExtension : IFubuRegistryExtension
{
    public void Configure(FubuRegistry registry)
    {
        registry.Import<HtmlConventionRegistry>(x => {
            x.Editors.Add(new DatePickerBuilder());
        });
    }
}

Bottle-ize the Assembly

Bottles doesn’t go around willy nilly loading every assembly it finds in the application as a Bottle, so we have to do something to mark our assembly as appropriate for Bottles to load into FubuMVC applications.  The easiest and most common way is to add the [FubuModule] marker attribute at the assembly level like the code below:

AssemblyInfo.cs

using System.Reflection;
using FubuMVC.Core;

[assembly: AssemblyTitle("FubuMVC.DatePicker")]
[assembly: FubuModule]

Modifying the Build Script to Embed Content in the Bottle Assembly

Lastly, we need to embed any kind of content (JavaScript files, stylesheets, CoffeeScript, Spark or Razor views, asset config files) that isn’t part of the C# code into the Bottle assembly.  Bottles does this by sweeping the project for all files that match a set of criteria (out of the box, anything that isn’t C# code or related to Visual Studio), making a single zip file called “pak-WebContent.zip,” and embedding that into the assembly.  There’s no need to make every single non-C# file an embedded resource, but we do need to call the Bottles functionality to “bottle up” the contents when you rebuild the project.

There’s an executable called “bottles.exe” in the tools folder of the Bottles nuget that you can use to “bottle up” an assembly.  I usually add the call to Bottles directly into the compile step of one of our rake scripts like so:

desc "Compiles the app"
task :compile => [:restore_if_missing, :clean, :version] do
  bottles("assembly-pak src/FubuMVC.DatePicker -p FubuMVC.DatePicker.csproj")

  MSBuildRunner.compile :compilemode => COMPILE_TARGET, :solutionfile => 'src/FubuMVC.DatePicker.sln', :clrversion => CLR_TOOLS_VERSION

  target = COMPILE_TARGET.downcase
end

def self.bottles(args)
  bottles = Platform.runtime(Nuget.tool("Bottles", "BottleRunner.exe"))
  sh "#{bottles} #{args}"
end

That’s actually all of the code, except for a nuspec file to package this up for nuget.  And tests, which don’t exist right now :(

Consuming this Bottle

Now it’s time to use our new Bottle and apply the html convention for date properties.  You’ve really got just a couple of steps:

  1. Add an assembly reference in your main application to the new FubuMVC.DatePicker library through nuget or however you want to do that
  2. Do make sure that you are calling “WriteScriptTags()” somewhere at the end of your views so that the asset pipeline writes out the script files declared by the new html convention class.  We typically do this by putting that call in the main application Spark layout file.
  3. You might have to manually declare the “jqueryuicss” asset in the head of your view to make sure the “jquery-ui.css” file is present in any view that uses the new datepicker convention.  The asset pipeline handles the scripts pretty well, but the CSS files are a little trickier because they usually get written into the page before the html convention even fires
  4. Use the html convention on a page with the “InputFor” extension method like this in Spark: !{this.InputFor(x => x.SomeDate)} or <Input name="SomeDate" /> if you’re using Spark bindings (not standardized in the FubuMVC.Spark nuget yet, but will be soon).

Stuff I think could be better

  • The javascript activation could stand to be standardized in a way that is more conducive to using the html conventions with backbone, knockout, or something of that ilk.  I’m hoping that my colleague Bob Pace will add in his module/activator magic he uses in his projects as a standard trick in FubuMVC.
  • We need to get a FubuMVC.QuickStart nuget going that bootstraps more of the asset pipeline setup and layout files to get users going faster
  • I’d like to see some enhancements to our “fubu.exe” tool to deal with more of the repetitive Bottle project setup.
  • I’m not a big fan of the way we smuggle the Bottles executable through nuget today.  It leaves us with the ugly hack in the rake scripts to magically find where nuget decided to put the bottles executable.  I know how we’re going to beat this inside the fubu project ecosystem with our ripple tool, but I’m not sure how best to do this for folks not using our build tools.  I’m not willing to accept that Bottles has to be installed on a user box before running the build file.
  • Today, FubuMVC has a limitation that asset files have to be under the /content directory for the asset pipeline.  We’re absolutely committed to changing this in the near term, but it won’t happen before our 1.0 release.  I get aggravated every time I hear somebody say that FubuMVC is just an attempt to copy Ruby on Rails.  Ironically enough, this asset file limitation is the single thing in FubuMVC that was copied directly from RoR — and everybody, including me, hates it.

What can I do with Bottles to extend FubuMVC?

This is bravado until we have enough documentation and samples to prove it, but I think that FubuMVC has the best story for extensibility and modularity in the entire .Net Web Framework universe — and honestly, I don’t think it’s even close.

So what sorts of extensibility things can you do with the FubuMVC/Bottles combination?  The short answer is every single thing that you can do with a FubuMVC application can be externalized into a Bottle and added into the base application without any seams (except for content extensions in existing views) or special handling in the base application.

“Code Complete” is a polite fiction, “Done, done, done” is the hard truth

This is a mostly rewritten version of an old blog post of mine from ’06, but the content is still important and I don’t see folks talking about it very often.  Don’t think for one second that I’ve done this perfectly on every project I’ve worked on for the past decade, but it’s still an ideal I’d like to aim for.

Before you flame me, I’m not talking about the canonical book by Steve McConnell.  What I mean is that this statement is a lie – “we’re code complete on feature xyz.”  That phrase is a lie, or at least misleading, because you really aren’t complete with the feature.  Code complete doesn’t tell you anything definitive about the quality of the code, or most importantly, the readiness of the code for actual deployment.  Code complete just means that the developers have reached a point where they’re ready to turn the code over for testing.  You say the phrase “code complete” to mark off a gate on a schedule or claim earned credit for coding work done on a project schedule.  Using “code complete” to claim earned value is tempting, yet dangerous because it doesn’t translate into business value.  It could have lots of bugs and issues yet to be uncovered by the testers.  If the code hasn’t gone through user acceptance testing, it might not even be the right functionality.

One of my favorite aspects of eXtreme Programming from back in the day was the emphasis on creating working software instead of obsessing over intermediate progress gates and deliverables.  In direct contrast to “Code Complete,”  XP teams used the phrase “Done, done, done” to describe a feature as complete.  “Done, done, done” means the feature is 100% ready to deploy to production.

There’s quite a bit of variance from project to project, but the “story wall” statuses that I prefer to use for a Kanban type approach would go something like:

    1. On deck/not started/make sure it’s ready to be worked on
    2. In development
    3. In testing
    4. Ready for review
    5. Done, done, done

The other columns besides “done, done, done” are just intermediate stages that help the team coordinate activities and handoffs between team members.  The burndown chart informs management on the state of the iteration and helps spot problems and changes in the iteration plan, but the authoritative progress meter is the number of stories crossing the line into the “done” column.

That workflow above is a little bit like playing the game of Sorry! as a kid (or as the parent of a kid about that age).  If you don’t remember or never played Sorry!, the goal of the game is to get your tokens into the home area (production).  There’s also a “safe zone” where your tokens are almost to home base, but once in a while a card gets drawn that forces your tokens back into the danger area.

Just like the game of Sorry!, you don’t “win” at your software project until you push all your stories into a deployable state.

So, how do I use this “knowledge?”

I can’t claim to be any kind of Kanban expert, but I do know that my teams and I bog down badly when we have too many balls up in the air.  We always seem to do best when we’re working a finite number of features and working them to completion serially rather than having more parallel efforts running simultaneously in various states of not quite done.  By the same token, I also know that I’m much, much quicker solving problems in the code I just worked on than in code I worked on last month.  That’s a roundabout way of saying that I want the testing and user approval of a new feature or user story to happen as close to my development effort for that feature as possible.  In a perfect world this translates to more or less keeping the developers, testers, and maybe even the UX designers and customers focused on the same user story or feature at any given time.

Digging into another old blog post, I strongly recommend Michael Feathers’s old blog post on Iteration Slop, specifically what he describes as “trailer hitched QA.”  Right now, I don’t think my current team is doing a good enough job preventing the “trailer hitched QA” problem.  We’re trying to cut into this problem by doing more upfront “executable specifications” to bring the testers and developers onto the same page before working too much on a story.  We’re also changing from a formal iterative approach that’s tempted us into just getting to “code complete” for all the stories we guessed, I’m sorry, estimated that we could do in an iteration, to a continuous flow Kanban style.  My hope is that we stop worrying so much about artificial deadlines and focus more on delivering production quality features one or two features at a time.  I’m also hoping that this leads to us and the testers working more smoothly together.

Presenting at CodeMash 2013

In my continuing bid to rejoin the development world, I’m going to be co-presenting two workshops at CodeMash 2013 with Corey Kaylor and Josh Arnold.

Making Test Automation with Web Applications Work

Let’s assume you’ve accepted the arguments in favor of automating at least part of the testing against your web application and you’ve generally nailed down all the soft fuzzy process and collaboration issues; now you’re simply left with the very hard problem of doing effective automated testing — and that’s what this workshop will concentrate on.  We’re going to be light on software process issues but very heavy on concrete technical problems and solutions.  We’ll talk about how we try to make our automated tests more reliable, faster, more resilient to changes in the user interface, and less work to author.  We will be showing examples using our own .Net and FubuMVC flavored stack of Storyteller2, Serenity, WebDriver, and Jasmine, but I think that the concepts and strategies should directly transfer to other platforms and tools.

Fully Operational FubuMVC 1.0

I’m very consciously using CodeMash as a forcing function to make FubuMVC arrive at a 1.0 release — documentation, new nugets, tutorials, diagnostics, and stable APIs.  I think we’re going to be able to make a pretty compelling case for why FubuMVC is worth exploring even in a world crowded with Ruby on Rails, Play, Lift, Node.js, and Sinatra.

If we manage to pull off a healthy fraction of the demos that we have planned, I’m going to do the “Now witness the firepower of this fully ARMED and OPERATIONAL battle station!” thing, but hopefully without getting thrown down an inexplicably placed well by Darth Vader afterward.  Seriously, why was there a giant, uncovered hole right in the emperor’s throne room?

Neither of these workshops will be filmed, but sometime within the next couple of months we will release Camtasia recordings of the same demos as part of our 1.0 release.

Once I’m done with the two workshops I’m going to rest my voice, take in as many talks as I can, catch up with some friends I haven’t seen in quite a while, and just generally mingle.  In particular I’m looking forward to the sessions on Continuous Deployment, client side web development, Clojure, and I want to see the Play framework in action.

See you all there in January.

Abstractions and Models aren’t Infallible

Last week I made a comment on Twitter as a little reminder to myself (link) that you have to occasionally challenge and even change the basic abstractions and domain model of your application.  I was working to extend the new FubuMVC.Authentication project so that we could use Windows and form based authentication on the same application at work.  The core of FubuMVC.Authentication was harvested from a previous project of Josh’s and mine that had much simpler requirements.  I tried for far too long to push the new square shaped functionality through the existing round-holed model.  Once I took a step back and laid out the requirements and how the functionality did and did not vary across Windows, form based, and Twitter/Facebook/OAuth authentication, the basic abstractions changed quite a bit and I finally finished some work that had me stuck.

In the same week, we started to do some detailed analysis for a new user story that everybody thought could be tricky.  Once we got the business partners to give us concrete scenarios of the problems we faced, my team realized that we flat out have to change the core of our Domain Model and the relationships between entities to avoid turning our code into the kind of thing that makes me snarky on Twitter.  In this case it’s not a terrible thing because it won’t break the existing end to end acceptance tests or even much about the user interaction design.

I think we’re going to be just fine with both the authentication and the Domain Model, but for the hat trick last week, I griped on Twitter about a silly little API usability problem with an OSS tool that we use and had just upgraded.  I’ve spent quite a bit of time looking through this OSS tool’s codebase because we interact with its internals extensively in another FubuMVC related project.  Without getting too detailed, I think that the way this OSS project models its problem domain internally makes the code more complicated and certainly harder to use for us than it could be if the basic abstractions were changed to another design.  In this case, I’m familiar with the project’s history and it’s easy to see how its internal model probably worked very well with the initial, relatively simple use cases from the first release.  This project is very successful by any standard (even the fact that people gripe about it so frequently in my twitter feed is a testament to how heavily the tool really is used), but they still have to be paying an opportunity cost in building out their newer features.

Re-thinking previously made decisions isn’t an obvious move in most cases, but it’s still something you’re going to have to do as a software developer.