Some Thoughts on the new .Net

The big announcements on the new ASP.Net runtime were made a couple weeks ago, but I write very slowly these days, so here’s my delayed reaction.

 

Less Friction and a Faster Feedback Cycle Maybe? 

I’ve started to associate .Net “classic” with seemingly constant aggravations like strong naming conflicts, csproj file merge hell, slow compilation, slow nuget restores, and how absurdly heavyweight and bloated Visual Studio.Net has become over the years. Plenty of it was self-inflicted, but I felt at the end that way too much of our effort in FubuMVC development was really about working around limitations in the .Net ecosystem or ASP.Net itself (the existing System.Web infrastructure is awful for modularity in my opinion and clearly shows its age, and the MSBuild model is just flat out nuts).

In contrast, my brief exposure to Node.js development has been a breath of fresh air. My preferred workflow is to just start mocha in “watch” mode in a terminal window with Growl notifications on and continuously code in Sublime. The feedback cycle between making a change in code and seeing test results is astonishingly fast compared to .Net (admittedly less awesome when using lots of generator functions, using WebPack to bundle up React.js JSX files, or running tests in Karma), and the editor is snappy.

ASP.Net vNext is very clearly a reaction to the improved development workflows you can get with Node.js or Golang. In particular, I’m positive or at least hopeful that:

  • The Roslyn compiler is fast enough that auto-test tools in .Net “feel” much snappier
  • The new package resolution functionality is much faster than the current Nuget
  • The new runtime is much more command line friendly. I do particularly like how easy they’ve made it to expose commands from Nuget packages in the project.json declaration. That was a big irritation to me in the existing Nuget that we eventually solved by moving to gems instead for CLI tools.
  • The new project.json model is so much cleaner and simpler than the existing MSBuild schema. I hope that translates to easier tooling for scaffolding and much less merge hell than .Net classic
  • I’m impressed with how much activity is happening around the OmniSharp project, because I’d like to be productive in C# without having to fire up Visual Studio.Net at all — or at least only when I want ReSharper to do a lot of heavy refactoring work.
  • The abomination that was strong naming is gone

 

The Spirit of Openness

I like that the ASP.Net team has posted their code for their next generation work on GitHub. The best part of that for me is that they moved the code over before it was completely baked and you can actually see what they’re up to. Even crazier to me is that the ASP.Net team seems to actually be listening to the feedback (I’ve felt in the past that the ASP.Net team hasn’t been all that receptive to feedback).

Compare that to how the tool that became Nuget was developed. Nuget was largely conceived and built in secret at the same time that a pair of community efforts were trying to build out a new package management strategy for .Net (Seb’s OpenWrap and the Nu project to adapt RubyGems to .Net that I was somewhat involved in). Contrary to the pleasant sounding legend that’s grown up around it, Nuget is not a continuation of a community project. Nuget partially took over the name and asked the guys doing Nu to just stop.

The Nuget episode is irritating to me, not just because of how they de facto killed off the community efforts, but because I think they made some design decisions that caused some severe problems for usage (I described the problems I saw and my recommendations for Nuget earlier this year). Maybe, just maybe, the usage and performance problems could have been solved by more contributions, awareness, and feedback from the larger community if Nuget had been built in the open.

 

Kill the Insider Groups

Most of us didn’t know anything about Nuget or the new ASP.Net vNext platform before it was announced (I did, but not through official channels), but Microsoft has several official “Insider” groups that get much more advance warning and theoretically provide feedback to Microsoft on proposals and early work. I’d like to propose right now that these self-selected “insider” groups be disbanded and that much more of these conversations happen in more public forums. My reasoning is pretty simple: there are far more strong developers outside of Redmond and that list than there are on the official list. My fear is that those types of insider, true believer clubs can too easily turn into echo chambers with little exposure to the world outside of .Net — and I feel that my limited involvement with that list frankly confirms that suspicion.

I was not a member of the ASP.Net Insider list until somewhat recently. If I had been aware of the plans for “Project K” earlier, I would have shut down FubuMVC development much earlier than I did because so much of the improvements in ASP.Net vNext make some very large development efforts we did in FubuMVC irrelevant or unnecessary.

 

OSS and the Community Thing

I’m guilty of spreading the “.Net community sucks” meme, but that’s not precisely true; the .Net community is just different from other communities. What makes .Net so different from many other development communities is how the majority of it revolves around what one particular company is doing.

So much of .Net is open source now and they even take contributions. Awesome, great, but my very first reaction was that it doesn’t matter much because the .Net community as a whole isn’t as participatory as other communities, and that would have to change before ASP.Net vNext being OSS really matters. It’ll be interesting to me to see if that changes over time.

With few exceptions (NancyFx comes to mind), OSS projects in .Net struggle to get an audience and build a community. I just don’t think there are enough venues for community to coalesce around OSS efforts in .Net, and I think that’s the reason why we have so much duplication of OSS projects without any one alternative getting much involvement beyond the initial authors. I do suspect that the .Net OSS community is much better in Europe now than in North America, but that might just be me having become such a homebody over the past 5 years.

 

Mono is More Viable and Docker(!)

I thought that trying to support running FubuMVC on Mono was a borderline disaster, to the point where I didn’t want to touch Mono again afterward (it’s a longer conversation, but beyond the normal aggravations like the file system differences between Windows and *nix, we hit several Mono bugs). With the recent announcements about open sourcing so much more of the .Net runtime, the PCL standardization, and the better collaboration between Microsoft and Xamarin on display in Miguel de Icaza’s blog post on the .Net announcements, I think that building applications on server side Mono is going to be much more viable.

Like just about every development shop around, we’re very interested in using Docker for deployments. The ability to eventually deploy our existing .Net applications as Docker containers could be a very big win for us.

 

We wanna code on the Mac

I resisted switching over to Macs for years, even when so many of my friends were doing their .Net development on Macs, just because I was too cheap. After a couple years now of using a Mac, I’d really prefer to stay on that side of things and hopefully give my Windows VM much more time off. Mac OS being a first class citizen for the new .Net and the progress on the OmniSharp tools for Sublime or MacVim are going to make the new ASP.Net vNext runtime a much easier sell in my shop.

 

I think we might just head back to the new .Net

We’ve been working toward eventually moving off of .Net and I have been trying to concentrate my side work efforts on Node.js related things, but the truth of the matter is that we have a huge amount of existing code on .Net and a heavy investment in .Net OSS tools that we just can’t make go away overnight. If the feedback cycle is really much better in the new runtime, we can code on Macs, and use Docker, our thinking is that we’re still better off in the end to just adopt the better .Net instead of rewriting the existing systems on another platform.

 

vNext & Me

I have no intentions of restarting FubuMVC itself, but we’ve put together a concept for a partial reboot called “Jasper” that may or may not become reality someday. I do want to move our FubuTransportation service bus to vNext next year. At work we *might* try to extend the new version of ASP.Net MVC to act more like FubuMVC to make migrating our existing applications easier if we don’t try to do the Jasper idea. StructureMap 3 is already PCL compliant, so it *should* be easy to get up and going on vNext.

In the meantime, I think I’m going to start a blog series just reviewing bits of the code in the ASP.Net GitHub organizations to get myself more familiar with the new tooling.

 

StructureMap 3 Documentation

Shockingly, my efforts to complete the documentation on StructureMap 3 have taken much, much longer than I had hoped — but there’s some real progress worth talking about. This time around, I’m adopting the idea of “living documentation” where the code samples are taken directly out of the code in the main GitHub repository at publishing time so that the documentation never gets out of sync with the code. For the most part, I’m using unit test code to demonstrate API usage in the documentation with the thinking there that the resulting documentation is much less ambiguous and again, cannot be out of sync with how the code actually works as long as those unit tests are passing.

If you’re curious, I’ve been using our (now abandoned) FubuDocs project with FubuMVC.CodeSnippets to author and publish the documentation. All the documentation is in GitHub in the StructureMap.Docs project (and yes, I certainly take pull requests).

The new documentation has moved to GitHub pages hosting at http://structuremap.github.io. I’m not sure what’s going to happen to the old structuremap.net website yet.

Some highlights of the documentation so far:

 

I’m committed to finishing the documentation, but I’m obviously not sure when it will be complete. I’d like to say it’s “done” before starting any significant new OSS project and I’m using that to force myself to finally finish;)

How We Do Strong Typed Configuration

TL;DR: I’ve used a form of “strong typed configuration” for the past 5-6 years that I think has some big advantages over the traditional approach to configuration in .Net. I still like this approach and I’m seeing something similar showing up in ASP.Net vNext as well. I’m NOT trying to sell you on any particular tool here, just the technique and concepts.

What’s Wrong with Traditional .Net Configuration?

About a decade ago I came late into a project where we were retrofitting some sort of new security framework onto a very large .Net codebase.* The codebase had no significant automated test coverage, so before we tried to monkey around with its infrastructure, I volunteered to create a set of characterization tests**. My first task was to stand up a big chunk of the application architecture on my own box with a local sql server database. My first and biggest challenge was dealing with all the configuration needs of that particular architecture (I’m wanting to say it was >75 different items in the appSettings collection).

As I recall, the specific problems with configuration were:

  1. It was difficult to easily know what configuration items any subset of the code (assemblies, classes, or subsystems) needed in order to run without painstakingly tracing the code
  2. It was somewhat difficult to understand how items in very large configuration files were consumed by the running code. Better naming conventions probably would have helped.
  3. There was no way to define configuration items on the fly in code. The only option was to change entries in the giant web.config Xml file because the code that used the old System.Configuration namespace “pulled” its configuration items when it needed configuration. What I wanted to do was to “push” configuration to the code to change database connections or file paths in test scenarios or really any time I just needed to repurpose the code. In a more general sense, I called this the Push, Don’t Pull rule in my CodeBetter days.

From that moment on, I’ve been a big advocate of approaches that make it much easier to both trace configuration items to the code that consumes it and also to make application code somehow declare what configuration it needs.

 

Strong Typed Configuration with “Settings” Classes 

For the past 5-6 years I’ve used an approach where configuration items like file paths, switches, URLs, or connection information are modeled as simple POCO classes suffixed with “Settings.” As an example, in the FubuPersistence package we use to integrate RavenDb with FubuMVC, we have a simple class called RavenDbSettings that holds the basic information you would need to connect to a RavenDb database, partially shown below:

    public class RavenDbSettings
    {
        public string DataDirectory { get; set; }
        public bool RunInMemory { get; set; }
        public string Url { get; set; }
        public bool UseEmbeddedHttpServer { get; set; }


        [ConnectionString]
        public string ConnectionString { get; set; }

        // And some methods to create in memory datastore's
        // or connect to the specified external datastore
    }

Setting aside how that object is built up for the moment, the DocumentStoreBuilder class that needs the configuration above just gets this object through simple constructor injection like so: new DocumentStoreBuilder(new RavenDbSettings{}, ...); (there’s a small sketch of the consuming class after the list below). The advantages of this approach for me are:

  • The consumer of the configuration information is no longer coupled to how that information is resolved or stored in any way. As long as it’s given the RavenDbSettings object in its constructor, DocumentStoreBuilder can happily go about its business.
  • I’m a big fan of using constructor injection as a way to create traceability and insight into the dependencies of a class, and injecting the configuration makes the dependency on configuration from a class much more transparent than the older “pull” forms of configuration.
  • I think it’s easier to trace back and forth between the configuration items and the code that depends on that configuration. I also feel like it makes the code “declare” what configuration items it needs through the signatures of the Settings classes.
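
To make that shape concrete, here is a minimal sketch of the consuming side. This is not the actual FubuPersistence code; the real DocumentStoreBuilder takes more dependencies than shown here.

    public class DocumentStoreBuilder
    {
        private readonly RavenDbSettings _settings;

        // The class simply declares its need for RavenDbSettings
        // through its constructor signature
        public DocumentStoreBuilder(RavenDbSettings settings)
        {
            _settings = settings;
        }

        // ...uses _settings.Url, _settings.RunInMemory, etc. to decide
        // whether to spin up an embedded store or connect to an external
        // RavenDb server
    }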

 

Serving up the Settings with an IoC Container

I’m sure you’ve already guessed that we just use StructureMap to inject the Settings objects into constructor functions. I know that many of you are going to have a visceral reaction to the usage of an IoC container, and while I actually do respect that opinion, it’s worked out very well for us in practice. Using StructureMap (I think most of the other IoC containers could do this as well) we get a couple big benefits in regards to default configuration and the ability to swap out configuration at runtime (mostly for testing).

Since the Settings classes are generally concrete classes with no-argument constructors, StructureMap can happily build them out for you even if StructureMap has no explicit registration for that type. That means that you can forgo any external configuration or StructureMap configuration and your code can still work as long as the default values of the Settings class are useful. To use the example of RavenDbSettings from the previous section, calling new RavenDbSettings() creates a configuration that will connect to a new embedded RavenDb database that stores its data to the file system in a folder called /data parallel to the project directory (you can see the code here).
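
As a quick sketch of that default resolution in plain StructureMap usage (assuming the RavenDbSettings class shown earlier):

    var container = new Container();

    // There is no explicit registration for RavenDbSettings, but StructureMap
    // can still build it because it's a concrete type with a no-argument
    // constructor, so you get the default, embedded-database configuration
    var settings = container.GetInstance<RavenDbSettings>();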

The result of the design above is that a FubuMVC/FubuTransportation application was completely connected to a working RavenDb database by simply installing the FubuMVC.RavenDb nuget with zero additional configuration.

I demoed that several times at conferences last year and the audiences seemed to be very unimpressed and disinterested. Either that’s not nearly as impressive as I thought it was, it’s too much magic, I’m not a good presenter, or they don’t remember what a PITA it used to be just to install and configure everything you needed to get a blank development database going. I still thought it was cool.

The other huge advantage to using an IoC container to deliver all the configuration to consumers is how easy that makes it to swap out configuration at runtime. Again going to the RavenDbSettings example, we can build out our entire application and swap out the RavenDb connection at will without digging into Xml or Json files of any kind. The main usage has been in testing to get a clean database per test when we do end to end testing, but it’s also been helpful in other ways.
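
As a rough sketch of what that swap looks like in a test harness (assuming an existing application container and the StructureMap integration described above):

    // Override just the RavenDb connection for an isolated integration test;
    // everything else still resolves from the normal application registrations
    container.Inject(new RavenDbSettings
    {
        RunInMemory = true
    });

    // Anything resolved from here on gets the in-memory settings
    var builder = container.GetInstance<DocumentStoreBuilder>();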

 

So where does the Settings data come from?

Alright, so the actual data has to come from somewhere outside the codebase at some point (like every developer of my age I have a couple good war stories from development teams hard coding database connection strings directly into compiled code). We generally put the raw data into the normal appSettings key/value pairs with the naming convention “[SettingsClassName].[PropertyName].” The first time a Settings object is needed within StructureMap, we read the raw key/value data from the configuration file and use FubuCore’s model binding support to create the object and do all the type coercion necessary to create the Settings object. An early version of this approach was described by my former colleague Josh Flanagan way back in 2009. The actual mechanics are in the FubuMVC.StructureMap code in the main fubumvc repository.
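
The FubuCore model binding does the heavy lifting for us, but the basic idea behind the naming convention is simple enough to sketch by hand. The code below is only an illustration of the “[SettingsClassName].[PropertyName]” convention, not the real FubuCore implementation:

    using System;
    using System.Configuration;

    public static class SettingsBinder
    {
        // Binds appSettings keys like "RavenDbSettings.Url" onto the
        // matching properties of a new Settings object
        public static T Bind<T>() where T : new()
        {
            var settings = new T();
            var prefix = typeof(T).Name + ".";

            foreach (var key in ConfigurationManager.AppSettings.AllKeys)
            {
                if (!key.StartsWith(prefix)) continue;

                var property = typeof(T).GetProperty(key.Substring(prefix.Length));
                if (property == null) continue;

                var rawValue = ConfigurationManager.AppSettings[key];
                property.SetValue(settings, Convert.ChangeType(rawValue, property.PropertyType), null);
            }

            return settings;
        }
    }

    // Usage: var settings = SettingsBinder.Bind<RavenDbSettings>();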

Some other points:

  • The data source for the model binding in this configuration setup was pluggable, so you could use external files or anything else that could be exposed as key/value pairs. We generally just use the appSettings support now, but on a previous project we were able to centralize the configuration files to a common location for several related processes
  • The model binding is flexible enough to support deep objects, enumerable properties, and just about anything you would need to use in your Settings objects
  • We also supported a model where you could combine key/value information from multiple sources and layer the data in precedence to enable overrides to the basic configuration. My goal with that support was to let customer or environment specific configuration live in separate override files without having to duplicate so much boilerplate configuration. In this setup, when the Settings objects were bound to the raw data, profile or environment specific information just got a higher precedence than the default configuration (see the sketch after this list).
  • As of FubuMVC 2.0, if you’re using the StructureMap integration, you no longer have to do anything to setup this settings provider infrastructure. If StructureMap encounters an unknown dependency that’s a concrete type suffixed by “Settings,” it will try to resolve it from the model binding via a custom StructureMap policy.
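
A simplified sketch of that layering idea, with hypothetical data and not FubuCore’s actual API:

    using System;
    using System.Collections.Generic;

    public class LayeredConfigurationDemo
    {
        public static void Main()
        {
            // Lowest precedence: the boilerplate defaults shared by everyone
            var defaults = new Dictionary<string, string>
            {
                { "RavenDbSettings.Url", "http://localhost:8080" },
                { "RavenDbSettings.RunInMemory", "false" }
            };

            // Higher precedence: a small, environment specific override source
            var environmentOverrides = new Dictionary<string, string>
            {
                { "RavenDbSettings.Url", "http://raven.qa.example.com:8080" }
            };

            // Later layers win on conflicts, so the override source only has
            // to declare the handful of values that actually differ
            var merged = new Dictionary<string, string>(defaults);
            foreach (var pair in environmentOverrides)
            {
                merged[pair.Key] = pair.Value;
            }

            Console.WriteLine(merged["RavenDbSettings.Url"]);
        }
    }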

 

Programmatic Configuration in FubuMVC 

We also used the “Settings” configuration idea to programmatically specify configuration for features inside the application configuration by applying alterations to a Settings object, like this code from one of our active projects:

// Sets up a custom authorization rule on any diagnostic 
// pages
AlterSettings<DiagnosticsSettings>(x => {
	x.AuthorizationRights.Add(new AuthorizationCheckPolicy<InternalUserPolicy>());
});

// Directs FubuMVC to only look for client assets
// in the /public folder
AlterSettings<AssetSettings>(x =>
{
	x.Mode = SearchMode.PublicFolderOnly;
});

The DiagnosticsSettings and AssetSettings objects used above are injected into the application container as the application bootstraps, but only after the alterations shown above are applied. Behind the scenes, FubuMVC will first resolve the objects using any data found in the appSettings key/value pairs, then apply the alteration overrides in the code above. Using the alteration lambdas instead of just injecting the Settings objects directly allowed us to also embed settings overrides in external plugins, but ensure that the main application overrides always “win” in the case of conflicts.
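
A stripped-down sketch of that “bind first, then apply alterations in order” sequence (not the actual FubuMVC internals):

    using System;
    using System.Collections.Generic;

    // Collects alteration lambdas and replays them over the data-bound
    // settings, so later registrations (the application itself) win over
    // plugin defaults
    public class SettingsAlterations<T>
    {
        private readonly List<Action<T>> _alterations = new List<Action<T>>();

        public void AlterSettings(Action<T> alteration)
        {
            _alterations.Add(alteration);
        }

        public T Build(Func<T> bindFromRawData)
        {
            // Start from whatever was bound out of the appSettings data...
            var settings = bindFromRawData();

            // ...then apply the alterations in registration order
            foreach (var alteration in _alterations)
            {
                alteration(settings);
            }

            return settings;
        }
    }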

I’m still happy with how this has turned out in real usage and I’ve since noticed that ASP.Net vNext uses a remarkably similar mechanism to configure options in their IApplicationBuilder scheme (think SetupOptions<MvcOptions>(Action<MvcOptions>)). I’m interested to see if the ASP.Net team will try to exploit that capability to provide much better modularity than the older ASP.Net MVC & Web API frameworks.

 

 

Other Links

 

* That short project was probably one of the strongest teams I’ve ever been a part of in terms of talented individuals, but it also spawned several of my favorite development horror stories (massive stored procedures, non-coding architects, the centralized architect team anti-pattern, harmful coding standards, you name it). In my career I’ve strangely seen little correlation between the technical strength of the development team (beyond basic competence anyway) and the resulting success of the project. Environment, the support or lack thereof from the business, and the simple wisdom of doing the project in the first place seem to be much more important.

** As an aside, that effort to create characterization tests as a crude regression test suite did pay off, and that test suite did find some regression errors after we started making the bigger changes. I think Feathers’ playbook for legacy systems, where I got the inspiration for those characterization tests, is still very relevant some 10+ years later.

Transaction Scoping in FubuMVC with RavenDb and StructureMap

TL;DR – We built unit of work and transactional boundary mechanics into our RavenDb integration with FubuMVC (with StructureMap supplying the object scoping) that I’m still happy with and I’d consider using the same conceptual design again in the future.

Why write this now, long after giving up on FubuMVC as an OSS project? We still use FubuMVC very heavily at work, I’m committed to supporting StructureMap as long as it’s relevant, and RavenDb is certainly going strong. I’ve seen other people trying to accomplish the same kind of IoC integration and object scoping that I’m showing here on other frameworks, and this might be useful to those folks. Mostly though, this post might benefit some of our internal teams, and almost every enterprise software project involves some kind of transaction management.

Transaction Boundaries

If you’re working on any kind of software system that persists state, you probably need to be consciously aware of the ACID rules (Atomicity, Consistency, Isolation, Durability) to determine where your transaction boundaries should be. Roughly speaking, you want to prevent any chance of a system getting into an invalid state because some database changes succeeded while other database changes within the same logical operation failed. The canonical example is transferring money between two different bank accounts. For that use case, you’re probably making at least three different changes to the underlying database:

  1. Subtract the amount of the transfer from the first account
  2. Add the amount of the transfer to the second account
  3. Write some sort of log record for the transfer

All three database operations above need to succeed together, or all three should be rolled back to prevent your system from getting into an invalid state. Failing to manage your transaction boundaries could result in the bank customer either losing some of their funds or the bank magically giving the customer extra money. Following the ACID rules, we would make sure that all three operations are within the same database transaction so that they all have to succeed or fail together.

Using event sourcing and eventual consistency flips this on its head a little bit. In that case you might just capture the log record, then rebuild the “read side” views for the two accounts asynchronously to reflect the transfer. Needless to say, that can easily generate a different set of problems when there’s any kind of lag in the “eventual.”

The Unit of Work Pattern

Historically, one of the easier ways to collect and organize database operations into transactions has been the Unit of Work pattern that’s now largely built into almost every persistence framework. In RavenDb, you use the IDocumentSession service as the logical unit of work. Using RavenDb, we can just collect all the documents that need to be persisted within a single transaction by calling the Delete() or Store() methods. When we’re ready to commit the entire unit of work as a transaction, we just call the SaveChanges() method to execute all our batched up persistence operations.
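
Going back to the account transfer example, the unit of work usage with RavenDb might look something like the sketch below. Account and TransferLog are hypothetical document types made up for this illustration:

    using Raven.Client;

    public class Account
    {
        public string Id { get; set; }
        public decimal Balance { get; set; }
    }

    public class TransferLog
    {
        public string From { get; set; }
        public string To { get; set; }
        public decimal Amount { get; set; }
    }

    public static class FundsTransfer
    {
        public static void Transfer(IDocumentStore store, string fromId, string toId, decimal amount)
        {
            using (var session = store.OpenSession())
            {
                var fromAccount = session.Load<Account>(fromId);
                var toAccount = session.Load<Account>(toId);

                fromAccount.Balance -= amount;
                toAccount.Balance += amount;

                // Queue up the log document alongside the two account changes
                session.Store(new TransferLog { From = fromId, To = toId, Amount = amount });

                // All three changes are committed together, or not at all
                session.SaveChanges();
            }
        }
    }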

One of the big advantages to using the unit of work pattern is the ability to easily coordinate transaction boundaries across multiple unrelated classes, services, or functions — as long as each is working against the same unit of work object. The challenge then is to control the object lifecycle of the RavenDb IDocumentSession object in the lifecycle of a logical operation. While you can certainly use some sort of transaction controller (little ‘c’) to manage the lifecyle of an IDocumentSession object and coordinate its usage within the actual workers, we built IDocumentSession lifecycle handling directly into our FubuMVC.RavenDb library so that you can treat an HTTP POST, PUT, or DELETE in FubuMVC or the handling of a single message in the FubuTransportation service bus as a logical unit of work and simply let the framework itself do all your transaction boundary kind of bookkeeping for you.

IDocumentSession Lifecycle Management 

First off, FubuMVC utilizes the StructureMap Nested Container feature to create an isolated object scope per HTTP request — meaning that any type registered with the default lifecycle that is resolved by StructureMap within a request (or message) will be the same object instance. In the case of IDocumentSession, this scoping means that every concrete type built by StructureMap will have the same shared instance of IDocumentSession.
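
Outside of FubuMVC, you can see the same scoping behavior directly with StructureMap’s nested container. A small sketch, assuming a container where IDocumentSession is registered with the default lifecycle:

    using (var nested = container.GetNestedContainer())
    {
        var first = nested.GetInstance<IDocumentSession>();
        var second = nested.GetInstance<IDocumentSession>();

        // Within a single nested container, default-scoped services resolve
        // to the exact same instance, so every collaborator built from this
        // nested container shares one IDocumentSession
        bool sameSession = ReferenceEquals(first, second); // true
    }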

To make it concrete, let’s say that you have these three classes that all take in an IDocumentSession as a constructor argument:

    public class RavenUsingEndpoint
    {
        private readonly IDocumentSession _session;
        private readonly Worker1 _worker1;
        private readonly Lazy<Worker2> _worker2;

        public RavenUsingEndpoint(IDocumentSession session, Worker1 worker1, Lazy<Worker2> worker2)
        {
            _session = session;
            _worker1 = worker1;
            _worker2 = worker2;
        }

        public string post_session_is_the_same()
        {
            _session.ShouldBeTheSameAs(_worker1.Session);
            _session.ShouldBeTheSameAs(_worker2.Value.Session);

            return "All good!";
        }
    }

    public class Worker1
    {
        public IDocumentSession Session { get; set; }

        public Worker1(IDocumentSession session)
        {
            Session = session;
        }
    }

    public class Worker2 : Worker1
    {
        public Worker2(IDocumentSession session) : base(session)
        {
        }
    }

When a FubuMVC request executes that uses the RavenUsingEndpoint.post_session_is_the_same() method, FubuMVC is creating a RavenUsingEndpoint object, a Worker1 object, a Lazy<Worker2> object, and one single IDocumentSession object that is injected both into RavenUsingEndpoint and Worker1. Additionally, if the Lazy<Worker2> is evaluated, a Worker2 object will be created from the active nested container and the exact same IDocumentSession will again be injected into Worker2. With this type of scoping, we can use RavenUsingEndpoint, Worker1, and Worker2 in the same logical transaction without worrying about how we’re passing around IDocumentSession or dealing with any sort of less efficient transaction scoping tied to the thread.

TransactionalBehavior

So step 1, and I’d argue the hardest step, was making sure that the same IDocumentSession object was used by each handler in the request. Step 2 is to execute the transaction.

Utilizing FubuMVC’s Russian Doll model, we insert a new “behavior” into the request handling chain for simple transaction management:

    public class TransactionalBehavior : WrappingBehavior
    {
        private readonly ISessionBoundary _session;

        // ISessionBoundary is a FubuMVC.RavenDb service
        // that just tracks the current IDocumentSession
        public TransactionalBehavior(ISessionBoundary session)
        {
            _session = session;
        }

        protected override void invoke(Action action)
        {
            // The 'action' below is just executing
            // the handlers nested inside this behavior
            action();

            // Saves all changes *if* an IDocumentSession
            // object was created for this request
            _session.SaveChanges();
        }
    }

The sequence of events inside a FubuMVC request using the RavenDb unit of work mechanics would be to:

  1. Determine which chain of behaviors should be executed for the incoming url (routing)
  2. Create a new StructureMap nested container
  3. Using the new nested container, build the entire “chain” of nested behaviors
  4. The TransactionalBehavior starts by delegating to the inner behaviors, one of which would be…
  5. A behavior that invokes the RavenUsingEndpoint.post_session_is_the_same() method
  6. The TransactionalBehavior object, assuming there are no exceptions, executes all the queued up persistence actions as a single transaction
  7. At the end of the request, FubuMVC will dispose the nested container for the request, which will also call Dispose() on any IDisposable object created by the nested container during the request, including the IDocumentSession object.

How does the TransactionalBehavior get applied?

In the early days we got slap happy with conventions, and the simple inclusion of the FubuMVC.RavenDb library in the application’s bin path was enough to add the TransactionalBehavior to any route that responded to the PUT, POST, or DELETE http verbs. For FubuMVC 2.0 I made this behavior “opt in” instead of automatic, so you just have to include a policy to specify which routes or message handling chains need the transactional semantics:

    public class NamedEntityRegistry : FubuRegistry
    {
        public NamedEntityRegistry()
        {
            Services(x => x.ReplaceService(new RavenDbSettings { RunInMemory = true}));

            // Applies the TransactionalBehavior to all PUT, POST, DELETE routes
            Policies.Local.Add<TransactionalBehaviorPolicy>();
        }
    }

You could also use attributes for fine grained application if there was a need for that.

Explicit Transaction Boundaries

I fully realize that many people will dislike the API usage with the nested closure below, but it’s worked out well for us in my opinion, especially since it’s rarely used and is the exception case rather than the normal everyday usage.

All the above stuff is great as long as you’re within the context of an HTTP request or a FubuTransportation message that represents a single logical transaction, but that leaves plenty of circumstances where you need to explicitly control transaction boundaries. To that end, we’ve used the ITransaction service in FubuPersistence to mark a logical transaction boundary by again executing a logical transaction within the scope of a completely isolated nested container for the operation.

Let’s say you have a simple transaction worker class like this one:

    public class ExplicitWorker
    {
        private readonly IDocumentSession _session;

        public ExplicitWorker(IDocumentSession session)
        {
            _session = session;
        }

        public void DoSomeWork()
        {
            // do something that results in writes
            // to IDocumentSession
        }
    }

To execute the ExplicitWorker.DoSomeWork() method in its own isolated transaction, I would invoke the ITransaction service like this:

// Just spinning up an empty FubuMVC application
// FubuMVC.RavenDb is picked up automatically by the assembly
// being in the bin path
using (var runtime = FubuApplication.DefaultPolicies().StructureMap().Bootstrap())
{
    // Pulling a new instance of the ITransaction service
    // from the main application container
    var transaction = runtime.Factory.Get<ITransaction>();
            
    transaction.Execute<ExplicitWorker>(x => x.DoSomeWork());
}

At runtime, the sequence of actions is:

  1. Create a new StructureMap nested container
  2. Resolve a new instance of ExplicitWorker from that nested container
  3. Execute the Action<ExplicitWorker> passed into the ITransaction against the newly created ExplicitWorker object to execute the logical unit of work
  4. Assuming that the action didn’t throw an exception, submit all changes to RavenDb as a single transaction
  5. Dispose the nested container

Use this strategy when you need to execute transactions outside the scope of FubuMVC HTTP requests or FubuTransportation message handling, or anytime when it’s appropriate to execute multiple transactions for a single HTTP request or FT message.

Building an EventStore with User Defined Projections on top of Postgresql and Node.js

EDIT 3/28/2016: Since this blog post still gets plenty of reads, here’s an update. The work described here never got used in a real project (#sadtrombone), but fear not, the projection support is going to live again in a new project called Marten that seeks to turn Postgresql into a very functional Document Db and Event Store with projections.

I did an internal talk on the tooling and concepts in this post at our Salt Lake City office a couple months ago. The recording, for what it’s worth, is here. I’m assuming that you’re at least somewhat familiar with the concepts of Event Sourcing and CQRS, but if you’re not, there are links to descriptive explanations of these concepts in the body of the post.

For most of the 2000’s my go-to strategy for application persistence was to use some sort of object relational mapping to persist and read the object structures that I wanted to work with in my code. Sometimes I used hand rolled code to do the mapping, and other times my teams used NHibernate. In the past couple years I’ve been on projects that used the RavenDb document database with mixed success. I’ve also worked on a couple codebases that used an event sourcing strategy to persist meaningful business events, sometimes with RavenDb as the underlying storage engine, and another project that uses an older version of NEventStore with Sql Server as the storage mechanism.

For various reasons, we’ve chosen to use a Node.js based stack to rewrite an old WPF application that is a suitable candidate for event sourcing on the backend (Corey Kaylor explained his take on this decision in a blog post). Since we already wanted to replace Sql Server (and probably RavenDb) with Postgresql in the long run, at Corey’s suggestion I have been working on and off to try leveraging Postgresql to create a new event store suitable for Node.js development that supports user-defined projections. Lacking all originality, I’m calling this new library “pg-events.” You can find pg-events hosted under my GitHub account (my very first foray back into OSS post-FubuMVC).

 

Feature Set

  • Support the basic event sourcing pattern by appending the raw business events as JSON to the event store
  • Track events by a “stream” of related events that probably relates directly to some kind of business concept or workflow
  • Support user-defined projections of the raw event data to create “read side” views for clients
  • Support aggregated views of a stream (really just another projection). Use a basic snapshotting strategy of the aggregate state for efficiency
  • Build time tooling to initialize a postgresql database with the custom schema objects and import javascript libraries to postgresql
  • A crude, partial implementation of CommonJS that runs within postgresql

 

Conceptual Architecture

The first thing to know is that we’re making a very large bet on the portability of Javascript code and the ability to run at least a subset of this new event store code hosted in Postgresql, Node.js, embedded in other programming environments, or even potentially in a browser. The user-defined projections could potentially be executed in any of the pieces below, and we think that flexibility will pay off down the road for both performance and scalability tuning.

So far, I think the end state is going to consist of these four pieces:

 

(diagram: the pg-events architecture)

  1. Postgresql Database
    1. Custom schema objects largely based on Greg Young’s Building an Event Store paper to store events and stream metadata.
    2. Tables to persist projected views. Most projection views will be persisted to separate tables instead of one giant “pge_views” table for better query performance
    3. Stored procedures to update and query data in the event storage tables, mostly using postgresql’s Javascript support
    4. Executes and updates aggregate snapshots and synchronous projections. More on that in the following section.
  2. A Node.js Client
    1. Exposes methods to append events to the store
    2. Exposes methods to query for projected views, aggregate snapshots, and raw event stream data
    3. See https://github.com/jeremydmiller/pg-events/wiki/Client for more information
  3. Admin CLI Tool
    1. Builds the necessary schema objects into a Postgresql database
    2. Loads the user defined projections and other pg-events libraries into the database
    3. Can reset the event storage and projected view tables to an empty state for testing
    4. Eventually, this tool will also support “recapitulation” to rebuild projection data from the raw events when the definition of a projection changes
  4. Background Projection Runner
    1. Executes and updates projected views in a background process. This is my very next coding adventure. I’m going to build it out first with Node.js, then try my hand at implementing it again with a standalone Golang executable that uses an embedded V8 engine to execute the projections. Expect my twitter feed to be entertaining when I’m able to start that work. I’ll blog about this later when I know what it’s going to actually look like;)

 

 

User Defined Projections 

We looked at EventStore at first and I definitely liked their first class support for user defined projections. Our implementation of projections is very obviously influenced by EventStore’s.

I think that the event sourcing efforts I’ve been a part of have been successful overall, but “projecting” the raw event stream into a persisted read side or view model has been challenging. For pg-events, we’re expressing the projections with simple transformation functions that will take in the initial state and the raw event data and simply return the new state (it’s a logical fold left operation for the projections that work across multiple events).

For a sample event sourcing domain, I’ve been using the idea of a quest from the way too many fantasy books I’ve read over my lifetime. During a quest, our heroes might record events like “QuestStarted”, “MembersJoined”, “MembersDeparted”, or “TownReached.” To know or understand the exact composition of a quest party at any time, we need to replay some of the change events (Gandalf stayed behind to fight the Balrog, Boromir was killed, Frodo and Sam ran off, Gollum joined up, etc.) for the quest.

Say we write a projection for a new view across the events in a single quest called “Party” just to understand the membership. From the unit tests, that projection looks like:

require("../../lib/projections")
	.projectStream({
		name: 'Party',
		stream: 'Quest', 
		mode: 'sync',

		$init: function(){
			return {
				active: true,
				traveled: 0,
				location: null,
				members: []
			}
		},

		QuestStarted: function(state, evt){
			state.active = true;
			state.location = evt.location;
			state.members = evt.members.slice(0);

			state.members.sort();
		},

		TownReached: function(state, evt){
			state.location = evt.location;
			state.traveled += evt.traveled;
		},

		EndOfDay: function(state, evt){
			state.traveled += evt.traveled;
		},

		QuestEnded: function(state, evt){
			state.active = false;
			state.location = evt.location;
		},

		MembersJoined: function(state, evt){
			state.members = state.members.concat(evt.members);
			state.members.sort();
		},

		MembersDeparted: function(state, evt){
			state.location = evt.location;

			for (var i = 0; i < evt.members.length; i++){
				var index = state.members.indexOf(evt.members[i]);
				state.members.splice(index, 1);
			}

			state.members.sort();
		}


	});

You’ll notice that there’s a field called “mode” with a value of “sync.” Using the portability of Javascript, we’re planning for these modes:

  1. sync – A ‘sync’ projection will be executed synchronously inside postgresql within the same transaction as the event capture
  2. async – In progress. An ‘async’ projection will be calculated in a background process instead of at event capture time (Eventual Consistency).
  3. live – Forthcoming. These projections will only be calculated upon demand. I’m not yet sure if we’ll do the actual projection transformations within the database or the Node.js client. I guess we could allow two different “live” modes if there’s value in doing that.

So, is eventual consistency killing you in your current event sourcing efforts because you hit errors caused by querying stale data? Opt for synchronous projections. Have lots of writes, but relatively few reads? Use asynchronous or even live projections that are only calculated on demand. Have lots of reads but very few writes? I think I would again opt for synchronous projections.

I worked on a system a couple years ago in a failed startup that ran projections in a browser to do historical point in time simulations. I don’t see any reason why we couldn’t do something similar in pg-events if that is ever valuable.

Believe it or not, I have a decent start on documenting the projection support at https://github.com/jeremydmiller/pg-events/wiki/Projections.

 

Why Postgresql? 

Postgresql is the people pleaser of database engines. Want all the normal RDBMS capabilities? Would you be more productive using Postgresql as a document database? Want to write stored procedures with a language that closely resembles Oracle’s PL/SQL (no, a thousand times no, never again)?  Would you even want to use Javascript inside the database itself? Regardless of how you answered any of those questions, Postgresql is trying really hard to be what you want. In our case, I like that we can use postgresql as an event store, a document database for things that don’t fit into the event sourcing model, and a classic RDBMS if that’s what we want in some circumstances.

Mostly though, we like that Postgresql has a proven track record and we suspect that the DevOps support tools will be more effective than we’ve experienced with other OSS database tools.

Of course, the only reason why pg-events is viable in the first place is that postgresql has outstanding JSON support and the ability to author stored procedures with Javascript using Google’s V8 engine. With our project timeline, it’s also safe to assume that Postgresql 9.4 with its significant improvements to the JSON storage will be available before we go live to production.

 

Why CQRS isn’t crazy

I’ll freely admit that the first time I saw Greg Young talking about the Command Query Responsibility Segregation (CQRS) style of architecture in 2008 I thought it was nuts. Specifically, I was afraid that doing the transformations between the “write side” model and the “read side” model consumed by the clients would lead to far too much repetitive “left hand, right hand” code. The reality, of course, is that I was already doing a lot of work to map database tables to object graphs, transform domain model objects to DTO’s to send over the wire, and craft database views to transform our raw data into something more conducive to reporting requirements. In a way, CQRS just explicitly calls out a large part of the software development effort that is often overlooked. If we simply accept the idea that different consumers and producers of the persisted state in our system naturally have different needs as far as how the same information is written, structured, and consumed, CQRS isn’t really “crazy talk” or extra work. One of the biggest differences is that with event sourcing + CQRS you probably try to pre-build and persist the read side views instead of trying to create views or DTO’s on the fly from the “one true database model.”

 

Some thoughts on Relational Databases

I’m very much in the camp that says that the database is strictly for persistence and your business logic and/or user interface should never be tightly coupled to whatever the database is, so the idea of just consuming the raw database tabular data in business logic code is a non-starter for me — not to mention that a flat database table structure is very rarely the exact structure that you’d want in your business logic code outside of CRUD-centric applications. I’ve been a part of technical arguments with database-centric folks for so long that I’m simply happy to say “agree to disagree” on these issues and let us all go on our way.

There’s a tremendous amount of inertia and investment in tooling in our industry in regards to the usage of relational databases as the de facto standard for just about all persistence needs. Additionally, most developers, testers, and even the business people seem to naturally understand the relational database model. Even so, as alternative models like document or graph databases build up more tooling, acceptance, and developer familiarity, I think that relational databases will eventually be consigned to reporting applications or pure CRUD applications (but even then I prefer document databases).

That being said, I think that the future really is “polyglot persistence” and that our children are going to laugh at us in decades to come when we explain how we built systems against relational databases.

StructureMap 3.1

I pushed a new minor release version 3.1 of the StructureMap, StructureMap.Web, StructureMap.AutoMocking, StructureMap.AutoMocking.Moq, and StructureMap.AutoFactory packages to Nuget.org this morning. You can see a list of the closed issues in this release here.

Thank you to Matt Honeycutt, Marco Cordeiro, and Jimmy Bogard for their help in making this incremental release.

 

Future Plans

  • The documentation is still in flight and will probably be so for quite some time. What little is there so far is up at http://structuremap.github.io.
  • Xamarin support. StructureMap 3.0 is already built with PCL compliance and runs on WP8, so getting it to run on Xamarin runtimes should be a piece of cake, right? Right?
  • Continue to make bug fix releases as needed. I hate how many bugs have popped up since the 3.0 release, but at least it’s much easier to make incremental releases in the Nuget era than it was with Sourceforge back in the day.

I’m still holding the line on not strong naming StructureMap unless someone does the pull request to support multiple signed and unsigned versions of the Nuget. I’m starting to get asked about a signed version every couple weeks and I still don’t want to do that. You’re always welcome to just clone the repository and sign the code yourself.

 

Building with IContext.Root

A couple StructureMap users have asked over the years to support contextual resolution of injected loggers with tools like log4net or NLog. While StructureMap 2.5+ supported this pattern, some of the support got lost in the big restructuring work for 3.0 and this release brings it back.

Say you’re using a logging tool that allows you to specify different logging rules and mechanisms by namespaces, types, or assemblies. Your logging tool probably has some construct like the following code to build the right logger for a given type:

    public static class LogManager
    {
        // Give me a Logger with the correctly
        // configured logging rules and sources
        // matching the type I'm passing in
        public static Logger ForType(Type type)
        {
            return new Logger(type);
        } 
    }

Now, let’s say that you want StructureMap to inject a Logger into constructor arguments of the objects it’s going to build. If you want to create a Logger that’s suitable for the topmost concrete type being built by a service location request to StructureMap, you could use code similar to this:

            // IContext.RootType is new for 3.1
            var container = new Container(_ => {
                _.For<Logger>()
                    .Use(c => LogManager.ForType(c.RootType))
                    .AlwaysUnique(); 
            });

If you wanted to build the Logger individually to match each type in the object graph being created, use this code instead:

            // Resolve the logger for the type one level up
            container = new Container(_ => {
                _.For<Logger>().Use(c => LogManager.ForType(c.ParentType))
                    .AlwaysUnique();
            });

The AlwaysUnique() lifecycle is important here to force StructureMap to create a new Logger instance every time one is necessary in the object graph to prevent the very first Logger created from being shared throughout the entire object graph. This is one of the very few use cases for the “unique” lifecycle.

 

Child Containers

New for StructureMap 3.0 are “child containers,” which should not be confused with nested containers — and all of that would be much clearer if I’d ever get around to writing the big blog post on nested container behavior that I’ve promised a half dozen people this year. Child containers are meant for stateful client development where you might want to pop a child container for a region, pane, or specific view of the application to override some of the main application services while being able to gracefully fall back to the application container for everything else.
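
A quick sketch of the child container usage (IViewService and its implementations are hypothetical example types):

    public interface IViewService { }
    public class DefaultViewService : IViewService { }
    public class PaneSpecificViewService : IViewService { }

    // The application-wide container holds the normal registrations
    var container = new Container(_ =>
    {
        _.For<IViewService>().Use<DefaultViewService>();
    });

    // Create a child container for one particular pane/region of the client
    // and override a service just for that scope
    var child = container.CreateChildContainer();
    child.Configure(_ =>
    {
        _.For<IViewService>().Use<PaneSpecificViewService>();
    });

    // The override resolves from the child container...
    var forThePane = child.GetInstance<IViewService>();    // PaneSpecificViewService

    // ...while the application container is unaffected
    var elsewhere = container.GetInstance<IViewService>(); // DefaultViewService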

 

Better IEnumerable<T>/IList<T>/T[] Support

Since at least version 2.5, if StructureMap encounters constructor or setter arguments of IEnumerable<T>, IList<T>, or T[] in a concrete type where the dependency is not explicitly configured, those arguments will be fulfilled by creating an enumerable of all configured instances of the type T in the container in the order in which they were registered. Great, and it was valuable in several usages within FubuMVC, but other folks want to resolve the enumeration types directly or as Lazy<IList<T>> or Func<T[]>. StructureMap 3.1 will now resolve enumeration types that are not explicitly configured by returning all the known configured instances of the type T. To make that concrete, see the acceptance tests for this behavior.
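
A small sketch of the new behavior (IWidget and the widget classes are just example types):

    public interface IWidget { }
    public class RedWidget : IWidget { }
    public class BlueWidget : IWidget { }

    var container = new Container(_ =>
    {
        _.For<IWidget>().Add<RedWidget>();
        _.For<IWidget>().Add<BlueWidget>();
    });

    // None of the enumeration types below are explicitly registered, but
    // StructureMap 3.1 resolves each of them as all of the configured
    // IWidget instances, in registration order
    var asEnumerable = container.GetInstance<IEnumerable<IWidget>>();
    var asList = container.GetInstance<IList<IWidget>>();
    var asArray = container.GetInstance<IWidget[]>();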

Composable Generators in Javascript

I’m getting to work with Node.js for the first time and generally enjoying the change of pace from .Net — especially the part where my full Mocha test suite, including integration tests, runs faster than my souped up i7/SSD box can compile even the simplest .Net library. For my part, I’ve been playing with a new library to support event sourcing with user defined projections by leveraging Postgresql’s native JSON features and embedded Javascript support (not much to see yet, but the code is here if you’re curious).

Since it is Node.js, every action you do that touches the database or file system really wants to be asynchronous. I started by using promises with the Bluebird library, and it was okay for simple cases. However, the code became very hard to follow when I started chaining more than 3-4 operations together. Just this week I finally got a chance to try out the usage of the new ES6 generators feature with my codebase in the places where I was having to combine so many asynchronous operations and I’m very happy with the results so far.

To my eyes, generators are a big improvement in readability over promises. Consider two versions of the same method from my event store library. First up, this is the original version using promises to execute a stored procedure in Postgresql (yep, I said sproc) then transform the asynchronous result to return to the calling client:

	append: function(){
		var message = this.toEventMessage(arguments);

		return pg.connectAsync(this.options.connection).spread(function(client, release){
			return client.queryAsync('select pge_append_event($1)', [message])
				.finally(release);
		})
		.then(function(result){
			return Promise.resolve(result.rows[0].pge_append_event);
		});
	},

Now, see the newer equivalent method using generators with some help from the postgres-gen library:

	append: function(){
		var message = this.toEventMessage(arguments);

		return this.db.transaction(function*(t){
			return (yield t.query('select pge_append_event(?)', message)).rows[0].pge_append_event;
		});
	},

I’m going to claim that the generator version on the bottom is cleaner and easier to write and understand than the version that only uses promises above. Some of that is due to my usage of the postgres-gen library to smooth out the interaction with Postgresql, but I think that not having to nest Javascript functions is a big win (yes I know that Coffeescript or arrow functions with Traceur would help too by making the inline function syntax cleaner, but I’d still rather avoid the nesting anyway).

The biggest difference to me came when I wanted to start dynamically composing several asynchronous operations together. As I said earlier, I’m trying to build an event store library with user defined projections. To test the projection support I needed to repeatedly store a sequence of events and then fetch and evaluate the value of the calculated projection to verify the new functionality. I very quickly realized that embedding the original promise mechanics into the code was making the tests laborious to write and hard to read. To that end, I built a tiny DSL just for testing. You can see the raw code in Github here, but consider this sample:

	scenario('can update a view across a stream to reflect latest', function(x){
		var id = uuid.v4();

                // stores several pre-canned events to the event store
		x.append(id, 'Quest', e1_1, e1_2, e1_3);

                // does an assertion that the named view for the event
                // stream above exactly matches this document
		x.viewShouldBe(id, 'Party', {
			active: true,
			location: 'Baerlon',
			traveled: 16,
			members: ['Egwene', 'Mat', 'Moiraine', 'Perrin', 'Rand', 'Thom']
		});

                // do some more events and check the new view
		x.append(id, e1_4, e1_5);
		x.viewShouldBe(id, 'Party', {
			active: true,
			location: 'Shadar Logoth',
			traveled: 31,
			members: ['Egwene', 'Mat', 'Moiraine', 'Perrin', 'Rand']
		});

	});

Every call to the append() or viewShouldBe() methods requires an asynchronous operation to either post or query data from the underlying Postgresql database. Originally, I implemented the testing DSL above by creating an empty promise, then running that promise through all the “steps” defined in the scenario to chain additional promise operations. Earlier this week I got a chance to switch the testing DSL to using generators underneath the language and that made a big difference.

The first thing to know is that you can embed a generator function inside another generator by using the yield* keyword to designate that you want an inner generator function to be executed inline. Knowing that, the way that I implemented the testing DSL shown above was to create an empty array called steps as a member on a scenario object. In the testing expression methods like the append(), I just pushed a new generator function into the steps array like this:

	this.append = function(){
		var message = client.toEventMessage(arguments);
		self.lastId = message.id;

		// Adding a new generator function
		// to the steps array
		this.steps.push(function*(){
			yield client.append(message);
		});
	}

After the scenario is completely defined, my testing DSL just executes the steps as a single generator function as shown in the code below:

	// I'm using the Bluebird Promise.coroutine()
	// method to treat this single aggregated
	// generator method as a single promise
	// that can be happily executed and tracked
	// by the Mocha test harness
	this.execute = function(client){
		var steps = this.steps;

		return Promise.coroutine(function*(){
			for (var i = 0; i < steps.length; i++){
				// delegate to each stored generator function in turn
				yield* steps[i]();
			}
		})();
	}
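If the combination of yield* delegation and Promise.coroutine() is unfamiliar, here’s a tiny standalone sketch of the same idea (a made-up example, not code from the project):

	var Promise = require('bluebird');

	// Two small generator functions standing in for DSL "steps"
	function* stepOne(){
		return yield Promise.resolve(1);
	}

	function* stepTwo(){
		return yield Promise.resolve(2);
	}

	// The outer generator delegates to each step with yield*, and
	// Promise.coroutine() turns the whole thing into a function that
	// returns a single promise
	var runAll = Promise.coroutine(function*(){
		var results = [];
		results.push(yield* stepOne());
		results.push(yield* stepTwo());
		return results;
	});

	// runAll() returns a promise that Mocha (or anything else) can track
	runAll().then(function(results){
		console.log(results); // [ 1, 2 ]
	});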

You can see the commit diff of the code here that demonstrates the difference between using just promises and using a generator to compose the asynchronous operation. The full implementation for the “scenario” testing DSL is here.

As an aside, if you’re playing Design Pattern Bingo at home, my testing DSL uses Fowler’s Nested Closure pattern to first define the steps of a test specification in a nested function before executing those steps. I’ve used that pattern successfully several times for little one-off testing tools and seen plenty of other folks do similar things.
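If the pattern name doesn’t ring a bell, here’s a rough, hypothetical sketch of its shape (the real scenario() implementation is linked above; Scenario and client here are just stand-ins):

	// Nested Closure in miniature: the nested function only *defines*
	// the steps on a specification object, and the collected steps are
	// executed afterwards as one composed asynchronous operation
	function scenario(title, configure){
		it(title, function(){
			var spec = new Scenario(); // stand-in for an object that collects steps

			configure(spec);           // the nested closure adds steps like append()
			                           // and viewShouldBe(), but nothing runs yet

			return spec.execute(client); // only now do the steps actually execute
		});
	}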

 

Using generators as a composable Russian Doll Model with Koa.js

My organization settled on a Node.js based stack for a rewrite of an older .Net system, but when we first started discussing a possible new platform, Node.js was almost summarily dismissed because of its reputation for callback hell. Fortunately, I got a tip on Twitter to look at the Koa.js framework and its use of generators to avoid that problem, and I was sold. Since our timeline is long enough that we feel safe betting on ES6 Harmony features, we’re starting with Koa.js.

One of the things I feel was successful in FubuMVC was our use of what I’ve always called the Russian Doll model. In this model, we could compose common behaviors like validation, authorization, and transaction management into reusable “behaviors” that could be applied individually or conventionally to the handlers for each HTTP route, in a manner similar to aspect-oriented programming (but with object composition instead of IL weaving or dynamic proxies). One of the things that appeals to me about Koa.js is its usage of generators as a middleware strategy that’s conceptually equivalent to FubuMVC’s old behavior model (the current Connect middleware is “single pass,” whereas the Russian Doll model nests the middleware handlers so each one can run optional before and after operations).
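To make the nesting concrete, here’s a minimal sketch of Koa 1.x-style generator middleware (my own toy example, not code from our application): the work before “yield next” happens on the way into the inner layers, and the work after it happens on the way back out, which is exactly the before-and-after composition the Russian Doll model calls for.

	var koa = require('koa');
	var app = koa();

	// Outer middleware: runs code both before and after the inner layers
	app.use(function*(next){
		var start = Date.now();

		yield next; // hand control to the next middleware in the chain

		// execution resumes here after the inner layers finish
		this.set('X-Response-Time', (Date.now() - start) + 'ms');
	});

	// Innermost "handler" middleware
	app.use(function*(){
		this.body = 'Hello from Koa';
	});

	app.listen(3000);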

At some point I’d like to experiment with building the FubuMVC “BehaviorGraph” model of configuring middleware per route for better fine grained control, but that’s a ways off.

React.js plays nicely with other tools and other thoughts

tl;dr: I’m not completely sure about React.js yet just because it’s such a different approach, but I really appreciate how easy it is to use it in conjunction with other client side tools.

My shop is looking very hard at moving to React.js for new development and I’m scrambling a bit to catch up as I’ve been mostly on the server side for years. Partially as a learning experience, I’ve been using React.js to rebuild the UI for our FubuMVC.Diagnostics package in FubuMVC 2.0 as a single page application. While my usage of React.js on the diagnostics is mostly about read-only data displays, I did hit a case where I wanted a “typeahead” style search bar. While a Google search will turn up a couple nascent projects to build typeahead components purely with React.js, none looked to me to be nearly as mature and usable as the existing typeahead.js jquery plugin*.

Fortunately, it turns out to be very easy to use jquery plugins inside React classes, so I just used typeahead.js as is. For the StructureMap diagnostics, I wanted a search bar that allowed users to search all the assemblies, namespaces, and types configured in the main application container with typeahead recommendations. Since I was going to want to reuse that search bar on a couple different little screens, I did things the idiomatic React way and created the small component shown below:

In the not unlikely chance that the code below is mangled and unreadable, you can find the source code in GitHub here.

var SearchBox = React.createClass({
	componentDidMount: function(){
		var element = this.getDOMNode();
		$(element).typeahead({
		  minLength: 5,
		  highlight: true
		},
		{
		  name: 'structuremap',
		  displayKey: 'value',
		  
		  // Just a little object that implements the necessary 
		  // signature to plug into typeahead.js
		  source: FubuDiagnostics.StructureMapSearch.findMatches,
		  templates: {
			empty: '<div class="empty-message">No matches found</div>',
			suggestion: _.template('<div><p><%= display%> - <small><%= type%></small></p><p><small><%= value%></small></p></div>')
		  }
		});
		
		// Behind the scenes, this is just delegating to Backbone's router
		// to 'navigate' the main pane of the page to a different view
		$(element).on('typeahead:selected', function(jquery, option){
			var url = 'structuremap/search-results/' + option.type + '/' + option.value;
			FubuDiagnostics.navigateTo(url);
		});
	},
	
	componentWillUnmount: function(){
		var element = this.getDOMNode();
		$(element).typeahead('destroy');
	},

	render: function(){
		return (
			<input type="search" name="search" ref="input" 
                          className="form-control typeahead structuremap-search" 
                          placeholder="Search the application container" />
		);
	}
});

 

There really isn’t that much to the code above, but the salient points are:

  • The componentDidMount() method fires just after React.js renders the component, giving me the chance to apply the typeahead jquery plugin to the DOM element.
  • In this case the component is a single HTML element, so I could just use the getDOMNode() method to get at the actual HTML DOM element. In more complicated components you can also use the refs collection.
  • The componentWillUnmount() method deactivates and cleans up the typeahead plugin whenever React.js tears down the component.

To use the component above, I just embedded it into another component as more or less a custom tag in the JSX markup:

var Summary = React.createClass({
	render: function(){
		var items = [];
		
		this.props.assemblies.forEach(function(assem, i){		
			items.push(AssemblySummaryItem(assem));
			
			assem.namespaces.forEach(function(ns){
				items.push(NamespaceSummaryItem(ns));
			});
			
		});
		
		return (
			<div>

			{/* This is all you have to do to include it */}
			<SearchBox />
			
			<hr />
			
			<ul className="list-group">
				{items}
			</ul>
			
			</div>
		);
	}
});

 

So what do I think of React.js so far?

Um, well, React is interesting. I know that my initial reaction was similar to many others’: “you seriously want me to embed HTML markup directly into JavaScript?” Once you get past that, it’s been very easy to use for straightforward cases of rendering JSON from the server into HTML displays. I won’t make any permanent judgment of React.js until I use it on something with much more complicated user interactions.

To speak to one of React.js’s major design philosophies, I’ve never liked two-way model binding in any user interface technology I’ve ever used because I think it so easily obfuscates the behavior of screens and causes unexpected side effects. I’m not completely sure how well one-way data flow is going to work out in bigger applications, but I’m at least initially on board with the React.js team’s basic design philosophy.
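To show what I mean by one-way data flow, here’s a minimal, hypothetical sketch (the component names are made up): the parent owns the state, the child only receives props plus a callback, and changes travel back up through that callback instead of through two-way binding.

	var NameInput = React.createClass({
		render: function(){
			var self = this;

			// the child never mutates shared state; it reports the change
			// upward through the callback it was handed in props
			return (
				<input value={this.props.name}
					onChange={function(e){ self.props.onNameChanged(e.target.value); }} />
			);
		}
	});

	var NameEditor = React.createClass({
		getInitialState: function(){
			return { name: 'Rand' };
		},

		nameChanged: function(name){
			// the parent owns the state, so React re-renders the child
			// with the new value flowing back down through props
			this.setState({ name: name });
		},

		render: function(){
			return (
				<div>
					<NameInput name={this.state.name} onNameChanged={this.nameChanged} />
					<p>Hello, {this.state.name}</p>
				</div>
			);
		}
	});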

One thing to consider is that React.js is much more of a library that you use as opposed to an all-encompassing framework like Angular.js or Ember.js that you have to wrap your application around. The downside is that React.js doesn’t really do anything to encourage separation of concerns or help you structure your application beyond the view layer, but the old presentation patterns that came out of older technologies still apply and are still useful if you just treat React.js as the view technology. You’re just on your own to do your own thinking. The upside of React.js being just a library is how easy it is to use it in conjunction with other tools like the entire jquery ecosystem or bits of Backbone.js, or even as something within frameworks like Angular or Ember.

I’ve also done a little bit of a proof of concept of using React.js as the view layer in a possible rewrite of the Storyteller UI into a pure web application. That work will probably generate much more interesting blog posts later as it’s going to require much deeper hierarchies of components, a lot more complicated user interaction, and will push us into exploring React.js usage with separated presentation patterns and much more unit testing.

At least in the circles I run in, there’s been a bit of a meme the past couple years that Functional Programming is the one true path to software development, along with a corresponding backlash against almost everything to do with OOP. If React.js becomes more popular, I think it might help start a rediscovery of the better ways to do OOP: namely, a focus on thinking about responsibilities, roles, collaboration patterns, and object composition as a way to structure complicated React.js components, instead of the stupid class-centric, inheritance-laden, data-centric approach to OOP that’s been so easy for the FP guys to caricature.

 

* I’m not really working much on the client side, so what do I really know? All the same, I think the backlash against jquery feels like a bit of throwing the baby out with the bathwater. Especially if you’re constraining the DOM manipulation to the purely view layer of your code.

Integration Testing with FubuMVC and OWIN

tl;dr: Having an OWIN host to run FubuMVC applications in process made it much easier to write integration tests and I think OWIN will end up being a very positive thing for the .Net development community.

FubuMVC is still dead-ish as an active OSS project, but the things we did might still be useful to other folks so I’m continuing my series of FubuMVC lessons learned retrospectives — and besides, I’ve needed to walk a couple other developers through writing integration tests against FubuMVC applications in the past week so writing this post is clearly worth my time right now.

One of the primary design goals of FubuMVC was testability of application code, and while I think we succeeded in terms of simpler, cleaner unit tests from the very beginning (especially compared to other web frameworks in .Net), there are so many things that can only be usefully tested and verified by integration tests that execute the entire HTTP stack.

Quite admittedly, FubuMVC was initially very weak in this area — partially because the framework itself only ran on top of ASP.Net hosting. While we had good unit test coverage from day one, our integration testing story had to evolve as we went in roughly these steps:

  1. Haphazardly build sample usages of features in an ASP.Net application hosted in either IIS or IISExpress that had to be exercised manually. Obviously not a great solution for regression testing or for setting up test scenarios.
  2. A not completely successful attempt to run FubuMVC on the early “Delegate of Doom” version of the OWIN specification and the old Kayak HTTP server. It worked just well enough to get my hopes up but didn’t really fly (there were some scenarios where it would just hang). If you follow me on Twitter and have seen me blast OWIN as a “mystery meat” API, it’s largely because of lingering negativity from our first attempt.
  3. Running FubuMVC on top of Web API’s Self Host libraries. This was a nice win as it enabled us to embed FubuMVC applications in process for the first time, and our integration test coverage improved quickly. The Self Host option was noticeably slow and never worked on Mono* for us.
  4. Just in time for the 1.0 release, we finally made a successful attempt at OWIN hosting with the less insane OWIN 1.0 specification, and we’ve run most of our integration tests and acceptance tests on Katana ever since. Moving to Katana and OWIN was much faster than the Web API Self Host infrastructure and, at least in early versions, worked on Mono (the Katana team has periodically broken Mono support in later versions).
  5. For the forthcoming 2.0 release I built a new “InMemoryHost” model that can execute a single HTTP request at a time using our OWIN support without needing any kind of HTTP server.

I cannot overstate how valuable an embedded hosting model has been for automated testing. Debugging test failures, using the application services exactly as the real application is configured, and setting up test data in databases or injecting fake services is much easier with the embedded hosting models.

Here’s a real problem that I hit early this year. FubuMVC’s content negotiation (conneg) logic for selecting readers and writers was originally based only on the HTTP accepts and content-type headers, which is great and all except for how many ill-behaved clients are out there that don’t play nicely with the HTTP specification. I finally went into our conneg support earlier this year and added support for overriding the accepts header, with a built-in default that looked for a query string on the request like “?format=json” or “?format=xml.” While there are some unit tests for the internals of this feature, this is exactly the kind of feature that really has to be tested through an entire HTTP request and response to verify correctness.

If you’re having any issues with the formatting of the code samples, you can find the real code on GitHub at the bottom of this page.

 

I started by building a simple GET action that returned a simple payload:

    public class OverriddenConnegEndpoint
    {
        public OverriddenResponse get_conneg_override_Name(OverriddenResponse response)
        {
            return response;
        }
    }

    public class OverriddenResponse
    {
        public string Name { get; set; }
    }

Out of the box, the new “conneg/override/{Name}” route above responds with either a JSON or XML serialization of the output model based on the value of the accepts header, with “application/json” being the default representation in the case of a wildcard accepts header. With the new functionality, content negotiation also needs to look out for the query string override rules.

The next step is to define our FubuMVC application with a simple IApplicationSource class in the same assembly:

    public class SampleApplication : IApplicationSource
    {
        public FubuApplication BuildApplication()
        {
            return FubuApplication.DefaultPolicies().StructureMap();
        }
    }

The role of the IApplicationSource class in a FubuMVC application is to bootstrap the IoC container for the application and define any custom policies or configuration of a FubuMVC application. By using an IApplicationSource class, you’re establishing a reusable configuration that can be quickly applied to automated tests, making your tests as close to the production deployment of your application as possible for more realistic testing. This is crucial for FubuMVC subsystems like validation or authorization that mix behavior into routes and endpoints based on conventions determined by type scanning or configured policies.

 

Using Katana and EndpointDriver with FubuMVC 1.0+

First up, let’s write a simple NUnit test for overriding the conneg behavior with the new “?format=json” trick, but with Katana and the FubuMVC 1.0 era EndpointDriver object:

        [Test]
        public void with_Katana_and_EndpointDriver()
        {
            using (var server = EmbeddedFubuMvcServer
                .For<SampleApplication>())
            {
                server.Endpoints.Get("conneg/override/Foo?format=json", acceptType: "text/html")
                    .ContentTypeShouldBe(MimeType.Json)
                    .ReadAsJson<OverriddenResponse>()
                    .Name.ShouldEqual("Foo");
            }
        }

Behind the scenes, the EmbeddedFubuMvcServer class spins up the application and a new instance of a Katana server to host it. The “Endpoints” object exposes a fluent interface to define an HTTP request that will be executed with .Net’s built-in WebClient object. The EndpointDriver fluent interface was originally built specifically to test the conneg support in the FubuMVC codebase itself, but is usable in a more general way for testing your own application code written on top of FubuMVC.

 

Using the InMemoryHost and Scenarios in FubuMVC 2.0

EndpointDriver was somewhat limited in its coverage of common HTTP usage, so I hoped to completely replace it in FubuMVC 2.0 with a new “scenario” model somewhat based on the Play Framework’s Play Specification tooling in Scala. I also knew that I wanted a purely in-memory hosting model for integration tests to avoid the extra time it takes to spin up an instance of Katana and to sidestep potential port contention issues.

The result is the same test as above, but written in the new “Scenario” style concept:

        [Test]
        public void with_in_memory_host()
        {
            // The 'Scenario' testing API was not completed,
            // so I never got around to creating more convenience
            // methods for common things like deserializing JSON
            // into .Net objects from the response body
            using (var host = InMemoryHost.For<SampleApplication>())
            {
                host.Scenario(_ => {
                    _.Get.Url("conneg/override/Foo?format=json");
                    _.Request.Accepts("text/html");

                    _.ContentTypeShouldBe(MimeType.Json);
                    _.Response.Body.ReadAsText()
                        .ShouldEqual("{\"Name\":\"Foo\"}");
                });
            }
        }

 

The newer Scenario concept was an attempt to make HTTP-centric testing more declarative. C# is not as expressive as Scala, but I was still trying to make the test expression as clean and readable as possible, and the syntax above probably would have evolved after more usage. The newer Scenario concept also has complete access to FubuMVC 2.0’s raw HTTP abstractions so that you’re not limited at all in what kinds of things you can express in integration tests.

If you want to see more examples of both EmbeddedFubuMvcServer/EndpointDriver and InMemoryHost/Scenario in action, please see the FubuMVC.IntegrationTesting project on GitHub.

 

Last Thoughts

If you choose to use either of these tools in your own FubuMVC application testing, I’d highly recommend doing something at the testing assembly level to cache the InMemoryHost or EmbeddedFubuMvcServer so that they can be used across test fixtures to avoid the nontrivial cost of repeatedly initializing your application.

While I’m focusing on HTTP-centric testing here, using either tool above also has the advantage of building out your application’s IoC container exactly the way it is in production, for more accurate integration testing of the underlying application services.

 

If I had to do it all over again…

We would have had an embedded hosting model from the very beginning, even if it had been a fake, “only one HTTP request at a time” model. Moreover, if we were ever to reboot a FubuMVC-like web framework on the future KVM runtime, I would vote for wrapping the new framework completely around the OWIN signature as our one and only model of an HTTP request. In any theoretical future effort, I’d invest time from the very beginning in something like FubuMVC 2.0’s InMemoryHost model to make integration and acceptance testing easier and faster.

With the recent release of StructureMap 3, I’d also opt for a new child container model so that you can fake out application services on a test-by-test basis and roll back to the previous container state without having to re-initialize the entire application each time, for faster testing.

 

* Mono support was a massive time sink for the FubuMVC project and never really paid off. Maybe having Xamarin involved much earlier in the new KVM .Net runtime and the emphasis on PCL support will make that situation much better in the future.

Some Thoughts on Collective Ownership and Knowledge

This is a mild rewrite of an old blog post of mine from 2005 that I think is still relevant today. While this was originally about pair programming, I’ve tried to rewrite this to be more about working collaboratively in general. 

My shop is having some constructive internal discussions about our development process and practices. While I don’t think that Pair Programming is likely to be a big part of our daily routine, I think we could still use a bigger dose of collective ownership in our various codebases and less siloing of developers. Since I’m a typical introverted developer who doesn’t handle hours of pair programming well, and I don’t think my attitude is uncommon at all, we need to look for other ways to get the same type of shared understanding that you would get from 100% paired programming.

One of my suggestions is, at a minimum, to stop assigning coding stories to a single developer. Even if the developers aren’t pair programming all the time, they can still collaborate on the approach and split tasks so they can code in parallel. On one hand this should increase our team’s shared understanding of the codebase, and on the other it’ll definitely help us work discrete stories serially to completion instead of having so many half-done stories lying around at any one time.

 

“This is the Way We Do It”

My favorite metaphor for software design these days is a fishing tackle box.  I want a place for everything and everything in its place.  When I need a top water lure, I know exactly where to look.  I put data access code here, business logic there, and hook the two things up like this.  When I’m creating new code I want to organize it along a predictable structure that anyone else on the project will instantly recognize.  When other coders are making changes I want them to follow the same basic organization so I can find and understand their code later.

Each application is a little bit different so the application’s design is always going to vary.  In any agile project you should hit an inflection point in the team’s velocity that I think of as the “This is the Way We Do It” moment.  Things become smoother.  There are fewer surprises.  Story estimates become more consistent and accurate.  When any pair starts a new story, they understand the general pattern of the system structure and can make the mechanical implementation with minimal fuss.

You really want to get to this point as soon as you can.  There are two separate issues to address before you can reach this inflection point:

  1. Determining the pattern and architecture for the system under development
  2. Socializing the design throughout the development team

To the first point, pairing allows you to bring to bear the knowledge and experience of everybody on the team to the work at hand. It’s just not possible for any one developer to understand every technology and design pattern in the world. By having every developer active in the project design, you can often work out a workable approach faster than a solo architect ever could. On the one project I’ve done with theoretical 100% pairing, we had a couple of developers with a lot of heavy client experience and me with more backend and web development experience. By pairing together with our disparate knowledge, we could rapidly create a workable general design strategy for the system as a whole by bringing a wider skill set to any coding task.

Right now I’m working some with a system that’s being built by two different teams that work in two very different ways, with very different levels of understanding of some of the core architecture. That’s really not an ideal situation and it’s causing plenty of grumbling. There are a couple of specific subsystems that generate complaints about their usability. In one case it just needs a full redesign, but that really needs to take place with more than one or two people involved. In the other case, the problems might be from the team that didn’t build the subsystem not understanding how it works and why it was built that way, or it might be because the remaining developer who worked on it doesn’t completely understand the problems that the other team is having. Regardless of what the actual problem is, more active collaboration between the teams might help make that subsystem more usable.

If you’re a senior developer or the technical lead, one of your responsibilities is fostering an understanding of the technical direction to the other developers.  Nothing else I’ve ever done as a lead (design sessions, documentation, presentations, “do this,” etc.) beats working shoulder to shoulder with other developers as a mechanism for creating a shared understanding of the project strategy.  By making every developer be involved or at least exposed to the thinking behind the design, they’ll have much more contextual information about the design.

We have a great policy of doing internal brown bag presentations at our development office and it’s giving our guys a lot more experience in presentation skills and opportunities to learn about new tools and techniques along the way. While I’d like to see that continue and I like learning about programming languages we don’t yet use, I also think our teams should probably speak about their own work much more often to try to create better shared understanding of the architectures and code structures that they use every day.

Part of any explanation or knowledge sharing about your system’s architecture, structure, and build processes needs to include a discussion of why you’ve chosen or arrived at that architecture. One unpleasant fact I’ve discovered over and over again is that the more detailed the instructions you give to another developer, the worse the code that comes back to you. I simply can’t do the thinking for someone else, especially if I’m trying to do all the thinking upfront, independently of the feedback you’d naturally get from working through the problem at hand. If the developer doing the work understands the “why” of your design or instructions, they’ll often do a better job and make improvements as they go, and that’s especially important if you gave out instructions that turn out to be the wrong approach and they should really do something different altogether.

For example, I’ve spent the last couple years learning where some of the more rarely used cooking utensils and gadgets in our kitchen go. Instead of trying to memorize where each thing I find unloading the dishwasher goes, my wife has explained her rationale for how the kitchen is organized and I’m somewhat able to get things put up in the right place now. Organizing code isn’t all that different.

 

Improving Our Code vs. Defensiveness about My Code

Don’t for one second discount the psychological advantages of pair programming or even just a more collaborative coding process. Formal or even just peer code reviews can be nasty affairs. They can often amount to a divide between the prosecution and the accused. In my admittedly limited experience, they’ve been largely blown off or have devolved into meaningless compliance checks against coding style standards. Even worse is the fact that they are generally used as a gating process just prior to the next stage of the waterfall, which eliminates their usefulness because it’s too late to make any kind of big change.

The collective ownership achieved with pair programming can turn this situation on its head. My peers and I can now start to talk about how to improve our code instead of being defensive or sensitive to criticism about my code. Since we’ve all got visibility into the majority of the code, we can have informed conversations about the overall technical direction. The in-depth code review happens in real time, so problems are caught sooner. Add in the shifting of different coders through different areas of the code and you end up with more eyes on any important piece of code. The ability to be self-critical about existing code, without feeling defensive, helps continuously improve the system design and code. I think this is one of the primary ways in which agile development can lead to a better, more pleasant workplace.