
Health Monitoring and Task Reassignment in our Service Bus Applications

FubuMVC 3.0 actually has a full-blown service bus framework that started as an add-on project called “FubuTransportation.” We’ve used it in production for 3 years, we’re generally happy with it, and it’s the main reason why we’ve done an about-face and decided to continue with FubuMVC again.

Corey Kaylor and I have been actively working on FubuMVC again. We’re still planning a reboot of at least the service bus functionality to the CoreCLR with a more efficient architecture next year (“Jasper“), but for right now we’re just working to improve the performance and reliability of our existing service bus applications. The “health monitoring” and persistent task functionality explained here has been in our codebase for a couple years and used a little bit in production, but we’re about to try to use it for something mission critical for the first time. I’d love to have any feedback or suggestions for improvements you might have. All the code shown here is pulled from this namespace in GitHub.

A Distributed System Spread Over Several Nodes

For the sake of both reliability and the potential for horizontal scaling later, we want to be able to deploy multiple instances of our distributed application to different servers (or to separate processes on the same box), as shown below:



A distributed application behind a load balancer

We generally employ hardware load balancers to distribute incoming requests across all the available nodes. So far, all of this is pretty typical and relatively straightforward as long as any node can service any request.

However, what if your architecture includes some kind of stateful “agent” that can, or at least should, be active on only one of the nodes at a time?

I’m hesitant to describe what we’re doing as Agent Oriented Programming, but that’s what I’m drawing on to think through this a little bit.


“Agent” worker processes should only be running on a single node

In our case, we’re working with a system that is constantly updating a “grid” of information stored in memory and directing work through our call centers. Needless to say, it’s a mission critical process. What we’re attempting to do is to make the active “agent” for that planning grid be managed by FubuMVC’s service bus functionality so that it’s always running on exactly one node in the cluster. That means that we need to be able to:

  • Have the various nodes constantly checking up on each other to make sure that agent is running somewhere and the assigned node is actually up and responsive
  • Be able to initiate the assignment of that agent to a new node if it is not running at all
  • Potentially shut down any extraneous instances of that agent if there is more than one running

Years ago, Chris Patterson of MassTransit fame explained something to me called the Bully Algorithm that can be used for exactly this kind of scenario. With a lot of help from my colleague Ryan Hauert, we came up with the approach described in this post.


Persistent Tasks

I reserve the right to change the name later (IAgent maybe?), but for now the key interface for one of these sticky agents is shown below:

public interface IPersistentTask
{
    Uri Subject { get; }

    // This is supposed to be the health check.
    // Should throw an exception if anything is wrong ;)
    void AssertAvailable();

    void Activate();
    void Deactivate();
    bool IsActive { get; }

    // This method would perform the actual assignment
    Task<ITransportPeer> SelectOwner(IEnumerable<ITransportPeer> peers);
}

Hopefully the interface is largely self descriptive. We were already using Uri’s throughout the rest of the code, and it made sense to us to use them to identify the persistent tasks as well. This interface gives developers the hooks to start or stop the task from running, a way to do health checks, and a way to apply whatever kind of custom owner selection algorithm you want.
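
To make that a little more concrete, here is a minimal sketch of what an implementation might look like. The PlanningGridAgent and PlanningGrid types below are hypothetical stand-ins for illustration, not classes from our real system, and the naive owner selection at the bottom is replaced by a more deliberate strategy later in this post:

// Hypothetical stateful component that the agent manages
public class PlanningGrid
{
    public bool IsResponsive { get; private set; }
    public void Start() { IsResponsive = true; }
    public void Stop() { IsResponsive = false; }
}

public class PlanningGridAgent : IPersistentTask
{
    private readonly PlanningGrid _grid;

    public PlanningGridAgent(PlanningGrid grid)
    {
        _grid = grid;
    }

    public Uri Subject { get; } = new Uri("planning://grid");

    // Throw if the underlying grid isn't healthy
    public void AssertAvailable()
    {
        if (!_grid.IsResponsive)
            throw new InvalidOperationException("The planning grid is not responding");
    }

    public void Activate()
    {
        _grid.Start();
    }

    public void Deactivate()
    {
        _grid.Stop();
    }

    public bool IsActive
    {
        get { return _grid.IsResponsive; }
    }

    // Naive selection: take the first peer that accepts ownership
    public async Task<ITransportPeer> SelectOwner(IEnumerable<ITransportPeer> peers)
    {
        foreach (var peer in peers)
        {
            var status = await peer.TakeOwnership(Subject);
            if (status == OwnershipStatus.AlreadyOwned || status == OwnershipStatus.OwnershipActivated)
            {
                return peer;
            }
        }

        return null;
    }
}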

These persistent tasks are added to a FubuMVC application by registering an instance of the IPersistentTaskSource interface shown below into the application container (there is a simple recipe for standalone tasks that handles both interfaces in one class):

public interface IPersistentTaskSource
{
    // The scheme or protocol from the task Uri's
    string Protocol { get; }

    // Subjects of all the tasks built by this
    // object that should be running
    IEnumerable<Uri> PermanentTasks();

    // Create a task object for the given subject
    IPersistentTask CreateTask(Uri uri);
}

The IPersistentTaskSource might end up going away as unnecessary complexity in favor of just directly registering IPersistentTask’s. It was built with the idea of running, assigning, and monitoring agents per customer/tenant/region/etc. I’ve built a couple systems in the past half decade where it would have been very advantageous to have had that functionality.

The ITransportPeer interface used in the SelectOwner() method models the available nodes and it’s described in the next section.


Modeling the Nodes

The available nodes are modeled by the ITransportPeer shown below:

public interface ITransportPeer
{
    // Try to make this node take ownership of a task
    Task<OwnershipStatus> TakeOwnership(Uri subject);

    // Tries to ask the peer what the status is for all
    // of its assigned tasks
    Task<TaskHealthResponse> CheckStatusOfOwnedTasks();

    void RemoveOwnershipFromNode(IEnumerable<Uri> subjects);

    IEnumerable<Uri> CurrentlyOwnedSubjects();

    string NodeId { get; }
    string MachineName { get; }
    Uri ControlChannel { get; }

    // Shutdown a running task
    Task<bool> Deactivate(Uri subject);
}

ITransportPeer’s come in just two flavors:

  1. A class called PersistentTaskController that directly controls and manages the tasks on the executing node.
  2. A class called TransportPeer that represents one of the external nodes. The methods in this version send messages to the control channel of the node represented by the peer object and wait for a matching response. The other nodes consume those messages and make the corresponding calls on their own local PersistentTaskController (a conceptual sketch of this flavor follows below).
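
Here is that conceptual sketch. The IRequestSender abstraction and the TakeOwnershipRequest/TakeOwnershipResponse message shapes below are illustrative stand-ins for the request/reply idea, not the actual FubuMVC types:

// Hypothetical request/reply messages and sender abstraction
public class TakeOwnershipRequest
{
    public TakeOwnershipRequest(Uri subject)
    {
        Subject = subject;
    }

    public Uri Subject { get; private set; }
}

public class TakeOwnershipResponse
{
    public OwnershipStatus Status { get; set; }
}

public interface IRequestSender
{
    Task<TResponse> Request<TResponse>(Uri destination, object message);
}

// Only one method of the peer contract is shown here
public class RemoteTransportPeer
{
    private readonly IRequestSender _sender;

    public RemoteTransportPeer(Uri controlChannel, IRequestSender sender)
    {
        ControlChannel = controlChannel;
        _sender = sender;
    }

    public Uri ControlChannel { get; private set; }

    // Ownership is requested by sending a message to the remote node's
    // control channel and awaiting the matching response from the
    // PersistentTaskController running on that node
    public async Task<OwnershipStatus> TakeOwnership(Uri subject)
    {
        var response = await _sender
            .Request<TakeOwnershipResponse>(ControlChannel, new TakeOwnershipRequest(subject))
            .ConfigureAwait(false);

        return response.Status;
    }
}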


Reassigning Tasks

Now that we have a way to hook in tasks and a way to model the available peers, we need some kind of mechanism within IPersistentTask classes to execute the reassignment. The only mechanism we’ve built and used so far is a simple algorithm that assigns a task based on an order of preference, using the OrderedAssignment class shown below:

public class OrderedAssignment
{
    private readonly Uri _subject;
    private readonly ITransportPeer[] _peers;
    private int _index;

    public OrderedAssignment(Uri subject, IEnumerable<ITransportPeer> peers)
    {
        _subject = subject;
        _peers = peers.ToArray();
        _index = 0;
    }

    public async Task<ITransportPeer> SelectOwner()
    {
        return await tryToSelect().ConfigureAwait(false);
    }

    private async Task<ITransportPeer> tryToSelect()
    {
        var transportPeer = _peers[_index++];

        try
        {
            var status = await transportPeer.TakeOwnership(_subject).ConfigureAwait(false);

            if (status == OwnershipStatus.AlreadyOwned || status == OwnershipStatus.OwnershipActivated)
            {
                return transportPeer;
            }
        }
        catch (Exception e)
        {
            // Swallow the failure and fall through to try the next peer
        }

        if (_index >= _peers.Length) return null;

        return await tryToSelect().ConfigureAwait(false);
    }
}

Inside of an IPersistentTask class, the ordered assignment could be used something like this:

public virtual Task<ITransportPeer> SelectOwner(IEnumerable<ITransportPeer> peers)
{
    // it's lame, but just order by the control channel Uri
    var ordered = peers.OrderBy(x => x.ControlChannel.ToString());
    var completion = new OrderedAssignment(Subject, ordered);

    return completion.SelectOwner();
}


Health Monitoring via the Bully Algorithm

So now we have a way to model persistent tasks, reassign tasks, and model the connectivity to all the available nodes.

Inside of PersistentTaskController is this method that checks all the known persistent task state on every known running node:

public async Task EnsureTasksHaveOwnership()
{
    // Go run out and check the status of all the tasks that are
    // theoretically running on each node
    var healthChecks = AllPeers().Select(async x =>
    {
        var status = await x.CheckStatusOfOwnedTasks().ConfigureAwait(false);
        return new { Peer = x, Response = status };
    });

    var checks = await Task.WhenAll(healthChecks).ConfigureAwait(false);

    // Determine what corrective steps, if any, should be taken
    // to ensure that every known task is running in just one place
    var planner = new TaskHealthAssignmentPlanner(_permanentTasks);
    foreach (var check in checks)
    {
        planner.Add(check.Peer, check.Response);
    }

    var corrections = planner.ToCorrectionTasks(this);

    await Task.WhenAll(corrections).ConfigureAwait(false);

    _logger.Info(() => "Finished running task health monitoring on node " + NodeId);
}

In combination with the TaskHealthAssignmentPlanner class, this method is able to jumpstart any known tasks that either aren’t running or were running on a node that is no longer reachable or reports that its tasks are in an error state.

The EnsureTasksHaveOwnership() method is called from a system level polling job running in a FubuMVC application. There’s an important little twist on that, though: to reduce the chance of unpredictable behavior from the health monitoring checks running on every node simultaneously, the polling interval is randomized by this settings class:

public double Interval
{
    get
    {
        // The *first* execution of the health monitoring takes
        // place 100 ms after the app is initialized
        if (_initial)
        {
            _initial = false;
            return 100;
        }

        // After the first call, the polling interval is randomized
        // between each call
        return Random.Next(MinSeconds, MaxSeconds) * 1000;
    }
}

At the time we were building this two years ago, I found an article online advising you to randomize the polling intervals, but I don’t remember where that was :(

By using the bully algorithm, we’re able to effectively make a cluster of related nodes able to check up on each other and start up or reassign any tasks that have gone down. We’re utilizing this first to do a “ready standby” failover of an existing system.

Actually Doing the Health Checks

The health check needs to run some kind of “heartbeat” action implemented through the IPersistentTask.AssertAvailable() method on each persistent task object to ensure that it’s really up and functioning. The following code is taken from PersistentTaskController where it does a health check on each running local task:

public async Task<TaskHealthResponse> CheckStatusOfOwnedTasks()
{
    // Figure out which tasks are running on this node right now
    var subjects = CurrentlyOwnedSubjects().ToArray();

    if (!subjects.Any())
    {
        return TaskHealthResponse.Empty();
    }

    // Check the status of each running task by calling the
    // IPersistentTask.AssertAvailable() method
    var checks = subjects
        .Select(async subject =>
        {
            var status = await CheckStatus(subject).ConfigureAwait(false);
            return new PersistentTaskStatus(subject, status);
        });

    var statusList = await Task.WhenAll(checks).ConfigureAwait(false);

    return new TaskHealthResponse
    {
        Tasks = statusList.ToArray()
    };
}

public async Task<HealthStatus> CheckStatus(Uri subject)
{
    var agent = _agents[subject];

    return agent == null
        ? HealthStatus.Unknown
        : await checkStatus(agent).ConfigureAwait(false);
}

private static async Task<HealthStatus> checkStatus(IPersistentTaskAgent agent)
{
    return agent.IsActive
        ? await agent.AssertAvailable().ConfigureAwait(false)
        : HealthStatus.Inactive;
}



Subscription Storage

Another obvious challenge: how does each node “know” about its peers? FubuMVC pulls that off with its “subscription” subsystem. In our case, each node writes information about itself to a shared persistence store (mostly backed by RavenDb in our ecosystem, but we’re moving that to Marten). The subscription persistence also enables each node to discover its peers.

Once the subscriptions are established, each node can communicate with all of its peers through the control channel addresses in the subscription storage. That basic architecture is shown below with the obligatory boxes and arrows diagram:


The subscription storage was originally written to enable dynamic message subscriptions between systems, but it’s also enabled our health monitoring strategy shown in this post.
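
Just to make the idea concrete, the kind of node record each node writes about itself might look roughly like the sketch below. This is an illustrative shape only; the actual subscription documents in FubuMVC are richer and named differently:

public class TransportNodeRecord
{
    // Unique id for the running node (one per process)
    public string NodeId { get; set; }

    // The machine the node is running on
    public string MachineName { get; set; }

    // Where peers should send system level messages for this node
    public Uri ControlChannel { get; set; }

    // The persistent task subjects this node currently owns
    public Uri[] OwnedTasks { get; set; }
}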



Control Queue

We need the health monitoring and subscription messages between the various nodes to be fast and reliable. We don’t want the system level messages getting stuck in queues that might be backed up with normal activity. To that end, we finally put the idea of a designated “control channel” into FubuMVC so that you can designate a single channel as the preferred mechanism for sending control messages.

The syntax for making that designation is shown below in a code sample taken from FubuMVC’s integrated testing:

public ServiceRegistry()
{
    // The service bus functionality is "opt in"
    ServiceBus.Enable(true);

    // I explain what "Service" is in the next code sample
    Channel(x => x.Service)
        // Listen for incoming messages on this channel
        .ReadIncoming()

        // Designate this channel as preferred for system level messages
        .UseAsControlChannel()

        // Opts into LightningQueue's non-persistent mode
        .DeliveryFastWithoutGuarantee();

    // I didn't want the health monitoring running on this node
}

If you’re wondering what in the world “x => x.Service” refers to in the code above, that just ties into FubuMVC’s strong typed configuration support (effectively the same concept as the new IOptions configuration in ASP.Net Core, just with less cruft;)). The application described by ServiceRegistry shown above also includes a class that holds configuration items specific to this application:

public class TestBusSettings
{
    public Uri Service { get; set; } = "lq.tcp://localhost:2215/service".ToUri();
    public Uri Website { get; set; } = "lq.tcp://localhost:2216/website".ToUri();
}

The primary transport mechanism we use is LightningQueues (LQ), an OSS library built and maintained by my colleague Corey Kaylor. LQ is normally a “store and forward” queue, but it has a new, opt-in “non persistent” mode (like ZeroMQ, except .Net friendly) that we can exploit for our control channels in FubuMVC. In the case of the control queues, it’s advantageous to not persist those messages anyway.


My Concerns

It’s damn complicated and testing was obscenely hard. I’m a little worried about network hiccups causing it to unnecessarily try to reassign tasks. We might put some additional retries into the health checks. The central subscription persistence is a bit of a concern too because that’s a single point of failure.

Reliable and “Debuggable” Automated Testing of Message Based Systems in a Crazy Async World

In my last post on Automated Testing of Message Based Systems, I talked about how we are able to collapse all the necessary pieces of distributed messaging systems down to a single process and why I think that makes for a much better test automation experience. Once you can reliably bootstrap, tear down, and generally control your distributed application from a test harness, you move on to a couple more problems: when is a test actually done and what the heck is going on inside my system during the test when messages are zipping back and forth?


Knowing when to Assert in the Asynchronous World

Since everything is asynchronous, how does the test harness know when it’s safe to start checking outcomes? Any automated test is more or less going to follow the basic “arrange, act, assert” structure. If the “act” part of the test involves asynchronous operations, your test harness has to know how to wait for the “act” to finish before doing the “assert” part of a test in order for the tests to be reliable enough to be useful inside of continuous integration builds.

For a very real scenario, consider a test that involves:

  1. A website application that receives an HTTP POST, then sends a message through the service bus for asynchronous processing
  2. A headless service that actually processes the message from step 1, and quite possibly sends related messages to the service bus that also need to be finished processing before the test can do its assertions against expected change of state.
  3. The headless service may be sending messages back to the website application

To solve this problem in our test automation, we came up with the “MessageHistory” concept in FubuMVC’s service bus support to help order test assertions after all message processing is complete. When activated, MessageHistory allows our test harnesses to know when all message processing has been completed in a test, even when multiple service bus applications are taking part in the messaging work.

When a FubuMVC application has the service bus feature enabled, you can activate the MessageHistory feature by first bootstrapping in the “Testing” mode like so:

public class ChainInvokerTransportRegistry 
    : FubuTransportRegistry<ChainInvokerSettings>
{
    public ChainInvokerTransportRegistry()
    {
        // This property opts the application into
        // the "testing" mode
        Mode = "testing";

        // Other configuration declaring how the application
        // is composed. 
    }
}
In a blog post last week, I talked about How we do Semantic Logging, specifically how we’re able to programmatically add listeners for strong typed audit or debug messages. By setting the “Testing” mode, FubuMVC adds a new listener called MessageRecordListener that listens for log messages related to service bus message handling. The method below from MessageRecordListener opts the listener into any logging message that inherits from the MessageLogRecord class we use to mark messages related to service bus processing:

public bool ListensFor(Type type)
{
    return type.CanBeCastTo<MessageLogRecord>();
}

For the purposes of the MessageHistory, we listen for:

  1. EnvelopeSent — history of a message that was sent via the service bus
  2. MessageSuccessful or MessageFailed — history of a message being completely handled
  3. ChainExecutionStarted — a message is starting to be executed internally
  4. ChainExecutionFinished — a message has been completely executed internally

All of these logging messages have the message correlation id, and by tracking the outstanding “activity started” messages against the “activity finished” messages, MessageHistory can “know” when the message processing has completed and it’s safe to start processing test assertions. Even if an automated test involves multiple applications, we can still get predictable results as long as every application is logging its information to the static MessageHistory class (I’m not showing it here, but we do have a mechanism to connect message activity back to MessageHistory when we use separate AppDomain’s in tests).
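
As a rough illustration of that bookkeeping idea (and emphatically not the actual MessageHistory code), the tracking boils down to something like this hypothetical ActivityTracker:

public static class ActivityTracker
{
    private static readonly object _lock = new object();
    private static readonly HashSet<string> _outstanding = new HashSet<string>();
    private static bool _hasSeenAnyWork;

    public static void Clear()
    {
        lock (_lock)
        {
            _outstanding.Clear();
            _hasSeenAnyWork = false;
        }
    }

    // Called when an EnvelopeSent or ChainExecutionStarted record is seen
    public static void Start(string correlationId)
    {
        lock (_lock)
        {
            _hasSeenAnyWork = true;
            _outstanding.Add(correlationId);
        }
    }

    // Called when a MessageSuccessful, MessageFailed, or
    // ChainExecutionFinished record is seen
    public static void Finish(string correlationId)
    {
        lock (_lock)
        {
            _outstanding.Remove(correlationId);
        }
    }

    // "Safe to assert" means we saw at least one message and
    // nothing is still in flight
    public static bool AllWorkIsComplete()
    {
        lock (_lock)
        {
            return _hasSeenAnyWork && _outstanding.Count == 0;
        }
    }
}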

Just to help connect the dots, the MessageRecordListener relays information about work that’s started or finished to MessageHistory with this method:

public void DebugMessage(object message)
{
    var log = message.As<MessageLogRecord>();
    var record = log.ToRecord();

    if (record.IsPollingJobRelated()) return;

    // Tells MessageHistory about the recorded
    // activity
    MessageHistory.Record(record);
}


Inside of test harness code, the MessageHistory usage is like this:

MessageHistory.WaitForWorkToFinish(() =>
{
    // Do the "act" part of your test against a running
    // FubuMVC service bus application or applications
});
This method does a few things:

  1. Clears out any existing message history inside of MessageHistory so you’re starting from a blank slate
  2. Executes the .Net Action “continuation” you passed into the method as the first argument
  3. Polls until there has been at least one recorded “sent” tracked message and all outstanding “sent” messages have been logged as completely handled or until the configured timeout period has expired.
  4. Returns a boolean that just indicates whether or not MessageHistory finished successfully (true) or just timed out (false).

For the pedants and the truly interested among us, the WaitForWorkToFinish() method is an example of using Continuation Passing Style (CPS) to correctly order the execution steps. I would argue that CPS is very useful in these kinds of scenarios where you have a set order of execution but some step in the middle or end can vary.
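
Building on the hypothetical ActivityTracker sketch from the previous section, the polling loop behind a method shaped like WaitForWorkToFinish() could look roughly like this (again, an illustration rather than the real MessageHistory implementation):

public static bool WaitForWorkToFinish(Action action, TimeSpan? timeout = null)
{
    var deadline = DateTime.UtcNow.Add(timeout ?? TimeSpan.FromSeconds(30));

    // 1. Start from a blank slate
    ActivityTracker.Clear();

    // 2. Execute the "act" portion of the test
    action();

    // 3. Poll until all tracked messages are accounted for or the timeout hits
    while (DateTime.UtcNow < deadline)
    {
        if (ActivityTracker.AllWorkIsComplete()) return true;
        Thread.Sleep(100);
    }

    // 4. Signal that we timed out instead of completing
    return false;
}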


Visualizing What the Heck Just Happened

The next big challenge in testing message-based, service bus applications is being able to understand what is really happening inside the system when one of these big end to end tests fails. There’s asynchronous behavior and loosely coupled publish/subscribe mechanics. It’s clearly not the easiest problem domain to troubleshoot when things don’t work the way you expect.

We have partially solved this problem by tying the semantic log messages produced by FubuMVC’s service bus system into the results report of our automated tests. Specifically, we use the Storyteller 3 project (one of my other OSS projects that is being criminally neglected because Marten is so busy) as our end to end test harness. One of the powerful features in Storyteller 3 is the ability to publish and embed custom diagnostics into the HTML results report that Storyteller produces.

Building on the MessageRecordListener setup in the previous section, FubuMVC will log all of the service bus activity to an internal history. In our Storyteller test harness, we wipe out the existing state of the recorded logging messages before the test executes, then at the end of the specification run we gather all of the recorded logging messages for just that test run and inject some custom HTML into the test results.

We do two different visualizations. The first is a “threaded” message history arranged around the life of a single message: who published it, who handled it, and what became of it.


The threaded history view helps to understand how a single message was processed from sender, to receiver, to execution. Any error steps will show up in the threaded history. So will retry attempts and any additional messages triggered by the topmost message.

We also present much the same information in a tabular form that exposes the metadata of the message envelope wrapper at every point where activity was recorded:



I’m using images for the blog post, but these reports are written into the Storyteller HTML results. These diagnostics have been invaluable to us in understanding how our message based systems actually behave. Having these diagnostics as part of the test results on the CI server has been very helpful in diagnosing failures in the CI builds that can be notoriously hard to debug.

Next time…

At some point I’ll blog about how we integrate FubuMVC’s HTTP diagnostics into the Storyteller results and maybe a different post about the performance tracking data that Storyteller exposes as part of the testing results. But don’t look for any of that too soon;)

How we do Semantic Logging

I know full well that what I’m showing here is polarizing and some of you are going to hate it, but what we did does have some significant benefits and it’s possible it could be pulled off later with cleaner usage mechanics. When you read this and start saying, “isn’t this like Serilog?”, please just read to the end. 

I’ve frequently used the term “semantic logging” to describe the mechanism we still use in FubuMVC for our logging abstraction. By semantic logging, I mean that we log strong-typed messages for most framework audit and debug actions.

To make that concrete, this is part of our ILogger interface that FubuMVC uses internally:

public interface ILogger
{
    // LogTopic is yes, a standardized base
    // class with some basic properties for
    // timestamps and a Guid Id
    void DebugMessage(LogTopic message);
    void InfoMessage(LogTopic message);

    // This is the mechanism for conditional logging if
    // there is a registered listener for whatever T is
    void DebugMessage<T>(Func<T> message) where T : class, LogTopic;
    void InfoMessage<T>(Func<T> message) where T : class, LogTopic;
}

Taking the service bus part of FubuMVC as an example, we have these log topic types that get logged at various points in message sending and handling:

  1. EnvelopeSent — in service bus parlance, an “envelope” would be a message plus whatever metadata the service bus needs to get it to the right place.
  2. EnvelopeReceived
  3. MessageSuccessful
  4. MessageFailed

For a concrete example from inside our service bus features, we log an EnvelopeSent log record any time a message is sent from the service bus client:

private void sendToChannel(Envelope envelope, ChannelNode node)
{
    var replyUri = _router.ReplyUriFor(node);

    var headers = node.Send(envelope, _serializer, replyUri: replyUri);

    _logger.InfoMessage(() => new EnvelopeSent(new EnvelopeToken
    {
        Headers = headers,
        Message = envelope.Message
    }, node));
}

As you can see above, we pass a Func&lt;EnvelopeSent&gt; into the logger.InfoMessage() method instead of directly building an EnvelopeSent object. We do that so the logging can be optional: if nothing is listening for the EnvelopeSent log record type, that Func is never executed. Some of you might argue that this is a little less efficient at runtime than doing an explicit if (_logger.ListensFor(typeof(EnvelopeSent))) _logger.InfoMessage(new EnvelopeSent(…)); check, and you’d be correct, but I prefer the Func approach for expressing optional logging because I think it’s more terse and easier to read.

Internally, the actual “sinks” for these log records have to implement this interface (partially elided for space):

public interface ILogListener
{
    bool IsDebugEnabled { get; }
    bool IsInfoEnabled { get; }

    // Does this particular listener
    // care about a log record type?
    bool ListensFor(Type type);

    void DebugMessage(object message);
    void InfoMessage(object message);
}

With this interface, you are able to subscribe for certain log record types being logged by the application.
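
To connect the Func-based ILogger methods to these listeners, the conditional dispatch could look roughly like the sketch below. The ListenerDispatcher class is made up for illustration; the real FubuCore logger differs in its details:

public class ListenerDispatcher
{
    private readonly ILogListener[] _listeners;

    public ListenerDispatcher(IEnumerable<ILogListener> listeners)
    {
        _listeners = listeners.ToArray();
    }

    // Mirrors the intent of ILogger.DebugMessage<T>(Func<T> message)
    public void DebugMessage<T>(Func<T> message) where T : LogTopic
    {
        // Only pay to build the log record if at least one listener cares about T
        var interested = _listeners
            .Where(x => x.IsDebugEnabled && x.ListensFor(typeof(T)))
            .ToArray();

        if (interested.Length == 0) return;

        var record = message();

        foreach (var listener in interested)
        {
            listener.DebugMessage(record);
        }
    }
}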

We manage both ILogger and the full collection of ILogListener’s through our application’s IoC container. There probably isn’t a more polarizing technique in the software world than using an IoC tool, but in our case I think it’s been very advantageous because of how easy it makes it to add or remove log listeners and to govern the lifecycle or scoping of log listeners (think about a log listener scoped to a single HTTP request, which makes it stupidly easy to correlate log messages to that particular request).


The Downside

Some of my coworkers don’t like this approach because of the need to add all the extra Type’s to your code. To their point, I don’t think you’d bother to make every single log message its own special type, only things that are more valuable to key off of in system diagnostics or in tooling like our test automation support sample I’m showing in another section below. We do have some basic “Info(string message) or Debug(string message)” support too in our ILogger, but my advice for that kind of thing has been to go straight to some kind of low level logging tool like NLog or log4net.

The benefit is richer system monitoring, because it’s so easy to intelligently parse and filter the information being logged.

Some Valuable Usages


  • Our HTTP request tracing in FubuMVC depends on a custom ILogListener that tracks any and all log topic records to the record of that HTTP request. By being able to listen for only certain log topic Type’s, we can easily move our diagnostic trace level from nothing to a production mode where we only hang on to certain types of logging actions to a very verbose, record everything mode for local development. Ditto for our service bus support inside of FubuMVC (was FubuTransportation).
  • Having the strong typed log record messages made it fairly simple to create powerful visualizations of different logging actions in FubuMVC’s built in diagnostics
  • I actually think it’s easier to filter out what logging message types you care about listening for than trying to filter logging output by the classes or namespaces in your application like folks do with tools like log4net or NLog.
  • We can inject custom ILogListener’s in automated testing harnesses to gather a lot of application diagnostics and embed these diagnostics directly into our acceptance testing output reports. That’s been extremely valuable for us to troubleshoot failing tests or understand performance problems in our code by having an easy correlation between each automated test and the logging records.  I’ll write a blog post about how we do that someday.
  • In my very next blog post I’ll talk about how we use an ILogListener to monitor the activity of a service bus application so that you can much more reliably coordinate automated test harnesses with the application code.

Replace with Serilog the next time around?

I’m very happy with what we were able to do in terms of its impact, and “semantic logging” has consistently been in my “keep from FubuMVC” column in our planning for whatever we do next. In the longer term though, I think I’d like to look at using Serilog as our underlying logging mechanism and just figure out how we could inject custom listeners into Serilog to replace our diagnostic and test automation abilities. It would mean giving up on the strong typed messages that *I* like, but might be more palatable for other developers. Now that we’ve mostly replaced server side rendering with a React.js based UI in our diagnostics, it might make more sense anyway to do the visualizations against a JSON body.

Mostly though, that’s just a way to have less custom OSS code that I have to support and maintain;)


What about…?

Anytime I write a blog post like this I get a flood of “have you tried XYZ?” kind of questions about other tools that I may or may not have heard of. I’ll try to cover some of them:

  • log4net? NLog? – now that NLog is strong named (boo!), I don’t have any preference for either and I would never in a million years consider coupling any of my OSS projects to either. In other news, strong naming still sucks and it’s something you have to consider when you release an OSS tool
  • There is the Semantic Logging Block tool from Microsoft P&P. I haven’t used it, and my informal Twitter poll on it suggests that folks prefer Serilog over it.
  • What about taking a dependency on Common Logging then doing whatever you want for the actual logging? I looked at doing this years ago when we first introduced logging into FubuMVC and decided against it. My thinking is that Common Logging suffers from a “boiling the ocean” kind of problem that bloats its API surface.
  • Liblog? It’s probably a better alternative to Common Logging without some of the compatibility headaches brought on by strong naming conflicts, but doesn’t really address the idea of semantic logging
  • I know that once upon a time the Glimpse guys were throwing around a similar idea for semantic logging (they weren’t using the same terminology though) with rich logging messages that you could consume in your own application monitoring tooling.

How we did authorization in FubuMVC, and what I’d do differently today

FubuMVC as an OSS project is still mostly dead, but we still use it very heavily at work and one of our teams asked me to explain how the authorization works. You’re probably never going to use FubuMVC, but we did some things that were interesting and successful from a technical perspective. Besides, it’s always more fun to learn from other people’s mistakes — in this case, my mistakes;)

Early in the project that originally spawned FubuMVC, we spent almost two months tackling authorization rules in our application. We had a few needs:

  1. Basic authorization to allow or deny user actions based on their roles and permissions
  2. Authorization rules based on the current business object state (rules like “an amount > 5,000 requires manager approval”).
  3. The ability to extend the authorization of the application with custom rules and exceptions for a customer without any impact on the core code
  4. Use authorization rules to enable, disable, show, or hide navigation elements in the user interface

The core architectural concept of FubuMVC was what we called the “Russian Doll Model,” which let you effectively move cross cutting concerns into our version of middleware. Authorization was an obvious place to use a wrapping middleware class around endpoint actions. If a user has the proper rights, continue on. If the user does not, return an HTTP 403 and some kind of authorization failure response and don’t execute the inner action at all.

For some context, you can see the part of our AuthorizationBehavior class we used to enforce authorization rules:

protected override void invoke(Action action)
{
    // Skip the check entirely if authorization is disabled
    if (!_settings.AuthorizationEnabled)
    {
        action();
        return;
    }

    var access = _authorization.IsAuthorized(_context);

    // If authorized, continue to the inner behavior in the 
    // chain (filters, controller actions, views, etc.)
    if (access == AuthorizationRight.Allow)
    {
        action();
    }
    else
    {
        // If authorization fails, hand off to the failure handler
        // and stop the inner behaviors from executing.
        // The failure handler service was pluggable in fubu too
        var continuation = _failureHandler.Handle();
        _context.Service<IContinuationProcessor>().Continue(continuation, Inner);
    }
}

Pulling the authorization checks into a separate middleware independent of the inner HTTP action had a few advantages:

  • It’s easier to share common authorization logic
  • Reversibility — meaning that it was easy to retrofit authorization logic onto existing code without having to dig into the actual endpoint action code.
  • The inner endpoint action code could be simpler by not having to worry about authorization (in most cases)

FubuMVC itself might have failed, but the strategy of using composable middleware chains is absolutely succeeding as you find it in almost all of the newer HTTP frameworks and service bus frameworks including the new ASP.Net Core MVC code.

Authorization Rights and Policies

The core abstraction in FubuMVC’s authorization  subsystem was the IAuthorizationPolicy interface that could be executed to determine if a user had rights to the current action:

public interface IAuthorizationPolicy
{
    AuthorizationRight RightsFor(IFubuRequestContext request);
}

The “AuthorizationRight” class above was a strong typed enumeration consisting of:

  1. “None” — meaning that the policy just didn’t apply
  2. “Allow”
  3. “Deny”

If multiple rules applied to a given endpoint, each rule would be executed to determine if the user had rights. Any rule evaluating to “Deny” automatically failed the authorization check. Otherwise, at least one rule had to be evaluated as “Allow” to proceed.

Being able to combine these checks enabled us to model both the simple, role and permission-based authorization, and also rules based on business logic and the current system state.
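
To make those combination rules concrete, the evaluation logic boils down to something like the small sketch below (illustrative only, not the actual FubuMVC implementation):

public static AuthorizationRight Combine(IEnumerable<AuthorizationRight> rights)
{
    var answer = AuthorizationRight.None;

    foreach (var right in rights)
    {
        // Any explicit Deny trumps everything else
        if (right == AuthorizationRight.Deny) return AuthorizationRight.Deny;

        // Remember that at least one rule voted Allow
        if (right == AuthorizationRight.Allow) answer = AuthorizationRight.Allow;

        // "None" just means the rule didn't apply, so it is ignored
    }

    return answer;
}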

Simple Role Based Authorization

In the following example, we have an endpoint action that has a single “AllowRole” authorization rule:

// This action would have the Url: /authorized/hello
[AllowRole("Greeter")] // the specific role name here is just an example
public string get_authorized_hello()
{
    return "Hello.";
}

Behind the scenes, that [AllowRole] attribute is evaluated once at application startup time and adds a new AllowRole object to the underlying model for the “GET /authorized/hello” endpoint.

For some context, the AllowRole rule (partially elided) looks like this:

public class AllowRole : IAuthorizationPolicy
{
    private readonly string _role;

    public AllowRole(string role)
    {
        _role = role;
    }

    public AuthorizationRight RightsFor(IFubuRequestContext request)
    {
        return PrincipalRoles.IsInRole(_role) 
            ? AuthorizationRight.Allow 
            : AuthorizationRight.None;
    }
}



Model Based Configuration

FubuMVC has an underlying model called the “behavior graph” that models exactly which middleware handlers are applicable to each HTTP route. Part of that model is an exact linkage to the authorization policies that were applicable to each route. In the case above, the “GET /authorized/hello” endpoint has a single AllowRole rule added by the attribute.

More powerfully though, FubuMVC also allowed you to reach into the underlying behavior graph model and add additional authorization rules for a given endpoint. You could do this through the marker attributes (even your own attributes if you wanted), programmatically if you had to, or through additive conventions like “all endpoints with a route starting with /admin require the administrator role.” We heavily exploited this ability to enable customer-specific authorization checks in the original FubuMVC application.
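
To make the convention idea a bit more concrete, an additive policy might look something like the sketch below. Treat it as illustrative only: I'm writing it from memory, so the IConfigurationAction signature and the chain and authorization member names should be read as assumptions rather than the exact FubuMVC API:

public class AdminEndpointsRequireAdministrator : IConfigurationAction
{
    public void Configure(BehaviorGraph graph)
    {
        // Find every chain whose route starts with "admin"
        // (the route lookup below is an assumed helper, not the exact API)
        var adminChains = graph.Chains
            .Where(x => x.GetRoutePattern() != null
                     && x.GetRoutePattern().StartsWith("admin", StringComparison.OrdinalIgnoreCase));

        foreach (var chain in adminChains)
        {
            // Add an AllowRole("administrator") policy to each matching endpoint
            chain.Authorization.AddRole("administrator");
        }
    }
}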

Authorization & Navigation

By attaching the IAuthorizationPolicy objects to each behavior chain, FubuMVC is also able to tell you programmatically if a user could navigate to or access any HTTP endpoint using the IEndpointService like this:

// Pull the service from the running application
IEndpointService endpoints = runtime.Get<IEndpointService>();

var endpoint = endpoints
    .EndpointFor<SomeEndpoint>(x => x.get_hello());

var isAuthorized = endpoint.IsAuthorized;

We used the IEndpointService (and a simpler one not shown here called IAuthorizationPreviewService) in server side rendering to know when to show, hide, enable, or disable navigation elements based on whether or not a user would have rights to access those routes. By decoupling the authorization rules a little bit from the actual endpoint action code, we were able to define the authorization rules exactly once for every endpoint, then reuse the same logic in navigation accessibility that we did at runtime when the actual resource was requested.

This ability to “preview” authorization rights for HTTP endpoints was also useful for hypermedia endpoints where you used authorization rights to include or exclude additional links in the response body.

Lastly, having the model of what authorization rules applied to each route enabled FubuMVC to be able to present diagnostic information and visualizations of the middleware configuration for each HTTP endpoint. That kind of diagnostics becomes very important when you start using conventional policies or extensibility mechanism to insert authorization rules from outside of the core application.

Rule Object Lifecycle

FubuMVC, especially in its earlier versions, is awful for the number of object allocations it makes at runtime. Starting with FubuMVC 2, I tried to reduce that overhead by making the authorization rule objects live through the lifecycle of the application itself instead of being created fresh by the underlying IoC container on every single request. Great, but there are some services that you may need to access to perform authorization checks — and those services sometimes need to be scoped to the current HTTP request. To get around that, FubuMVC’s IAuthorizationPolicy takes in an IFubuRequestContext object which among other information contains a *gasp* service locator for the current request scope that authorization rules can use to perform their logic.

There’s been an almost extreme backlash against any and all usages of service locators over the past several years. Most of that is probably very justified, but in my opinion, it’s still very valid to use a service locator within an object that has a much longer lifetime than the dependencies it needs to use within certain operations. And no, using Lazy<T> or Func<T> builders injected in at the original time of creation will not work without making a potentially harmful dependency on things like the HttpContext for the scoping to work.

Please don’t dare commenting on this post with any form of “but Mark Seemann says…” I swear that I’ll reach through the interwebs and smack you silly if you do. Probably make a point of never, ever doing that on any kind of StructureMap list too for that matter.

What I’d do differently if there’s a next time

For the past couple years we’ve kicked around the idea of an all new framework we’re going to call “Jasper” that would be essentially a reboot of a core subset of FubuMVC on the new CoreCLR and all the forthcoming ASP.Net Core goodies. At some point I’ve said that I wanted to bring over the authorization model from FubuMVC roughly as is, but the first step is to figure out if we could use something off the shelf so we don’t have to support our own custom infrastructure (and there’s always the possibility that we’ll just give up and go to the mainstream tools).

The single biggest thing I’d change the next time around is to make it far easier to do one-off authorization rules as close as possible to the actual endpoint action methods. My thought has been to have a convention or yet another interface, something like “IAuthorized,” so that endpoint classes could expose some kind of Authorize() method returning an AuthorizationRight for one-off rules.
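
To show what I mean, here's a rough sketch of that hypothetical convention. None of this exists today; the IAuthorized interface, the InvoiceEndpoint class, and the “manager” role below are all made up for illustration:

// The hypothetical one-off authorization hook
public interface IAuthorized
{
    AuthorizationRight Authorize(IFubuRequestContext context);
}

public class InvoiceEndpoint : IAuthorized
{
    // A one-off rule that lives right next to the endpoint action it guards
    public AuthorizationRight Authorize(IFubuRequestContext context)
    {
        return PrincipalRoles.IsInRole("manager")
            ? AuthorizationRight.Allow
            : AuthorizationRight.Deny;
    }

    public string get_invoice_approval()
    {
        return "Approved";
    }
}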

Honestly though, I’m content with how the authorization model played out in FubuMVC. A big part of the theoretical plans for “Jasper” is to be much, much more conscious about allocating new objects (i.e., use the IoC container much less) and to adopt an async by default, all the way through approach to the runtime model.

Streamlining FubuMVC Bootstrapping & the Design Patterns Used

As I said in my last post, we’re rebooting FubuMVC as a new project called Jasper. As an intermediate step, I’m working up a FubuMVC 3.0 release that among other things, simplifies the application configuration and bootstrapping so that it’s quicker and easier to stand up a brand new FubuMVC web or service bus application.

At its very, very simplest, for the classic Hello World exercise, the newly streamlined bootstrapping looks like this (when we rebrand as Jasper, just replace “Fubu” with “Jasper” in the class names below):

// This little endpoint class handles
// the root url of the application
public class HomeEndpoint
{
    public string Index()
    {
        return "Hello, World";
    }
}

public class RunHelloWorld
{
    public void start_and_run()
    {
        // Bootstrap a basic FubuMVC application
        // with just the default policies and services
        using (var runtime = FubuRuntime.Basic())
        {
            // Execute the home route and verify
            // the response
            runtime.Scenario(_ =>
            {
                _.Get.Url("/");
                _.ContentShouldBe("Hello, World");
            });
        }
    }
}

Of course, as we all know, real world development gets a lot hairier than cute little hello world examples and you’re not going to get away with the framework defaults in most cases. In the case of FubuMVC, you might want to add services to the application IoC container, change or add conventions and policies, configure features, turn on “opt in” features, or run an application in either the special “development” or “testing” mode.

Real World Configuration in FubuMVC 3.0

After all the dust settled, a FubuMVC application is completely described, configured, bootstrapped, and cleanly deactivated by just two classes:

  1. FubuRegistry is used to express all of the configurable elements of a FubuMVC application
  2. FubuRuntime holds all the runtime elements of a running FubuMVC application like the application container, the activation logs, the routing table, the root file path, the application mode (development/testing/normal) and “knows” how to cleanly shut down the running application.

A custom FubuRegistry might look something like this one:

public class ExampleRegistry : FubuRegistry
{
    public ExampleRegistry()
    {
        // Turn on some opt in features here

        // Change the application mode if you want
        Mode = "development";

        // Have the application use an embedded 
        // Katana host at port 5501

        // Register services with the IoC container
        // using a superset of StructureMap's
        // Registry DSL
        Services.AddService<IActivator, MyActivator>();

        // For testing purposes, you may want 
        // to bootstrap the application from an external
        // testing library, in which case, you'd want
        // to override where FubuMVC looks for static
        // asset files like JS or CSS files
        RootPath = "some other path";
    }
}

To bootstrap the application specified by ExampleRegistry, you can use this syntax below to create a new FubuRuntime object:

using (var server = FubuRuntime.For<ExampleRegistry>())
{
    // do stuff with the application
    // or wait for some kind of signal
    // that you should shut it off
}

If you don’t want to bother with your own subclass of FubuRegistry, you can forgo it and bootstrap a FubuRuntime with syntax like this:

var runtime = FubuRuntime.Basic(_ =>
{
    // I'm opting for NOWIN hosting this time
    // and letting FubuMVC try to pick an open
    // port starting from 5500
});

There is a FubuRegistry involved in the code above, but you’re configuring it inside the lambda expression argument. More on that below in the section on the design patterns.

Bootstrapping Use Cases and When You Want Your Own FubuRegistry

Just for some background, here are the various use cases for bootstrapping a FubuMVC web or service bus application that I could think of:

  1. Bootstrapping for ASP.Net hosting in the good ol’ Global.asax like this.
  2. Spinning up an application adhoc inside of tests like this example from FubuMVC 2.2.
  3. We have a tool we call Serenity (yes, it’s named for what you think it is) that we use to setup integration tests for FubuMVC with Storyteller.
  4. Running an application with our fubu run development server
  5. Hosting a FubuMVC application within a background service using our JasperService tool (was BottleServiceRunner) that’s just a small shim to TopShelf

The easiest way to use Serenity is to say “here, use this FubuRegistry,” while options #4 and #5 use type scanning either to find the one single FubuRegistry in your code or to use one by name to activate the hosted application. In those cases, it’s highly useful to have a custom FubuRegistry. Even without that, it’s also valuable to have your application configuration done in a way that’s easily reusable within automated tests, as in use case #2 up above, so that your tests are more reflective of how the actual application runs.


An aside on the design patterns I’ve used

Other people certainly have much more negative opinions, but I feel like learning, studying, and applying design patterns in code has been very useful to my career. At Oredev 2010 I gave a talk about Internal DSL’s in C# whose content is still very much relevant to the work I did the past month with FubuMVC’s bootstrapping.

The FubuMVC bootstrapping uses at least three different design patterns you can find described in Martin Fowler’s DSL book.

  1. Fluent Interface — method chaining to create an internal DSL directly in C# “if you squint hard enough”
  2. Object Scoping — FubuRegistry (and StructureMap’s Registry) are examples of using “object scoping” to shorten the signatures of a fluent interface by hosting it on a base class. Most of your declarations end up being calls to a custom fluent interface made inside the constructor function of your FubuRegistry subclass.
  3. Nested Closure — the usage of FubuRuntime.Basic(x => {}), where you pass an action that configures some option type as the single argument to a function. Nested closures are helpful when you need to let a user specify any number of optional parameters as the input to a discrete action. I discussed nested closures ages ago in an MSDN article, and a small generic illustration follows below.
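
Outside of FubuMVC entirely, the nested closure pattern boils down to the generic sketch below. The Server and ServerOptions types are made up purely to show the shape of the pattern:

public class ServerOptions
{
    public int Port = 5500;
    public string HostName = "localhost";
}

public static class Server
{
    // The caller supplies a closure that mutates the defaults,
    // and this method decides when and how to apply the finished options
    public static void Start(Action<ServerOptions> configure)
    {
        var options = new ServerOptions();

        configure(options);

        Console.WriteLine("Starting " + options.HostName + ":" + options.Port);
        // ... start listening with the completed options ...
    }
}

// Usage:
// Server.Start(x => { x.Port = 5501; });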

I also remember going to a lot of talks at that Oredev on the new flashy HTML 5 technologies that have been long since passed by. The obvious takeaway is that conceptual knowledge tends to outlast the usefulness of knowledge about specific technologies.


Just for Comparison, FubuMVC before 3.0

When I was asked by some of our technical leadership to simplify the bootstrapping for FubuMVC 3.0, my first reaction was “it’s not that bad” — but this was what FubuMVC had previously:

  • FubuRegistry & FubuRuntime were just a little smaller than they are today
  • You used a static property called FubuMvcPackageFacility.PhysicalRootPath to change the root directory of the content files. And yes, the mutable static property was a problem.
  • FubuApplication was a static class that you used to trigger a fluent interface that would gather up your FubuRegistry and allow you to choose an IoC adapter or use a pre-built IoC container in order to create a new FubuRuntime
  • IApplicationSource was a reusable recipe for how to bootstrap a FubuRuntime that we previously used for fubu run or our older TopShelf service host.
  • EmbeddedFubuMvcServer was a class we used to stand up a FubuMVC application hosted with Katana for testing or just embedding a web server for diagnostics in service bus applications running as a background service. Of course, that all got duplicated when we added NOWIN support and it duplicated some functionality from FubuRuntime anyway.
  • FubuMode was a static class we used to detect and tell you whether the application should run normally or in “development” or “testing” mode. Statics are evil you say? Yeah well keep reading…
  • PackageRegistry was a static class from our now defunct Bottles library that exposed information about the loaded extension assemblies in the application and the diagnostic logging from application bootstrapping.

As of now, we’ve completely eliminated all of the static classes and properties. All of the configuration is done through FubuRegistry, and all of the runtime information about a FubuMVC application is exposed off of FubuRuntime. So yeah, it’s quite a bit better now. It’s good to pay attention to the feedback of others because they see things you don’t or problems you just get too comfortable working around.



Gutting FubuMVC and Rebooting as “Jasper”

tl;dr – FubuMVC and FubuTransportation (the service bus tooling we built on top of FubuMVC) are getting a full reboot with the name “Jasper” on the new DNX platform. This blog post tries to explain why we’d do such a silly thing and describe our current thinking on the technical direction to start getting some feedback. Just for fun, I’m also describing a lot of functionality that I’ve been ripping out of FubuMVC in preparation for the reboot for folks that are interested in how web development has changed since FubuMVC was conceived in 2008-9.

My wife loves watching all the home remodeling shows on HG TV. One of her favorites is a show called Love it or List It. The premise of the show is a couple that wants to move to a new house gets the opportunity to choose between staying in their old home after it has been remodeled by one of the show’s stars — or decides to sell the now remodeled home in favor of purchasing a different house that the other star of the show finds for them on the market. Last year I said that I was giving up on FubuMVC development when it became clear that it was going nowhere and our community support was almost completely gone.

My shop had some flirtations with other platforms and like many shops we have been supplementing .Net development with Node.js work, but this is our current situation as I see it:

  1. We’ve got a pretty big portfolio of existing FubuMVC-based applications, and the idea of rewriting them to a new platform or even just a different .Net application framework or architecture is daunting
  2. We’re very happy with how the FubuTransportation service bus built on top of FubuMVC has worked out for us in production, but we would like it to be sitting on top of a technical foundation that isn’t “dead” like FubuMVC
  3. We’d love to get to be able to “Docker-ize” our applications and quite possibly move our production hosting and day to day development off of Windows
  4. We’ve got a huge investment and coupling in test automation and diagnostics tooling tied to FubuMVC and FubuTransportation that’s providing value
  5. I think many of us are generally positive about the new .Net tooling (DNX, Roslyn, CoreCLR) — except for the part where they didn’t eliminate strong naming in the new world order:(

Taking those factors into account, we’ve decided to “love it” instead of “leaving it” with what I’ve been calling a “Casino Royale style reboot” of the newly combined FubuMVC and FubuTransportation.

I’m working this week to hopefully finish up an intermediate FubuMVC 3.0 release that largely consolidates the codebase, streamlines the application bootstrapping, improves the diagnostics, and eliminates a whole lot of code and functionality that no longer matters. When we finally make the jump to DNX, FubuMVC/FubuTransportation is going to be rebranded as “Jasper.”



The Vision for Jasper

My personal hopes for Jasper are that we retain the best parts of FubuMVC, dramatically improve the performance and scalability of our applications, and solve the worst of the usability problems that FubuMVC and FubuTransportation have today. I’m also hoping that we end up with a decent foundation of technical documentation just for once. I’m not making any unrealistic goals for adoption beyond having enough community support to be viable.

We’re trying very hard to focus on what we consider to be our core competencies this time instead of trying to build everything ourselves. We’re going to fully embrace OWIN internally as a replacement for FubuMVC’s behavior model. We’re betting big on the future of OWIN servers, middleware, and community — even though I’ve been known to gripe about OWIN on Twitter from time to time. We’re also going to support integration and usage of some subset of ASP.Net MVC from within Jasper applications, with a catch. Some users have asked us to make Jasper an addon to ASP.Net MVC, but my strong opinion is that what we want to do with Jasper won’t work unless Jasper is in control of the middleware composition and routing.

Mainly though, we just need Jasper to provide enough benefits to justify the time we’re going to spend building it on work time;-)

What’s Changing from FubuMVC

  • We’re going to stop using the Routing module from ASP.Net in favor of a new routing engine for OWIN based on the Trie algorithm
  • Dropping support for System.Web altogether. It’s OWIN or nothing baby.
  • Dropping most of the server side rendering support, probably including our Spark view engine support. More on this below.
  • The OWIN AppFunc is going to be the new behavior. We’re keeping the behavior graph model for specifying which middleware goes where, but this time we’re planning to use Roslyn to compile code at runtime for composing all the middleware for a single route or message type into one OWIN AppFunc. We have high hopes that doing this will lead to easier to understand exception stack traces and a significantly more efficient runtime pipeline than the current FubuMVC model. We’ll also support MidFunc, but it won’t be encouraged. (A tiny sketch of what AppFunc composition means follows this list.)
  • Part of adopting OWIN is that we’ll be going async by default in all routes and message handling chains. Users won’t be forced to code this way, but it’s a great way to wring out a lot more scalability and many other .Net tools are already headed in this direction.
  • Jasper needs to be much easier in cases where users need to drop down directly to HTTP manipulation or opt out of the conventional FubuMVC behavior
  • FubuMVC on Mono was an unrewarding time sink. I’m very hopeful that with Jasper and the new cross platform support for .Net that coding on OS X and hosting on Linux will be perfectly viable this time around.
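
For anyone unfamiliar with OWIN, the sketch below shows what “composing middleware into a single AppFunc” means in the simplest possible terms. It is purely illustrative and is not the planned Jasper implementation, which intends to generate this kind of composition with Roslyn at runtime:

using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;

public static class OwinComposition
{
    // Wrap an inner AppFunc with a trivial timing "middleware"
    public static AppFunc WithTiming(AppFunc inner)
    {
        return async environment =>
        {
            var stopwatch = System.Diagnostics.Stopwatch.StartNew();

            await inner(environment);

            stopwatch.Stop();
            environment["example.request-duration-ms"] = stopwatch.ElapsedMilliseconds;
        };
    }

    // The idea is that a whole behavior chain collapses down to
    // one pre-composed AppFunc per route or message type
    public static AppFunc ComposeChain(AppFunc endpoint)
    {
        return WithTiming(endpoint);
    }
}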


What Jasper is Keeping from FubuMVC

  • One of our taglines from FubuMVC was the “web framework that gets out of your way.” To that end, Jasper should have the least possible intrusion into your application — meaning that we’ll try hard to avoid cluttering up application code with fluent interfaces from the framework, mandatory base classes, marker interfaces, and the copious number of attributes that are all too common in many .Net development tools.
  • Keep FubuMVC’s Russian Doll Model and the “BehaviorGraph” configuration model that we use to compose pipelines of middleware and handlers per route or service bus message
  • Retain the “semantic logging” strategy we use within FubuMVC. I think it’s been very valuable for diagnostics purposes and frequently for testing automation support. The Glimpse guys are considering something similar for ASP.Net MVC that we might switch to later if that looks usable.
  • Continue supporting the built in diagnostics in FubuMVC/FT. These are getting some considerable improvement in the 3.0 release for performance tracking and offline viewing
  • Our current mechanisms for deriving url routes from endpoint actions and the reverse url resolution in FubuMVC today. As far as I know, FubuMVC is the only web framework on any platform that provides reverse url resolution for free without additional work on the user’s part.
  • The “one model in, one model out” architecture for expressing url endpoints — but for Jasper we’re going to support more patterns for cases where the one in, one out signature was annoying
  • The built in conventions that FubuMVC and FubuTransportation support today
  • Jasper will continue to support “meta-conventions” that allow users to create and use their own policies
  • The areas or slices modularity support that we have today with Bottles and FubuMVC, but this has already been simplified to only apply to server side code. Jasper is almost entirely getting out of the client side asset business.
  • Content negotiation, the authorization model, and the lightweight asset support from FubuMVC 2.0 will be optimized somewhat but mostly kept as is.
  • Definitely keep the strong-typed “Settings” pattern for application and framework configuration (there’s a quick sketch of this and the “one model in, one model out” signature just after this list).
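
As a quick illustration of the “Settings” pattern and the “one model in, one model out” signature mentioned in the list above, here’s a rough sketch. The class and property names are invented for this example and aren’t pulled from any real application:

// Configuration expressed as a plain class (the "Settings" pattern). The framework
// registers settings classes with the container so they can be injected anywhere.
public class ImportSettings
{
    public int BatchSize { get; set; }
}

public class ImportRequest
{
    public int ImportId { get; set; }
}

public class ImportResponse
{
    public bool Accepted { get; set; }
}

public class ImportEndpoint
{
    private readonly ImportSettings _settings;

    // The settings object is injected like any other dependency
    public ImportEndpoint(ImportSettings settings)
    {
        _settings = settings;
    }

    // "One model in, one model out" -- no base class, no attributes, no framework types
    public ImportResponse Post(ImportRequest request)
    {
        return new ImportResponse
        {
            Accepted = request.ImportId > 0 && _settings.BatchSize > 0
        };
    }
}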



First, gut the existing code

Like I said in the beginning, my wife loves HGTV fixer-upper shows about remodeling houses. A lot of those episodes invariably include contractors tearing into an old house, finding all kinds of unexpected horrors lurking behind the drywall, and ripping out outdated 70’s shag carpet. Likewise, I’ve spent the last month or so at work ripping a lot of those 70’s-shag-carpet features and code out of FubuMVC. I’m ostensibly making an intermediate FubuMVC 3.0 release that we’ll use internally at work until next year when Jasper is ready and the dust has settled enough on DNX, but I’ve also taken advantage of the time to clean as much junk out of the codebase as possible before transforming FubuMVC into Jasper.

The main goal of this release was to consolidate all the FubuMVC-related code that will survive into Jasper into a single GitHub repository. As secondary goals, I’ve also streamlined the application bootstrapping, removed a net of about 10k lines of code, and I’ll be working this coming week on performance instrumentation inside FubuMVC’s diagnostics and some of the test automation support.


Consolidate and Simplify the Code Topology

FubuMVC’s ecosystem of add-on projects and spun-off tooling became massive, spread across roughly 75 GitHub repositories at its peak. FubuMVC had, in my obviously biased opinion, a very effective architecture for modularity that led us to get a little too slaphappy about splitting features into separate assemblies and nugets. Doing development across related repositories, though, turned out to be a huge source of friction for us, and no matter how much DNX may or may not improve that experience, we’re never going to try to do that again. In that vein, I’ve spent much of the past several weeks consolidating the codebase into far fewer libraries. Instead of just dropping assemblies into the application to auto-magically add new behavior or features to a FubuMVC application, those features now ride with the core library and users have to explicitly opt into them. I liked the “just drop the assembly file in” plugin abilities, but others prefer the explicit code. I’m not sure I have a strong opinion right now, but fewer repositories, libraries, and nugets definitely makes my life easier as the maintainer.
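
As a purely hypothetical sketch of what “explicitly opt in” means in practice, feature activation now happens in code in the application’s registry rather than by dropping an assembly into the bin folder. The exact method and setting names below may not match the real FubuMVC 3.0 API; they’re only meant to show the shape of the idea:

using FubuMVC.Core;

public class MyApplicationRegistry : FubuRegistry
{
    public MyApplicationRegistry()
    {
        // Hypothetical opt-in calls, shown for illustration only -- the point is
        // that features have to be switched on explicitly in application code
        ServiceBus.Enable(true);
        AlterSettings<DiagnosticsSettings>(x => x.TraceLevel = TraceLevel.Verbose);
    }
}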

For previous FubuMVC users, I combined:

  • FubuTransportation, FubuMVC.Authentication, FubuMVC.AntiForgery, FubuMVC.StructureMap, and FubuMVC.Localization into FubuMVC.Core
  • FubuMVC.Diagnostics was combined into FubuMVC.Core as part of the 2.1 release earlier this year
  • FubuPersistence and FubuTransportation.RavenDb were combined into FubuMVC.RavenDb
  • All the Serenity add ons were collapsed into Serenity itself
  • Some of Bottles was folded into FubuMVC and the rest thrown away. More on that later.


StructureMap Only For Now

FubuMVC, like many .Net frameworks, had some abstractions to allow the tool to be used with multiple IoC containers. I was never happy with our IoC registration abstraction model, but our biggest problem was that FubuMVC was built primarily against StructureMap and its abilities and assumptions about open generic types, enumerable types, and lifecycle management, and that made it very difficult for us to support other IoC containers. In addition to StructureMap, we fully supported Autofac and got *this* close with Windsor, but I’m not aware of anyone using a container other than StructureMap with FubuMVC in a production application.

As of a couple weeks ago, I demolished the IoC abstractions in favor of just having FubuMVC use StructureMap directly. That change allowed me to throw away a lot of code and unit tests, eliminate a couple assemblies, remove some nontrivial duplication in reflection handling code between StructureMap and FubuMVC, and simplify the application bootstrapping.
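
For a sense of what “just use StructureMap directly” looks like, here’s a minimal sketch of a registry and container bootstrap written against plain StructureMap. The IClock/SystemClock types are placeholder names for this example:

using System;
using StructureMap;

public interface IClock
{
    DateTime Now { get; }
}

public class SystemClock : IClock
{
    public DateTime Now { get { return DateTime.UtcNow; } }
}

public class ApplicationRegistry : Registry
{
    public ApplicationRegistry()
    {
        // Conventional registration straight from StructureMap, no adapter layer
        Scan(x =>
        {
            x.TheCallingAssembly();
            x.WithDefaultConventions();
        });

        // An explicit registration for the placeholder service above
        For<IClock>().Use<SystemClock>();
    }
}

public static class Bootstrapper
{
    public static IContainer Build()
    {
        // The framework now builds and consumes the StructureMap container directly
        return new Container(new ApplicationRegistry());
    }
}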

In the longer run, if we decide to once again support other IoC containers, my thought is that Jasper itself will use StructureMap’s registration model and we’ll just have that model mapped into whatever the other IoC container is at bootstrapping time. I know we could support Autofac and probably Windsor. Ninject and SimpleInjector are definitely out. I don’t have the slightest clue about a Unity adapter or the other 20 or so .Net IoC tools out there.

The new IoC integration in ASP.Net MVC6 is surprisingly similar to FubuMVC’s original IoC integration in many respects, and I think it is very likely to run into the exact same problems that we did in FubuMVC (some of the IoC containers out there aren’t usable with MVC6 as it is, and their project maintainers aren’t happy with the ASP.Net team). That’s a subject for another blog post on another day though.


Backing off of Server Side Rendering

I know not everyone is on board the single page application train, but it’s been very obvious to me over the past 2-3 years that server side html generation is becoming much less important as more teams use tools like Angular.js or React.js for client side development while using FubuMVC mostly to expose Json over Http endpoints. We had some very powerful features in FubuMVC for server side html generation, but the world has moved on and these features, among others, have been removed:

  • Html conventions – FubuMVC supported user-defined conventions for generating forms, labels, editors, and displays based on the signature of view models, built around the HtmlTags library (there’s a tiny HtmlTags sample just after this list). While I think that our html convention support was technically successful, it’s no longer commonly used by our teams and I’ve removed it completely from FubuMVC.Core. Jimmy Bogard has pulled most of the convention support into HtmlTags 3.0 so that you can use the html convention generation in projects that don’t use FubuMVC. I’ve been surprised by how well the new TagHelpers feature in MVC6 has been received by the .Net community. I feel like our old HtmlTags-based conventions were much more capable than TagHelpers, but I still think that the time for that kind of functionality has largely passed.
  • Content extensions — a model we had early on in FubuMVC to insert customer specific markup into existing views without having to change those view files. It was successful, but it’s no longer used and out it goes.
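
For anyone who hasn’t seen the HtmlTags library itself (independent of any FubuMVC conventions), here’s a tiny example of building markup programmatically. The tag structure below is just an illustration:

using System;
using HtmlTags;

public static class HtmlTagsSample
{
    public static void Main()
    {
        // Build a label + input pair programmatically with HtmlTags
        var group = new HtmlTag("div")
            .AddClass("form-group")
            .Append(new HtmlTag("label").Text("Name").Attr("for", "Name"))
            .Append(new HtmlTag("input").Attr("type", "text").Attr("name", "Name"));

        Console.WriteLine(group.ToString());
    }
}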





Retooling Build and Test Automation Tools

tl;dr: I had a convention-based build automation approach built on Rake that we used across the FubuMVC projects and that I was proud of, but the world moved on and I’ve been replacing it with newer and hopefully better tools like Fixie, Paket, and gulp.js.

What I do today with FubuRake

I’ve generally used Rake as my build scripting tool for the last 7-8 years, even on .Net projects, and I’ve been mostly happy with it. In a grandiose attempt to simplify the build scripts across the FubuMVC ecosystem, make all the homegrown build tools we’d created easier to adopt, and support a Ruby on Rails style “fubu new” approach to bootstrapping new FubuMVC projects, I created the FubuRake library as an addon to Rake. In its simplest form, a complete working build script using FubuRake can look like this:

require 'fuburake'

FubuRake::Solution.new do |sln|
	sln.assembly_info = {
		:product_name => "FubuCore",
		:copyright => 'Copyright 2008-2015...'
	}
end

That simple script above generated rake tasks for:

  1. Generating a “CommonAssemblyInfo.cs” file to embed semantic version numbers, CI build numbers, and git commit numbers into the compiled assemblies
  2. Compiling code
  3. Fetching, building, and publishing nugets with Ripple
  4. Running unit tests with NUnit
  5. Tasks to interact with our FubuDocs tool for documentation generation (and yes, I spent way more time building our docs tool than writing docs)
  6. Tasks to create embedded resource files in csproj files as part of our Bottles strategy for modularizing large web applications

If you typed rake -T to see the task list for this script you’d see this output:

rake ci                # Target used for the CI server
rake clean             # Prepares the working directory for a new build
rake compile           # Compiles the solution src/FubuCore.sln
rake compile:debug     # Compiles the solution in Debug mode
rake compile:release   # Compiles the solution in Release mode
rake default           # **Default**, compiles and runs tests
rake docs:bottle       # 'Bottles' up a single project in the solution with...
rake docs:run          # Runs a documentation project hosted in FubuWorld
rake docs:run_chrome   # Runs the documentation projects in this solution i...
rake docs:run_firefox  # Runs the documentation projects in this solution i...
rake docs:snippets     # Gathers up code snippets from the solution into th...
rake ripple:history    # creates a history file for nuget dependencies
rake ripple:package    # packages the nuget files from the nuspec files in ...
rake ripple:publish    # publishes the built nupkg files
rake ripple:restore    # Restores nuget package files and updates all float...
rake ripple:update     # Updates nuget package files to the latest
rake sln               # Open solution src/FubuCore.sln
rake unit_test         # Runs unit tests for FubuCore.Testing
rake version           # Update the version information for the build

As you’ve probably inferred, FubuRake depended very heavily on naming and folder layout conventions for knowing what to do and how to build out its tasks like:

  • Compile the one and only *.sln file under the /src directory using MSBuild with some defaults (.Net version = 4.0, compile target = Debug)
  • Run all the NUnit tests in folders under /src that end in *.Tests or *.Testing. In the case above, that meant “FubuCore.Testing”
  • Build and publish all the *.nuspec files found in /packaging/nuget

You could, of course, explicitly override any of the conventional behavior. Any tool using conventions almost has to have an easy facility for overriding or breaking out of the conventions for one-off cases. It was fine and great as long as you mostly stayed inside our idioms for project layout and you were okay with using NUnit and Ripple. All in all, I’d say that FubuRake was a mild technical success, but time has passed it by. The location of MSBuild changed in .Net 4.5, breaking our msbuild support. Ripple was always problematic and it was frequently hard to keep up with Nuget features and changes. FubuDocs was a well-intentioned thing, but the support for it that got embedded into FubuRake has file locking issues that sometimes forced me to shut down VS.Net in order to run the build. If FubuRake is Roland in Stephen King’s Dark Tower novels, I’d say that the world has moved on.

Time for the world to move on

I’ve been quiet on the blogging and even the Twitter front lately because I’ve been working very hard on a near rewrite of our Storyteller tool (more on this soon) that we use at work for executable specifications and integration testing. The new work includes a lot of performance-driven improvements in the existing .Net code and a brand new user interface written as a Javascript single page application. At the end of last week my situation looked like this:

  • I needed to start publishing pre-release nugets and we only do that from successful CI builds
  • Our TeamCity CI build was broken because of some kind of problem with our Ripple tool that we use for Nuget management.
  • I had started the new client in a completely separate Github repository and had been using a git submodule to include the Javascript code within the Storyteller .Net code repository for full integration
  • I had no effective automation to attach the submodule if it was missing, do the initial npm install, or handle any of the other hidden things a developer would have to do before being able to work with the code. In other words, my “time to login screen” metric was terrible right when I’d love to have other developers start contributing to the project with feedback or patches.
  • I’ve wanted to upgrade or replace several pieces of the build and test automation tools I use in my OSS projects for quite some time anyway — especially where there were opportunities to replace homegrown tools I no longer want to support in favor of actively maintained OSS projects.
  • We have a heavy investment in Rake and Ruby for build automation both at work and on OSS projects. Now that we have so much dependency on Node.js tools for client asset work, we’ve gotten into this situation where project build scripts may include installing gems, nugets, and npm packages and it’s becoming a problem of technology overload and build times.

As a result, I’ve spent the last couple days swapping out tools and generally trying to make the new build automation a lot easier to use for other people. I’ve merged the client javascript code into the old Storyteller repository to avoid the whole git submodule mess. I created a new build script that does everything necessary to get both the Javascript and C# code ready for development work. And lastly, I took the very unusual step (for me) of trying to document how to use the code in a readme. All told I:

  • Replaced our old NUnit-based SpecificationExtensions, which I ripped off from Scott Bellware ages ago, with Shouldly (a quick example follows this list). I chose Shouldly because I liked how terse it is, it was an easier switch than Should or Fluent Assertions would have been, and I love their error messages. It went smoothly.
  • Replaced Ripple with Paket for managing Nuget dependencies. Ripple is effectively dead, I didn’t want to invest any more time in it, and Paket has a strong, active community right now. Using Nuget out of the box is completely out of the question for me, so Paket was really the only alternative. So far, so good. I hit a few quirks with Paket using multiple feeds, but quickly learned to just be very particular about version numbers. The proof for me will be when we start using Paket across multiple upstream and downstream repositories like we did with our “floating” nuget dependencies in Ripple. It’s my hope that tools like Paket or Ripple won’t be necessary in ASP.Net vNext, but we’ll see.
  • Built a small gulp.js script to compile the .Net code and run the C# unit tests. I leaned heavily at first on Mike O’Brien’s blog post about building .Net projects with gulp. It didn’t go as smoothly as I’d like and I partially retreated to doing some things in npm scripts instead. I think this is definitely something I’m going to watch and reconsider moving forward. EDIT 3/25: I’ve already given up on gulp.js for the .Net build and I’m in the process of just using simple Javascript files called from NPM as a replacement. Oh well, no harm done.
  • Completely replaced NUnit with Fixie, but used Fixie’s ability to emulate NUnit’s attributes and behavior as a temporary measure. The FubuMVC team has wanted to do this for quite a while because of Fixie’s crazy flexible feature set and performance — and this was the perfect opportunity to finally pull that off. I’m very happy with how this has turned out so far and I’m even seeing the promised performance improvement with the test suite consistently taking 75% of the runtime that NUnit was taking on my machine. Going forward, I’m going to look to slowly remove the NUnit attributes in favor of the cleaner Fixie idioms. I’m happy with Fixie right off the bat.
  • Finally, to make sure it all “just works,” I created an overarching “npm run build” script with a matching “build.cmd” shortcut to do everything you need to do to build both the Javascript and C# code and pull down all of the various dependencies. So now, if you do a fresh clone of the Storyteller code and you already have both .Net 4.5 and Node.js >= v10 installed, you should be able to execute “npm run build” and go straight to work. And I’ll keep patching things until that’s certified to be true by other developers;)
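
To give a small taste of the Shouldly switch mentioned in the list above, assertions now read like the sketch below. The test class here is invented and the failure message wording in the comment is only approximate:

using System;
using NUnit.Framework;
using Shouldly;

public class CalculatorTests
{
    // Fixie is emulating the old NUnit attributes for now, so [Test] stays put;
    // the assertions themselves come from Shouldly
    [Test]
    public void adding_two_numbers()
    {
        var result = 2 + 3;

        // Fails with a message along the lines of "result should be 6 but was 5"
        result.ShouldBe(5);

        Should.Throw<DivideByZeroException>(() =>
        {
            var zero = 0;
            Console.WriteLine(1 / zero);
        });
    }
}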

In case you’re wondering, the Javascript code is built with Webpack through npm scripts that also delegate to mocha and karma for testing. Ironically, I’m using gulp.js to build the .Net code but not the Javascript code, but that’s a subject for someone else to blog;)

More on Replacing Rake with Gulp

I personally like Ruby better than Javascript, especially for the kind of scripting you do for build and test automation. From time to time I still see folks wishing that the browsers would adopt Ruby as the embedded scripting language instead of Javascript, but that metaphorical ship has long since sailed and I think more developers are going to be familiar with JS than Ruby.

I gave some thought to just trying to use pure console tools for the build automation, and some of my coworkers want to investigate using make, but there’s just enough programmatic manipulation here and there (picking up build versions and arguments from the CI server, for example) that I wanted to retain some kind of scripting language. I put this very question of replacing Rake to the StructureMap community and got myriad suggestions: F# based tools, C# tools using scriptcs, and Powershell based tools all made an appearance. My stance at this point is that the build script is best done with a low ceremony scripting language, preferably one that’s commonly understood so as not to be a barrier to entry. As much as I liked Rake personally, Ruby was a problem for many of our .Net developers. By choosing a Javascript based tool, we’re investing in what’s arguably the most widely used programming language going. And while this also forces developers to have a working installation of Node.js on their box, I think that’s going to be pretty common anyway.