Technology-wise, everything in this blog post is related to OSS projects you’re very unlikely to ever use (FubuMVC and Storyteller 3), but I’d hope that the techniques and ideas will still be useful in whatever technical stack you happen to be using. At a minimum, I’m hoping this blog post is useful to some of our internal teams who have to wrestle with the tooling and the testing scenarios described here. And who knows, we still might try to make a FubuMVC comeback (renamed as “Jasper”) if the dust ever does settle on .Net Core and ASP.Net Core.
We heavily utilize message-based architectures with service bus tools at work, and we’ve also invested somewhat in automated testing against those message-based integrations. Automating tests against distributed systems with plenty of asynchronous behavior has been challenging. We’ve settled on a couple of key aspects of our architectures and technical tooling that I feel have made our automated testing efforts around message-based architectures mechanically easier and more reliable, and that even create some transparency into how the system actually behaves.
My recipe for automating tests against distributed, messaging systems is:
- Try to choose tools that allow you to “inject” configuration programmatically at runtime (see the sketch just after this list). As I’ll try to make clear in this post, a tight coupling to the .Net configuration file can almost force you into swallowing the extra complexity of separate AppDomains.
- Favor tools that require minimal steps to deploy. Ideally, I love using tools that allow for full-blown xcopy deployment, at least in functional testing scenarios.
- Try very hard to be able to run all the various elements of a distributed application in a single process because that makes it so much simpler to coordinate the test harness code with all the messages going through the greater system. It also makes debugging failures a lot simpler.
- At worst, have some very good automated scripts that can install and teardown everything you need to be running in automated tests
- Make sure that you can cleanly shut your distributed system down between tests to avoid having friction from locked resources on your testing box
- Have some mechanism to completely rollback any intermediate state between tests to avoid polluting system state between tests
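To make that first bullet a bit more concrete, here’s a minimal sketch (the BusSettings class and the names in it are made up for illustration) of what “injecting” configuration programmatically can look like, so each test run picks its own ports and queue folders instead of fighting over a single .config file:

```csharp
using System;
using System.IO;

// Hypothetical settings class -- the point is that the test harness builds it
// in code for each run instead of reading it from a shared app.config
public class BusSettings
{
    public Uri Incoming { get; set; }
    public string QueuePath { get; set; }
}

public static class TestNodeSettings
{
    public static BusSettings For(string nodeName, int port)
    {
        // Each node gets its own port and queue directory so that repeated
        // or parallel test runs never collide over shared resources
        return new BusSettings
        {
            Incoming = new Uri($"lq.tcp://localhost:{port}/{nodeName}"),
            QueuePath = Path.Combine(Path.GetTempPath(), "queues", nodeName)
        };
    }
}
```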
Repeatable Application Bootstrapping and Teardown
The first step in automating a message-based system is simply being able to completely and reliably start the system with all the related publisher and subscriber nodes that need to participate in the automated test. You could take a black box testing approach, deploying the system under test and all of its various services by scripting the installation and startup of Windows services or *nix daemon processes, or by programmatically launching other processes from the command line to host the various distributed elements.
Personally, I’m a big fan of using whitebox testing techniques to make test automation efforts much more efficient. My strong preference is to adopt architectures that make it possible for the test automation harness itself to bootstrap and start the entire distributed system, rather than depending on external scripts to deploy and start the application services. In practice, this has meant running all of the normally distributed elements collapsed down into the single test harness process.
Our common technical stack at the moment is:
- FubuMVC 3 as both our web application framework and as our service bus
- Storyteller 3 as our test automation harness for integrated acceptance testing
- LightningQueues as a “store and forward” persistent queue transport. Hopefully you’ll hear a lot more about this tool when we’re able to completely move off of Esent and onto LightningDB as the underlying storage engine.
- Serenity as an add-on library as a standard recipe to host FubuMVC applications in Storyteller with diagnostics integration
In all cases, these tools can be programmatically bootstrapped in the test harness itself. LightningQueues being “x-copy deployable” as our messaging transport has been a huge advantage over queueing tools that require external servers or installations. Additionally, we have programmatic mechanisms in LightningQueues to delete any previous state in the persistent queues to prevent leftover state bleeding between automated tests.
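I won’t show the real LightningQueues calls here, but the idea behind that cleanup is roughly the sketch below, which just assumes the persistent queues live under a known directory on disk:

```csharp
using System.IO;

public static class QueueReset
{
    // A hedged sketch of the "delete any previous state" idea: wipe whatever
    // the persistent queues left on disk so no old messages bleed into this run
    public static void ClearPersistedQueues(string queueDirectory)
    {
        if (Directory.Exists(queueDirectory))
        {
            Directory.Delete(queueDirectory, recursive: true);
        }

        Directory.CreateDirectory(queueDirectory);
    }
}
```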
After a lot of work last year to streamline the prior mess, a FubuMVC web and/or service bus node application is completely described with a FubuRegistry class (here’s a link to an example), conceptually similar to the “Startup” concept in the new ASP.Net Core.
Having the application startup and teardown expressed in a single place makes it easy for us to launch the application from within a test harness. In our Serenity/Storyteller projects, we have a base “SerenitySystem” class we use to launch one or more applications before any tests are executed. To build a new SerenitySystem, you just supply the name of a FubuRegistry class of your application as the generic argument to SerenitySystem like so:
```csharp
public class TestSystem : SerenitySystem<WebsiteRegistry>
```
That declaration above is actually enough to tell Storyteller how to bootstrap the application defined by “WebsiteRegistry.”
It’s not just about bootstrapping applications, it’s also about being able to cleanly shut down your distributed application. By cleanly, I mean that you release all system resources like IP ports or file locks (looking at you Esent) that could prevent you from being able to quickly restart the application. This becomes vital anytime it takes more than one iteration of code changes to fix a failing test. My consistent experience over the years is that the ability to quickly iterate between code changes and executing a failing test is vital for productivity in test automation efforts.
We’ve beaten this problem through standardization of test harnesses. The Serenity library “knows” to dispose the running FubuMVC application when it shuts down, which will do all the work of releasing shared resources. As long as our teams use the standard Serenity test harness, they’re mostly set for clean activation and teardown (it’s an imperfect world and there are *always* complications).
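Stripped down to its essence, that standardization is just the pattern sketched below: whatever bootstraps the FubuRuntime also owns its disposal (the fixture class name here is invented; Serenity does this part for you):

```csharp
using System;

public class ApplicationUnderTest : IDisposable
{
    private readonly FubuRuntime _runtime;

    public ApplicationUnderTest()
    {
        // Bootstrap the application exactly as it runs in production
        _runtime = FubuRuntime.For<WebsiteRegistry>();
    }

    public void Dispose()
    {
        // Releases IP ports, file locks, and other shared resources so the
        // application can be restarted immediately for the next test run
        _runtime.Dispose();
    }
}
```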
Bootstrapping the Entire System in One Process
Ideally, I’d prefer to get away with running all of the service bus nodes in a single AppDomain. Here’s a pretty typical case for us: say you have a web application defined by a “WebRegistry” class that communicates bi-directionally with a headless service bus application defined by a “BusRegistry” class that normally runs in a Windows service:
```csharp
// FubuTransportRegistry is a special kind of FubuRegistry
// specifically to help configure service bus applications.
// BusSettings would hold configuration data for BusRegistry
// (the generic arguments shown below are assumed)
public class BusRegistry : FubuTransportRegistry<BusSettings>
{
}

public class WebRegistry : FubuTransportRegistry<BusSettings>
{
}
```
To bootstrap both the website application and the bus registry in a single AppDomain, I could use code like this:
```csharp
public static void Bootstrap_Two_Nodes()
{
    var busApp = FubuRuntime.For<BusRegistry>();

    var webApp = FubuRuntime.For<WebRegistry>(_ =>
    {
        // WebRegistry would normally run in full IIS,
        // but for the test harness we tend to use
        // Katana to run all in process
        _.HostWith<Katana>();
    });

    // Carry out tests that depend on both
    // busApp and webApp
}
```
This is the ideal, and I hope to see us use this pattern more often, but it’s often defeated by tools that have hard dependencies on .Net’s System.Configuration that may cause configuration conflicts between running nodes. Occasionally we’ll also have conflicts between assembly versions across the nodes or hit cases where we cannot have a particular assembly deployed in one of the nodes.
In these cases we resort to using a separate AppDomain for each service bus application. We’ve built this pattern into Serenity itself to standardize the approach with what I named “Remote Systems.” For an example from the FubuMVC acceptance tests, we execute tests against an application called “WebsiteRegistry” running as the main application in the test harness, and open a second AppDomain for a testing application called “ServiceNode” that exchanges messages with “WebsiteRegistry.” In the Serenity test harness, I can just declare the second AppDomain for “ServiceNode” like so:
```csharp
public class TestSystem : SerenitySystem<WebsiteRegistry>
{
    public TestSystem()
    {
        // Running the "ServiceNode" app in a second AppDomain
        AddRemoteSubSystem("ServiceNode", x =>
        {
            x.UseParallelServiceDirectory("ServiceNode");
            x.Setup.ShadowCopyFiles = false.ToString();
        });
    }
}
```
The code above directs Serenity to start a second application found at a directory named “ServiceNode” that is parallel to the main application. So if the main application is hosted at “src/FubuMVC.IntegrationTesting”, then “ServiceNode” is at “src/ServiceNode.” The Serenity harness is smart enough to figure out how to launch the second AppDomain pointed at this other directory. Serenity can also bootstrap that application by scanning the assemblies in that AppDomain to find a class that inherits from FubuRegistry — very similar to how ASP.Net Core is going to use the Startup class convention to bootstrap applications.
The biggest problem now is generally in dealing with the asynchronous behavior across the different AppDomains and “knowing” when it’s safe for the test harness to start checking for the outcome of the messages that are processed within the “arrange” and “act” portions of a test. In a following post, I’ll talk about the tooling and technique we use to coordinate activities between the different AppDomains.
This All Changes in .Net Core
AppDomains and .Net Remoting both go away in .Net Core. While no one is really that disappointed to see those features go, I think their removal is going to set back the state of test automation tools in .Net for a little while because almost every testing tool uses AppDomains in some fashion. For our part, my thought is that we’d move to launching new processes for the additional service bus nodes in testing.
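That fallback is nothing fancy, just System.Diagnostics.Process; a rough sketch (the “dotnet run” invocation and the directory convention are assumptions, not our actual tooling) would be:

```csharp
using System.Diagnostics;

public static class RemoteNodes
{
    // Launch a service bus node as a separate OS process instead of a
    // second AppDomain
    public static Process StartNode(string serviceDirectory)
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = "dotnet",
            Arguments = "run",
            WorkingDirectory = serviceDirectory,
            UseShellExecute = false,
            RedirectStandardOutput = true
        };

        return Process.Start(startInfo);
    }
}
```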
I know there are also some plans for an “AssemblyLoadContext” that sounds to me like a better, more lightweight way of doing the kind of assembly loading sandboxes for testing that are only possible with separate AppDomains in .Net today. Other than rumors and the occasional cryptic hint from members of the ASP.Net team, though, there’s basically no information about what AssemblyLoadContext will be able to do.
The new “Startup” mechanism in .Net Core might serve in the exact same fashion that our FubuRegistry does today in FubuMVC. That should make it much easier to carry out this strategy of collapsing distributed or microservice applications down into a single process.
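For reference, that Startup convention looks roughly like this generic sketch (not our code), and it plays the same “describe the whole application in one place” role that a FubuRegistry does for us today:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Register the application's services in one place
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Define the HTTP pipeline in one place
        app.UseMvc();
    }
}
```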
I’m also hopeful that the improved configuration mechanisms in ASP.Net Core (is this in .Net Core itself? I really don’t know) will help here. The tight coupling of certain libraries we use to the existence of that single app.config/web.config file today is a headache and the main reason we’re sometimes forced into the more complicated, multiple AppDomain approach.
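The appeal is being able to compose configuration in code per node, something like this sketch of the Microsoft.Extensions.Configuration API (the keys and values here are invented):

```csharp
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

public static class NodeConfiguration
{
    // Build configuration programmatically for a single hosted node instead
    // of relying on the one-and-only app.config/web.config per AppDomain
    public static IConfigurationRoot For(string nodeName, int port)
    {
        return new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: true)
            .AddInMemoryCollection(new Dictionary<string, string>
            {
                // The test harness can override settings for each node it hosts
                ["Bus:Incoming"] = $"lq.tcp://localhost:{port}/{nodeName}"
            })
            .Build();
    }
}
```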
What’s Next
This blog post went a lot longer than I anticipated, so I cut it in half. Next time up I’ll talk about how we coordinate the test harness timing with the various messaging nodes and how we expose application diagnostics in automated test outputs to help understand what’s actually happening in your application when things fail or just run too slowly.