Introducing Jasper — Asynchronous Messaging for .Net

For my take on when you would use a tool like Jasper, see How Should Microservices Communicate?

“Jasper” is the codename for a new messaging and command execution framework that my shop has been building out to both integrate with and eventually replace our existing messaging infrastructure as we migrate applications to Netstandard 2.0, ASP.Net Core, and yes, adopt a microservices architecture. While we’ve been working very hard on it over the past 6 months, I’ve been hesitant to talk about it too much online. That ends today with the release of the first usable alpha (0.5.0) to Nuget.

We’ve already done a great deal of work and it’s fairly feature rich, but I’m really just hoping to start drumming up some community interest and getting whatever feedback I can. Production-worthy versions of Jasper should hopefully be ready this spring.

Okay, so what problems does it solve over just using queues?

It’s ostensibly about NServiceBus (for right now, let’s call that the most similar competitor to Jasper), but Sure, You Can Just Use RabbitMQ sums up the answer perfectly: Jasper already supports a great deal of functionality above and beyond simple queueing.

Why would you want to use Jasper over [fill in the blank tool]?

I hate this part of talking about any kind of OSS activity or choice, but I know it’s coming, so let’s get to it:

  • Jasper’s execution pipeline is leaner than any other .Net framework that I’m aware of, and we’re theorizing that this will lead to Jasper having better performance, less memory utilization, less GC thrashing, and easier to understand stacktraces than other .Net frameworks. My very next blog post will be showing off our “special sauce” usage of Roslyn code generation and runtime compilation that makes this all possible.
  • Jasper requires much less coupling from your application code to the framework, especially compared to the typical .Net framework that requires you to implement its own interfaces or base classes, tangle your code with fluent interfaces, or spray attributes all over your code. Some of you aren’t going to like that, but my preference is always cleaner code. There’s plenty of room in the world for both of us;)
  • It’s FOSS
  • Jasper plays nicely with ASP.Net Core and even comes with recipes for quick integration into ASP.Net Core applications

Ping/Pong Hello World

The obligatory “hello, world” project in messaging is to send a “ping” message from one service to another, with the expectation that the receiving system will send back a “pong.” So let’s start by saying we have a couple message types like these:

    public class PingMessage
    {
        public string Name { get; set; }
    }

    public class PongMessage
    {
        public string Name { get; set; }
    }

Note: Jasper does not require you to share .Net types between systems, but it’s the easiest way to get started so here you go.

Starting with the “Ponger” service (if the code is cut off in the blog post, it’s all in this project on GitHub), just follow these steps:

  1. “dotnet new console” to create a new console app
  2. Add a Nuget reference to Jasper.CommandLine, which will bring in the core Jasper Nuget as well

From there, the entire “Ponger” service is the following code:

    class Program
    {
        static int Main(string[] args)
        {
            return JasperAgent.Run(args, _ =>
            {
                _.Logging.UseConsoleLogging = true;

                _.Transports.LightweightListenerAt(2601);
            });
        }
    }

    public class PingHandler
    {
        public object Handle(PingMessage message)
        {
            ConsoleWriter.Write(ConsoleColor.Cyan, "Got a ping with name: " + message.Name);

            var response = new PongMessage
            {
                Name = message.Name
            };

            // Send a Pong response back to the original sender
            return Respond.With(response).ToSender();
        }
    }
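
As an aside, Jasper also has the concept of “cascading messages,” where an object returned from a handler method gets sent out through the normal message routing. If you didn’t need to force the response back to the original sender, I believe the handler above could be written even more simply, as in this hedged sketch:

    public class PingHandler
    {
        // The returned PongMessage is treated as a cascading message
        // and routed by whatever publishing rules are configured,
        // rather than being addressed back to the original sender
        public PongMessage Handle(PingMessage message)
        {
            ConsoleWriter.Write(ConsoleColor.Cyan, "Got a ping with name: " + message.Name);

            return new PongMessage { Name = message.Name };
        }
    }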

Now, moving on to the “Pinger” service. Follow the same steps to start a new .Net console project and add a reference to the Jasper.CommandLine Nuget.

From there, we can utilize ASP.Net Core’s support for background services to send a new ping message every second:

    public class PingSender : BackgroundService
    {
        private readonly IServiceBus _bus;

        public PingSender(IServiceBus bus)
        {
            _bus = bus;
        }

        protected override Task ExecuteAsync(CancellationToken stoppingToken)
        {
            int count = 1;

            return Task.Run(async () =>
            {
                while (!stoppingToken.IsCancellationRequested)
                {
                    // Task.Delay instead of Thread.Sleep so we don't
                    // block a thread pool thread between pings
                    await Task.Delay(1000, stoppingToken);

                    await _bus.Send(new PingMessage
                    {
                        Name = "Message" + count++
                    });
                }
            }, stoppingToken);
        }
    }

Next, we need a simple message handler that receives the pong replies and writes the receipt to the console output:

    // Handles the Pong responses
    public class PongHandler
    {
        public void Handle(PongMessage message)
        {
            ConsoleWriter.Write(ConsoleColor.Cyan, "Got a pong back with name: " + message.Name);
        }
    }

Now, there’s a little more work to configure the Pinger application:

    class Program
    {
        static int Main(string[] args)
        {
            return JasperAgent.Run(args, _ =>
            {
                // Way too verbose logging suitable
                // for debugging
                _.Logging.UseConsoleLogging = true;

                // Listen for incoming messages
                // at port 2600
                _.Transports.LightweightListenerAt(2600);

                // Using static routing rules to start
                _.Publish.Message<PingMessage>()
                    .To("tcp://localhost:2601");

                // Just adding the PingSender
                _.Services.AddSingleton<IHostedService, PingSender>();
            });
        }
    }

If I start up the Pinger application with “dotnet run” at the command line, I get output like this:

Running service 'Pinger'
Application Assembly: Pinger, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
Hosting environment: Production
Content root path: /Users/user/code/jasper/src/Pinger/bin/Debug/netcoreapp2.0/
Listening for loopback messages
Listening at tcp://YOURMACHINE:2600/

Active sending agent to ws://default/
Active sending agent to tcp://localhost:2601/
Active sending agent to loopback://replies/
Active sending agent to loopback://retries/
Application started. Press Ctrl+C to shut down.

Which, because the Ponger application hasn’t been started, will start spitting out messages like:

Failure trying to send a message batch to tcp://localhost:2601/

And after enough failures trying to send messages, you’ll finally see this:

Sending agent for tcp://localhost:2601/ is latched

Now, after starting the Ponger service with its own “dotnet run,” you should see the following output back in the Pinger console output after Jasper detects that the downstream system for the Ping messages is available:

Sending agent for tcp://localhost:2601/ has resumed

And finally, a whole bunch of console messages about ping and pong messages zipping back and forth.

Other Common Questions

  • Is this in production yet? Yes, a super, duper early version of Jasper is in production in a low volume system.
  • Is it ready for production usage? No, it’s not really proven out yet in real usage. I think we’ll be able to start converting applications to Jasper this month, so it hopefully won’t be long. The sooner that folks poke and prod it, then supply feedback, the faster it’s ready to go.
  • What .Net frameworks does it support? Netstandard 2.0 only.
  • Where’s the code? On GitHub.
  • Are there any docs yet? Yeah, but there’s plenty of room for improvement on the Jasper website.
  • Where can I ask questions or just make fun of this thing? There’s a Gitter room ready.
  • What about the license? The permissive MIT license.
  • Is it just you? Nope, several of my colleagues have contributed code, ideas, and feedback so far.
  • Do you want more contributors? Hell, yeah. The more the merrier.
  • Why roll your own? See the first section of this post. We’re not starting from scratch by any means, otherwise I don’t think we would have opted to build something brand new.
  • What’s with the boring name? It’s my hometown, it’s easy to remember, and it can’t possibly offend anyone like “fubu” did.
  • What’s the story with IoC integration? Uh, yeah, it’s StructureMap 4.5 only at the moment, but that’s a much longer discussion for another blog post. A huge design goal of Jasper is to try to minimize the usage of an IoC container outside of the initial bootstrapping and application shutdown, so I’m not sure you’re going to care all that much.

What’s next?

I’m actually slowing down on new development on Jasper for awhile, but I’ll be flooding the interwebs with blog posts on Jasper while I also plug holes in the documentation. The next big thing at work is to start the trial conversion of one of our applications to Jasper. For longer term items in our backlog, just see the GitHub issue list. The next development task is probably going to have to be replicating the integration we’ve done with Postgresql and Marten for Sql Server.

RabbitMQ, Kafka, and Azure Service Bus integrations will follow later this year.

My OSS Plans for 2018

I’m going back to work tomorrow after a 2+ week break for the holidays. As quite a bit of my official job and self identity as a developer revolves around developing OSS tools, I’m taking a minute to write up my goals and agenda for the new year.

I’m looking to start pacing myself much better over the next year. I had the brilliant idea last year that I was going to try to sprint through and “finish” all the outstanding OSS work I had on my plate and spend the next year coasting. Long story short, that turned out to be a really bad idea that left me pretty burned out near the end of the year. This year I’m giving up on the idea that any of my OSS tools will ever truly be “done” and treating OSS work like an on-again, off-again long distance race rather than a series of sprints.

So here’s my theoretical OSS work this year:

  • Jasper — My immediate goal is to get an alpha released this week to start seeing if there’s any community interest. Immediately after that, my team will start doing a trial conversion of some of our applications at work to use Jasper. I’m not sure if this is going to be a big, OSS deal like FubuMVC in terms of my effort, or just something we built for work.
  • Marten — I mostly just want to keep the ball rolling and whittle down the open issue list, keeping it under 25 (a single page in GitHub) open issues at any time. There are plenty of new features in the backlog to do this year, but I’d like to avoid any kind of huge effort like the 2.0 release last summer.
  • BlueMilk — This is definitely inconsistent with reducing my workload, but I’m kinda, sorta well underway with pulling the runtime codegen & compilation “special sauce” out of Jasper and into its own library. Oh, and it’s also meant to be a streamlined replacement for StructureMap. Way more on this one later.
  • Storyteller — I actually have a 5.0 alpha published that I’m using personally with some engine improvements and better specification debugging. Depending on time and my ambition level, I’ll either kick that out as is or I might try for a semi-rebuild of the UI to more modern React/Redux usage and possibly try to restyle it from being Bootstrap based to Material UI instead. That’s mostly for the learning experience with client side tooling to keep up with what our development teams face on their projects.
  • StructureMap — I’ve been trying for years to get out of supporting StructureMap. I have no intentions of doing any additional work on StructureMap, but I’ll at least try to keep up on user questions and pull requests.
  • Oakton — I feel like this is “done,” with the possible exception of supporting async commands.
  • Alba — The only thing definite is to adapt an outstanding pull request and bump it to 2.0 and only target ASP.Net Core 2.0. Alba didn’t take off like I thought it would and it’s been a struggle to get any of our internal teams to use it much, so it’s probably not going much farther.
  • FubuMVC — It’s been “dead” for several years as a public OSS project, but I’ve been supporting it and even enhancing it since. My only goal this year with FubuMVC is to make progress within our shop on replacing it with ASP.Net Core MVC on the HTTP side and Jasper on the messaging side.

Subcutaneous Testing against React + .Net Applications

Everything in this post is from a proof of concept project we did for the technique described here. We have not used this tooling on a real project yet, but we have a team starting a project where this might be useful, so I promised a write up for them.

In my previous post I laid out how I see the testing pyramid and test tool and technique choices against my company’s typical web application technology stack. As a reminder, our recommended stack for new development on web applications or API’s looks like this (plus a backing database):

[Slide: our recommended web application technology stack]

Last week I talked through how we might test the React components and Redux store setup, including the interaction between Redux and React. I also talked about how we could go about testing the .Net backend both at a unit level and through integration tests through to the backing database. Lastly, I said we’d use a modicum of end to end, Selenium-based tests, but said that we should avoid depending on too many of those kinds of tests. That leaves us with a pretty big hole in coverage against the interaction between the Javascript code running in the browser and the .Net code and database interactions running server side.

As a possible solution for this gap, my team at work did a proof of concept for using Storyteller to do subcutaneous testing against the full application stack, minus the actual React component “view layer.” The general idea is to use Storyteller with its Storyteller.Redux extension to host the ASP.Net Core application so that it can easily drive test data input through the real data layer of the .Net code, and then turn around and use the real system services to verify the state of the application and the backing database in the “assert” stage of the tests. The basic value proposition is that this mechanism could give a far better ratio of developer effort to benefit than end to end, Selenium based testing. We’re also theorizing that the feedback cycles would be much tighter through faster, more reliable tests than the equivalent tests against the browser ever could be.

A couple things to note or argue:

  • This technique would be most useful if your React components are generally dumb and only communicate with the external world by dispatching well defined actions to the Redux store (I’m assuming that you’re utilizing Redux middleware like redux-thunk or redux-saga here).
  • Why Storyteller as the driver for this instead of another test runner? I’m obviously biased, but I think Storyteller has the very best story in test automation tooling for declarative set up and verification of system state. Plus, unlike any of the xUnit tools I’m aware of, Storyteller is built specifically with integration testing in mind (think configurable retries, bailing out on runaway tests, better control over the lifecycle of the test harness)
  • Storyteller has support for declarative assertions against a JSON document that should be handy for making assertions against the Redux store state
  • We’re theorizing that it’ll be vastly easier to make assertions against the Redux store state than it would to hunt down DOM elements with Selenium
  • The Storyteller.Redux extension subscribes to any changes to the store state and exposes that to the Storyteller test engine. The big win here is that it gives you a single mechanism to handle the dreaded “everything is asynchronous so how does the test harness know when it’s time to check the expected outcomes” problem that makes Selenium testing so dad gum hard in the real world.
  • The Storyteller.Redux extension can capture any messages logged to console.log or console.error in the running browser. Add that to any server side logging that you can also pipe into the Storyteller results.

The general topology in these tests would look like this:

[Slide: the general topology of the subcutaneous tests]

The test harness would consist of:

  1. A Storyteller project that bootstraps the ASP.Net Core application and runs it within the Storyteller test engine. You can use the Storyteller.AspNetCore extension to make that easier (or you could after I update it for ASP.Net Core 2 and its breaking changes).
  2. The Storyteller.Redux extension for Storyteller provides the Websockets glue to communicate between the launched browser with your Redux store and the running Storyteller engine
  3. The Storyteller ISystem in this project has to have some way to launch a web browser to the page that hosts the Javascript bundle. In the proof of concept project, I just built out a static HTML page that included the bundle Javascript and directly launched the browser to the file location, but you could always use Selenium just to open the browser and navigate to the right Url.
  4. Storyteller Fixtures for setting up system state for tests, sending Redux actions directly to the running Redux store to simulate user interactions, asserting on the expected system state on the backend, and checking the expected Redux store state
  5. An alternative Javascript bundle that includes all the reducer and middleware code in your application, along with some “special sauce” code shown in a section down below that enables Storyteller to send messages and retrieve the current state of the running Redux store via Websockets.

The Special Sauce in the Javascript Bundle

Your custom bundle for the subcutaneous testing would need to have this code in its Webpack entry point file (the full file is on GitHub here):

// "store" is your configured Redux store object. 
// "transformState" is just a hook to convert your Redux
// store state to something that Storyteller could consume
function ReduxHarness(store, transformState){
    if (!transformState){
        transformState = s => s;
    }

    function getQueryVariable(variable)
    {
       var query = window.location.search.substring(1);
       var vars = query.split("&");
       for (var i = 0; i < vars.length; i++) {
           var pair = vars[i].split("=");
           if (pair[0] == variable) { return pair[1]; }
       }

       return false;
    }

    var revision = 1;

    var port = getQueryVariable('StorytellerPort');
    var wsAddress = "ws://127.0.0.1:5250";
    var socket = new WebSocket(wsAddress);

    socket.onclose = function(){
        console.log('The socket closed');
    };

    socket.onerror = function(evt){
        console.error(JSON.stringify(evt));
    };

    socket.onmessage = function(evt){
        if (evt.data == 'REFRESH'){
            window.location.reload();
            return;
        }

        if (evt.data == 'CLOSE'){
            window.close();
            return;
        }

        var message = JSON.parse(evt.data);
        console.log('Got: ' + JSON.stringify(message) + ' with topic ' + message.type);

        store.dispatch(message);
    };

    store.subscribe(() => {
        var state = store.getState();

        revision = revision + 1;
        var message = {
            type: 'redux-state',
            revision: revision,
            state: transformState(state)
        }

		if (socket.readyState == 1){
            var json = JSON.stringify(message);
            console.log('Sending to engine: ' + json);
			socket.send(json);
		}
    });

    // Capturing any kind of client side logging
    // and piping that into the Storyteller test results
    var originalLog = console.log;
    console.log = function(msg){
        originalLog(msg);

        var message = {
            type: 'console.log',
            text: msg
        }

        var json = JSON.stringify(message);
        socket.send(json);
    }

    // Capture any logged errors in the JS code
    // and pipe that into the Storyteller results
    var originalError = console.error;
    console.error = function(e){
        originalError(e);

        var message = {
            type: 'console.error',
            error: e
        }

        var json = JSON.stringify(message);
        socket.send(json);
    }
}


ReduxHarness(store, s => s.toJS())

The Storyteller System

In my proof of concept, I connected Storyteller to the Redux testing bundle like this (the real code is here):

    public class Program
    {
        public static void Main(string[] args)
        {
            StorytellerAgent.Run(args, new ReduxSampleSystem());
        }
    }

    public class ReduxSampleSystem : SimpleSystem
    {
        protected override void configureCellHandling(CellHandling handling)
        {
            // The code below is just to generate the static file I'm 
            // using to host the reducer + websockets code
            var directory = AppContext.BaseDirectory;
            while (Path.GetFileName(directory) != "ReduxSamples")
            {
                directory = directory.ParentDirectory();
            }

            var jsFile = directory.AppendPath("reduxharness.js");
            Console.WriteLine("Copying the reduxharness.js file to " + directory);
            var source = directory.AppendPath("..", "StorytellerRunner", "reduxharness.js");


            File.Copy(source, jsFile, true);

            var harnessPath = directory.AppendPath("harness.htm");
            if (!File.Exists(harnessPath))
            {
                var doc = new HtmlDocument();

                var href = "file://" + jsFile;

                doc.Head.Add("script").Attr("src", href);

                Console.WriteLine("Writing the harness file to " + harnessPath);
                doc.WriteToFile(harnessPath);
            }

            var url = "file://" + harnessPath;

            // Add the ReduxSagaExtension and point it at your view
            handling.Extensions.Add(new ReduxSagaExtension(url));
        }
    }

The static HTML file generation above isn’t mandatory. You *could* instead run the real page from the instance of the application hosted within Storyteller, as long as the ReduxHarness function shown above is applied to your Redux store at some point.

Storyteller Fixtures that Drive or Check the Redux Store

For driving and checking the Redux store, we created a helper class called ReduxFixture that enables you to do simple actions and value checks in a declarative way as shown below:

    public class CalculatorFixture : ReduxFixture
    {
        // There's a little bit of magic here. This would send a JSON action
        // to the Redux store like {"type": "multiply", "operand": "5"}
        [SendJson("multiply")]
        public void Multiply(int operand)
        {

        }

        // Does an assertion against a single value within the current state
        // of the redux store using a JSONPath expression
        public IGrammar CheckValue()
        {
            return CheckJsonValue("$.number", "The current number should be {number}");
        }

    }

You can of course skip the built in helpers and send JSON actions directly to the running browser or write your own assertions against the current state of the Redux store. There’s also some built in functionality in the ReduxFixture class to track Redux store revisions and to wait for any change to the Redux store before performing assertions.

OT: My personal ranking of the Star Wars movies

I think I need to see The Last Jedi at least a couple more times to be certain (my son & I loved it), but I see these rankings popping up everywhere and here’s my list. Cue the comic book guy voice…

  1. Empire Strikes Back — This is still a no brainer for the huge reveal, the constant feeling of tension during the escape through the asteroid field, the Hoth battle, and Yoda. My all time favorite movie experience was seeing this as a 6yo. We couldn’t get tickets for the early show, so my parents took me to play mini golf and my first trip to Taco Bell to pass time before the movie. I can’t even begin to tell you how cool that was to see that as a late movie. I told my Dad about how much I remember that night a couple years ago. He looked at me funny for a second and said all he remembered was having to dig through the car seats to find enough loose change to pay for the night.
  2. A New Hope — C’mon, you just can’t beat the one that started it all. Remember too, this was actually a little better movie before the prequels kind of ruined the back story of Vader and Obi Wan. My second favorite movie experience was seeing the original movie at the Webb City drive in a couple summers later with a cooler of grape Welch’s soda (that sounds nasty now, but as a kid…)
  3. The Last Jedi — No spoilers, but I thought it was great overall. I get the criticism that maybe it dragged a bit in the middle, but there were several good scenes in the middle too. I thought there were definitely callbacks to Empire Strikes Back, but the outcomes were very different and sometimes unexpected. It didn’t feel as derivative as The Force Awakens. Really surprised by how good Mark Hamill was in the movie. My daughter is only 8 mos old, but there’s definitely going to be a year she goes as Rey for Halloween
  4. Rogue One — The last third of it is the best battle sequence in the whole series. I’m nerdy enough that I enjoyed spotting all the easter eggs. Loved Alan Tudyk as the droid, but he’s still “Wash” to me.
  5. The Force Awakens — Loved it, just liked Rogue One and the new movie a little better. My favorite scene was the initial reveal of the Millennium Falcon.
  6. Return of the Jedi — This would have been a better movie if he’d stayed with the Wookiees instead of the Ewoks, but oh well. It was a blast in the theater at the time.
  7. Revenge of the Sith — There were a handful of action scenes that were good. Maybe less of the super annoying dialogue than the other prequels.
  8. Attack of the Clones — Actually going to say that this was a much better movie in the IMAX release when they had to cut a lot of the “Anakin whines” dialogue.
  9. The Phantom Menace — Duel of the Fates and I still like the drag racing scene. The dialogue was atrocious and the plot was weak. I remember reading spoilers online before it came out about the Midi-chlorians and thinking that was so stupid that it couldn’t possibly be true, but there it was.

Automated Test Pyramid in our Typical Development Stack

Let’s start by making the statement that automating end to end, full stack tests against a non trivial web application with any degree of asynchronous behavior is just flat out hard. My shop has probably overdone it with black box, end to end tests using Selenium in the past, and that’s partially given automated testing a bad name to the point where many teams are hesitant to try it. As a reaction to those experiences, we’re trying to convince our teams to rebalance our testing efforts away from writing so many expensive, end to end tests and unit tests that overuse mock objects, and toward writing far more intermediate level integration tests that provide a much better effort to reward ratio.

As part of that effort, consider our theoretical, preferred technical stack for new web application development consisting of a React/Redux front end, an ASP.Net Core application running on the web server, and some kind of database. The typical Microsoft layered architecture (minus the obligatory cylinder for the database that I just forgot to add) would look like this:

[Slide: the typical Microsoft layered architecture for our preferred stack]

Now, let’s talk about how we would want to devise our automated testing strategy for this technical stack (our test pyramid, if you will). First though, let me state some of my philosophy around automated testing:

  • I think that the primary purpose of automated testing is to try to find and remove problems in your code rather than try to prove that the system works perfectly. That’s actually an important argument because it’s a prerequisite for accepting white box testing — which frequently tends to be a much more efficient approach — as a valid approach compared to only accepting end to end, black box tests.
  • The secondary purpose of automated tests is to act as a regression test cycle that makes it “safe” for a team to continuously evolve or extend the codebase. That usage as a regression cycle is highly dependent upon the automated tests being fast, reliable, and not too brittle when the system is changed. The big bang, end to end Selenium based tests tend to fall down on all three of those criteria.
  • In most cases, you want to try to pick the testing approach that gives you the fastest feedback cycle while still telling you something useful

Here’s more on what I think makes for a successful test automation strategy.

Now, to make that more concrete in regards to our technical stack shown above, I’d recommend:

  • Writing unit tests directly against the React components using something like Enzyme where that’s valuable. My personal approach is to make most of my React components pretty dumb and hopefully just be pure function components where you might not worry about tests, but I think that’s a case by case decision. As an aside, I think that React is easily the most testable user interface tooling I’ve ever used and maybe the first one I’ve ever used that took testability so seriously.
  • Write unit tests with Mocha or Jest directly against the Redux reducers. This removes problems in the user interface state logic.
  • Since there is coupling between your Redux store and the React components, I would strongly suggest some level of integration testing between the Redux store and the React components, especially if you depend on transformations within the react-redux wiring. I thought I got quite a bit of value out of doing that in my Storyteller user interface, and it should be even better swapping out in-browser testing with Karma in favor of using Enzyme for the React components.
  • Continue to write unit tests with xUnit tests against elements of the .Net code wherever that makes sense, with the caveat being that if you find yourself writing tests with mock objects that seem to be just duplicating the implementation of the real code, it’s time to switch to an integration test. Here’s my thoughts and guidance for staying out of trouble with mock objects. Some day, I’d like to go through and rewrite my old CodeBetter-era posts on testability design, but that’s not happening any time soon.
  • Intermediate level integration testing against HTTP endpoints using Alba (or something similar), or testing message handling, or even integration testing of a service within the application using its dependencies. I’m assuming the usage of any kind of backing database within these tests. If these tests involve a lot of data setup, I’d personally recommend switching from xUnit to Storyteller where it’s easier to deal with test data state and the test lifecycle. The key here is to remove problems that occur between the .Net code and the backing database in a much faster way than you could ever possibly do with end to end, Selenium-based tests. There’s a small sketch of an Alba scenario test just after this list.
  • Write a modicum of black box, end to end tests with Selenium just to try to find and prevent integration errors between the entire stack. The key here isn’t to eliminate these kinds of tests altogether, but rather to rebalance our efforts toward more efficient mechanisms wherever we can.
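
To make the Alba bullet above a bit more concrete, here’s roughly what one of those intermediate level HTTP tests looks like with xUnit. I’m writing this from memory of Alba’s scenario API, so treat it as a sketch rather than copy and paste material:

    public class HomeEndpointTests
    {
        [Fact]
        public async Task home_page_should_return_200()
        {
            // Bootstraps the real ASP.Net Core application in memory,
            // with no Kestrel server or network ports involved
            using (var system = SystemUnderTest.ForStartup<Startup>())
            {
                await system.Scenario(_ =>
                {
                    // Runs a GET through the real middleware pipeline
                    _.Get.Url("/");
                    _.StatusCodeShouldBeOk();
                });
            }
        }
    }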

The big thing that’s missing in that bullet list above is some kind of testing that can weed out problems that arise in the interaction and integration of the React/Redux front end and the .Net middle tier + the backing database. Tomorrow I’ll drop a follow up blog post with an experimental approach to using Storyteller to author data centric, subcutaneous tests from the Redux store layer down through the backing database as an alternative to writing so many functional tests with Selenium driving the front end.

A Tangential Rant about Selenium

First off, I have nothing against Selenium itself. Other than the early diamond dependency hell before it ilmerged Newtonsoft.Json (it’s always Newtonsoft’s fault) and various browser updates breaking it, I’ve had very few issues with the Selenium library by itself. That being said, every so often I’ll have an interaction with a tester at work who thinks that automated testing begins and ends with writing Selenium scripts against a running application, and that drives me up the wall. Like clockwork, I’ll spend some energy trying to talk about issues like data set up, using a testing DSL tool like Cucumber or my own Storyteller to make the specs more readable, and worrying about how to keep the tests from being too brittle, and they’ll generally ignore all of that because the Selenium tutorials make it seem so easy.

The typical Selenium tutorial tends to be simplistic and gives newbies a false sense about how difficult automated testing is and what it involves. Working through a tutorial on Selenium that requires you to control some kind of contact form, then post it to the server and check the values on the next page without any kind of asynchronous behavior and then saying that you’re ready to do test automation against real systems is like saying you know how to play chess because you know how the horse-y guy moves on the board.

Publish / Subscribe Messaging in our Ecosystem

Long story short, several of my colleagues and I are building a new framework for managing asynchronous messaging between .Net services with the codename “Jasper” as a modernized replacement for our existing FubuMVC based messaging infrastructure. Once I get a good couple weeks of downtime during the holidays, I’ll be making the first big public alpha nuget of Jasper and start blogging up a storm about the project, but for right now, here’s this post about Jasper’s intended dynamic subscription model.

As part of our larger effort at work to move toward a microservices architecture, some of us did a big internal presentation last week about the progress so far in our new messaging infrastructure we intend to use next year as we transition to Netstandard 2.0 and all the modern .Net stuff. I went over my time slot before we could talk about our proposal for how we plan to handle publish/subscribe messaging and service discovery. I promised this post in the hopes of getting some feedback.

The Basic Architecture

Inside of our applications, when we need to publish or send a message to other systems, we would use some code like this:

public Task SendMessage(IServiceBus bus)
{
    // In this case, we're sending an "InvoiceCreated"
    // message
    var @event = new InvoiceCreated
    {
        Time = DateTime.UtcNow,
        Purchaser = "Guy Fieri",
        Amount = 112.34,
        Item = "Cookbook"
    };

    // Send the message, and enforce that there be at least
    // one subscriber for this message type
    return bus.Send(@event);

    // ... or use Publish() to send the message to any subscribers
    // for this type of message without enforcing the existence of
    // any subscribers:
    // return bus.Publish(@event);
}

The system publishing the message doesn’t itself need to know where or even how to send that message to whatever the downstream systems are. The infrastructure code underneath bus.Send() knows how to look up any registered subscribers for the “InvoiceCreated” event message in some kind of subscription storage and route the message accordingly.

The basic architecture is shown below:

[Diagram: publishers and subscribers connected through the subscription storage]

Just to make this clear, there’s no technical reason why the “subscription storage” has to be a shared resource — and therefore a single point of failure — between the systems.

Now that we’ve got the barebones basics, here are the underlying problems we’re trying to solve to make that diagram above work in real life.

The Problem Statement

Consider a fairly large scale technical ecosystem composed of many services that need to work together by exchanging messages in an asynchronous manner. You’ve got a couple challenges to consider:

  • How do the various applications “know” where and how to send or publish messages?
  • How do you account for the potential change in message payloads or representations between systems?
  • How could you create a living document that accurately describes how information flows between the systems?
  • How can you add new systems that need to publish or receive messages without having to do intrusive work on the existing systems to accommodate the new system?
  • How might you detect mismatches in capabilities between all the systems without having to just let things blow up in production?

Before I lay out our working theory about how we’ll do all of the above in our development next year, let’s talk about the downsides of doing all of that very badly. When I was still wet behind the ears as a developer, I worked at a very large manufacturing company (one of the “most admired” companies in the US at the time) that had an absolutely wretched “n-squared” problem where integration between lots of applications was done in an ad hoc manner with more or less hard coded, one off mechanisms between applications. Every time we needed a new integration, we had to effectively do all new work and break into all of the applications involved in the new data exchange.

I survived that experience with plenty of scars and I’m sure many of you have similar experiences. To solve this problem going forward, my shop has come up with the “dynamic subscription” model (the docs in that link are a bit out of date) in our forthcoming “Jasper” messaging framework.

Our Proposed Goals and Approach

A lot of this is already built out today, but it’s not in production yet and now’s a perfect time for feedback (hint, hint colleagues of mine). The primary goals of the Jasper approach are to:

  • Eliminate any need for a central broker or any other kind of single point of failure
  • Reduce direct coupling between services
  • Make it easy to update, browse, or delete subscriptions
  • Provide tooling to validate the subscriptions across services
  • Allow developers to quickly visualize how messages flow in our ecosystem between senders and receivers

Now, to put this into action. First, each application should declare the messages it needs to subscribe to with code like this:

    public class MyAppRegistry : JasperRegistry
    {
        public MyAppRegistry()
        {
            // Override where you want the incoming messages
            // to be sent.
            Subscribe.At("tcp://server1:2222");

            // Declare which message types this system wants to
            // receive (the actual message types here are just
            // illustrative)
            Subscribe.To<InvoiceCreated>();
            Subscribe.To<InvoiceApproved>();
        }
    }

Note: the “JasperRegistry” type fulfills the same role for configuring a Jasper application as the WebHostBuilder and Startup classes do in ASP.Net Core applications.

When the application configured by the “MyAppRegistry” shown above is bootstrapped, it calculates its subscription requirements that include:

  • The logical name of the message type. By default this is derived from the .Net type, but it doesn’t have to be. We use the logical name to avoid forcing you to share an assembly with the DTO message types (there’s a sketch of this just after this list)
  • All of the representations of the message type that this application understands and can accept. This is done to enable versioned messaging between our applications and to enable alternative serialization or even custom reading/writing strategies later
  • The server address where the upstream applications should send the messages. We’re envisioning that this address will be the load balancer address when we’re hosting on premises in a single data center, or left blank when we make the jump to cloud hosting, where we’ll do some additional work to figure out the network address of the subscriber that’s most appropriate within the data center where the sender is hosted.
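
To illustrate the logical message name above: if I’m remembering the Jasper API correctly, you can override the logical name with an attribute directly on the message type, something like the sketch below. Take the attribute name as an assumption on my part rather than gospel:

    // The logical name of this message type on the wire becomes
    // "invoice-created" instead of the full .Net type name, so the
    // subscribing system never has to share this assembly
    [MessageAlias("invoice-created")]
    public class InvoiceCreated
    {
        public Guid InvoiceId { get; set; }
    }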

At deployment time, whenever a service is going to be updated, our theory is that you’ll have a step in your deployment process that will publish the subscription requirements to the proper subscription storage. If you’re using the built in command line support, and your Jasper application compiles to an executable called “MyApp.exe,” you could use this command line signature to publish subscriptions:

MyApp.exe subscriptions publish

At this point, we’re working under the assumption that we’ll be primarily using Consul as our subscription storage mechanism, but we also have a Postgresql-backed option as well. We think that Consul is a good fit here because of the way that it won’t create a single point of failure while also allowing us to replicate the subscription information between nodes through its own gossip protocol.

Validating Subscriptions

To take this capability farther, Jasper will allow you to declare what messages are published by a system like so:

    public class MyAppRegistry : JasperRegistry
    {
        public MyAppRegistry()
        {
            // Again, the message types here are illustrative
            Publish.Message<InvoiceCreated>();
            Publish.Message<InvoiceApproved>();
            Publish.Message<InvoiceRejected>();
        }
    }

Note: You can also use an attribute directly on the published message types if you prefer, or use a convention for the braver among you.

When a Jasper application starts up, it combines the declaration of published message types with the known representations or message versions found in the system, and combines that information with the subscription requirements into what Jasper calls the “ServiceCapabilities” model (think of this as Swagger for Jasper messaging, or OpenAPI just in case Darrel Miller ever reads this;)).

Again, if your Jasper application compiles through dotnet publish to a file called “MyApp.exe”, you’ll get a couple more command line functions. First, to dump a JSON representation of the service to a file system folder, you can do this:

MyApp.exe subscriptions export --directory ~/subscriptions

My thinking here was that you could have a Git repository where all the services export their service capabilities at deployment time, because that would enable you to use this command later:

MyApp.exe subscriptions validate --file subscription-report.json

The command above would read in all the service capability files in that directory, and analyze all the known message publishing and subscriptions to create a report with:

  1. All the valid message routes from sender to receiver by message type, the address of the receiver, and the representation or version of the message
  2. Any message types that are subscribed to, but not published by any service
  3. Message types that are published by one or more services, but not subscribed to by any other system
  4. Invalid message routing through mismatches in either accepted or published versions or transport mismatches (like if the sender can send messages only through TCP, but the receiver can only receive via HTTP)

Future Work

I thought we were basically done with this feature, but it’s not in production yet and we did come up with some additional items or changes before we go live:

  • Build a subscription control panel that’ll be a web front end that allows you to analyze or even edit subscriptions in the subscription storage
  • Publish the full service capabilities to the subscription storage (we only publish the subscriptions today)
  • Get a bit more intelligent with Consul and how we would use message routing if running nodes of the named services are hosted in different data centers
  • Create the idea of message “ownership,” as in “only this system should be processing this command message type”
  • Some kind of cache busting in the running Jasper nodes to refresh the subscription information in memory whenever the subscription storage changes

Marten 2.4.0 — now plays nicer with others

I was able to push a Marten 2.4.0 release and updated documentation yesterday with several bug fixes and some new small, but important features. The key additions are:

  1. Opening up a Marten session with an existing native Npgsql connection and/or transaction (there’s a rough sketch just after this list). When you do this, you can also direct Marten whether or not it “owns” the transaction lifecycle and whether or not Marten is responsible for committing or rolling back the transaction on IDocumentSession.SaveChanges(). Since this was a request from one of the Particular Software folks, I’m assuming you’ll see Marten-backed saga persistence in NServiceBus soon;-)
  2. Opening a Marten session that enlists in the current TransactionScope. Do note that this feature is only available when either targeting the full .Net framework (> .Net 4.6) or Netstandard 2.0.
  3. Ejecting a document from a document session. Other folks have asked for that over time, but strangely enough, it got done quickly when I wanted it for something I was building. Weird how it works that way sometimes.
  4. Taking advantage of Marten’s schema management capabilities to register “feature schema objects” for additional database schema objects.
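
To sketch out the first item, the new session mechanics look roughly like the following. The exact option names here are from memory, so treat them as illustrative rather than authoritative (store is an existing Marten DocumentStore and User is a stand-in document type):

    using (var conn = new NpgsqlConnection(connectionString))
    {
        conn.Open();

        using (var tx = conn.BeginTransaction())
        {
            // Open a session against the existing connection and
            // transaction, telling Marten that it does *not* own the
            // transaction lifecycle, so SaveChanges() will not commit
            // or roll anything back on its own
            using (var session = store.OpenSession(new SessionOptions
            {
                Transaction = tx,
                OwnsTransactionLifecycle = false
            }))
            {
                session.Store(new User { UserName = "guyfieri" });
                session.SaveChanges();
            }

            // The application code owns the commit
            tx.Commit();
        }
    }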

I don’t know the timing yet, but some new features got left out because I got impatient to push this release, and we’ve had some recent feature requests that aren’t too crazy, so Marten will return next in “2.5.0.”

Choosing Persistence Tooling, 2017 Edition

A couple years ago I wrote a long post titled My Thoughts on Choosing and Using Persistence Tools about my personal decision tree for how I choose database and persistence tooling if left to my own devices.

This has been a common topic at work lately as our teams need to select persistence or database tooling for various projects in our new “microservice” strategy (i.e., don’t build more stuff into our already too large systems from here on out). As a lazy way of creating blog content, I thought I’d share the “guidance” that my team published a couple months back — even though it’s already been superseded a bit by the facts on the ground.

We’ve since decided to scale back our usage of Postgresql in new projects (for the record, this isn’t because of any purely technical reasons with Postgresql or Marten), but I think we’ve finally got some consensus to at least move away from a single centralized database in favor of application databases and improve our database build and test automation, so that’s a win. As for my recommendations on tooling selection, it looks like I’m having to relent on Entity Framework in our ecosystem due to developer preferences and familiarity.

Databases and Persistence

Most services will need to have some sort of persistent state, and that’s usually going to be in the form of a database. Even before considering which database engine or persistence approach you should take in your microservice, the first piece of advice is to favor application specific databases that are completely owned and only accessed by your microservice. 

The pattern of data access between services and databases is reflected by this diagram:

[Diagram: each microservice exclusively owning its own application database]

As much as possible, we want to avoid ever sharing a database between applications because of the implicit, tight coupling that creates between two or more services. If your service needs to query information that is owned by another service, favor either exposing HTTP services from the upstream service or using the request/reply feature of our service bus tools.

Ideally, the application database for a microservice should be:

  • An integrated part of the microservice’s continuous integration and deployment pipeline. The microservice database should be built out by the normal build script for your microservice so that brand new developers can quickly be running the service from a fresh clone of the codebase.
  • Versioned together with any application code in the same source control repository to establish a strong link between the application code and database schema structure
  • Quick to tear down and rebuild from scratch or through incremental migrations

Database and Persistence Tooling Options

The following might be a lie. Like any bigger shop that’s been around for awhile, we’ve got some of everything (NHibernate, raw ADO.Net, sprocs, a forked copy of PetaPoco, etc.)

The two most common approaches in our applications are either:

  1. Using Dapper as a micro-ORM to access Sql Server
  2. Using Marten to treat Postgresql as a document database – slowly, but surely replacing our historical usages of RavenDb.

Use Marten if your service data is complex or naturally hierarchical. If any of these things are true about your service’s persisted domain model, we recommend reaching for Marten:

  • Your domain model is hierarchical and a single logical entity used within the system would have to be stored across multiple tables in a relational database. Marten effectively bypasses the need to map a domain model to a relational database
  • There’s any opportunity to stream JSON data directly from the database to HTTP response streams to avoid the performance hit of using serializers, AutoMapper like tools, or ORM mapping layers (there’s a sketch of this just after this list)
  • You expect your domain model to change rapidly in the future
  • You opt to use event sourcing to persist some kind of long running workflow
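
As an illustration of the JSON streaming bullet above, Marten can hand you the raw JSON array straight out of Postgresql without ever deserializing documents on the server side. This sketch assumes an ASP.Net Core handler method, a hypothetical Invoice document type, and Marten’s ToJsonArrayAsync() as I remember that API:

    public static async Task get_unpaid_invoices(HttpContext context, IQuerySession session)
    {
        // Marten pulls the raw JSON straight out of Postgresql with
        // no deserialization or mapping layer in between
        var json = await session.Query<Invoice>()
            .Where(x => !x.Paid)
            .ToJsonArrayAsync();

        context.Response.ContentType = "application/json";
        await context.Response.WriteAsync(json);
    }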

Choose Dapper + Sql Server if:

  • Your domain model is going to closely match the underlying database table structure, with simple CRUD-intensive systems being a good example.
  • Definitely use a more relational database approach if the application involves reporting functionality – but you can also do that with Marten/Postgresql
  • Your service will involve set-based logic that is more easily handled by relational database operations

If it feels like your service doesn’t fit into the decision tree above, opt for Sql Server as that has been our traditional standard.

Other choices may be appropriate on a case by case basis. Raw ADO.Net usage is not recommended from a productivity standpoint. Heavy, full featured ORM’s like Entity Framework or NHibernate are also not recommended by the architecture team. If you feel like EF would be advantageous for your domain model, then Marten might be an alternative with less friction.

The architecture team strongly discourages the usage of stored procedures in most circumstances.

For additional resources and conversation,

Jasper’s Getting Started Story – Take 1

I’ve been kicking around the idea for a possible resurrection of FubuMVC as a mostly new framework with the codename “Jasper” for several years with some of my colleagues. This year, several members of our architecture team at work and I have started making that a reality as the centerpiece of our longer term microservices strategy.

In the end, Jasper will be a lightweight service bus for asynchronous messaging, a high performance alternative to MVC for HTTP API’s, and a substitute for MediatR inside of ASP.Net Core applications (those three usages share much more infrastructure code than you might imagine and the whole thing is still going to be much, much smaller than FubuMVC was at the end). For the moment, we’re almost entirely focused on the messaging functionality.

I haven’t kicked out an up to date Nuget yet, but there’s quite a bit of documentation and I’m just hoping to get some feedback out of that right now. If you’re at all interested in Jasper, feel free to raise GitHub issues or join our Gitter room.

The only thing I’m trying to accomplish in this post is to get a sanity check from other folks on whether or not the bootstrapping looks usable.

Getting Started

This is taken directly from the getting started documentation.

Note! Jasper only targets Netstandard 1.5 and higher at this time, and we’ve been holding off on upgrading to ASP.Net Core v2.0.

Jasper is a framework for building server side services in .Net. Jasper can be used as an alternative web framework for .Net, a service bus for messaging, as a “mediator” type pipeline within a different framework, or any combination thereof. Jasper can be used as either your main application framework that handles all the configuration and bootstrapping, or as an add on to ASP.Net Core applications.

To create a new Jasper application, start by building a new console application:

dotnet new console -n MyApp

While this isn’t expressly necessary, you probably want to create a new JasperRegistry that will define the active options and configuration for your application:

public class MyAppRegistry : JasperRegistry
{
    public MyAppRegistry()
    {
        // Configure or select options in this constructor function
    }
}

See Configuring Jasper Applications for more information about using the JasperRegistry class.

Now, to bootstrap your application, add the Jasper.CommandLine library to your project and this code to the entrypoint of your console application:


using Jasper.CommandLine;

namespace MyApp
{
    class Program
    {
        static int Main(string[] args)
        {
            // This bootstraps and runs the Jasper
            // application as defined by MyAppRegistry
            // until the executable is stopped
            return JasperAgent.Run<MyAppRegistry>(args);
        }
    }
}

By itself, this doesn’t really do much, so let’s add Kestrel as a web server for serving HTTP services and start listening for messages from other applications using Jasper’s built in, lightweight transport:

public class MyAppRegistry : JasperRegistry
{
    public MyAppRegistry()
    {
        Http.UseKestrel().UseUrls("http://localhost:3001");
        Transports.Lightweight.ListenOnPort(2222);
    }
}

Now, when you run the console application you should see output like this:

Hosting environment: Production
Content root path: /Users/jeremill/code/jasper/src/MyApp/bin/Debug/netcoreapp1.1
Listening for messages at loopback://delayed/
Listening for messages at jasper://localhost:2333/replies
Listening for messages at jasper://localhost:2222/incoming
Now listening on: http://localhost:3001
Application started. Press Ctrl+C to shut down.

See Bootstrapping for more information about idiomatic Jasper bootstrapping.

That covers bootstrapping Jasper by itself, but next let’s see how you can add Jasper to an idiomatic ASP.Net Core application.

Adding Jasper to an ASP.Net Core Application

If you prefer to use typical ASP.Net Core bootstrapping or want to add Jasper messaging support to an existing project, you can use the UseJasper() extension method on ASP.Net Core’s IWebHostBuilder as shown below:

var host = new WebHostBuilder()
    .UseKestrel()
    .UseJasper<ServiceBusApp>()
    .Build();

host.Run();

See Adding Jasper to an ASP.Net Core Application for more information about configuring Jasper through ASP.Net Core hosting.

Your First HTTP Endpoint

The obligatory “Hello World” http endpoint is just this:

public class HomeEndpoint
{
    public string Get()
    {
        return "Hello, world.";
    }
}

As long as that class is in the same assembly as your JasperRegistry class, Jasper will find it and make the “Get” method handle the root url of your application.
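
The route derivation is convention based on the method names. As I recall the conventions, underscores in a method name become url segments, so a hypothetical endpoint like this sketch would handle “GET /hello”:

public class HelloEndpoint
{
    // Under Jasper's naming conventions (as I remember them),
    // this method is mapped to "GET /hello"
    public string get_hello()
    {
        return "Hello from a conventionally derived route.";
    }
}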

See HTTP Services for more information about Jasper’s HTTP handling features.

Your First Message Handler

Let’s say you’re building an invoicing application and your application should handle an InvoiceCreated event. The skeleton for the message handler for that event would look like this:

public class InvoiceCreated
{
    public Guid InvoiceId { get; set; }
}

public class InvoiceHandler
{
    public void Handle(InvoiceCreated created)
    {
        // do something here with the created variable...
    }
}
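
Handler discovery is convention based, so the Handle() method doesn’t have to stay synchronous or self-contained. As a hedged sketch of where that goes, Jasper is designed to inject service dependencies as method arguments, so an asynchronous handler using a made-up IInvoiceRepository service might look like this:

public class InvoiceHandler
{
    // IInvoiceRepository is a hypothetical service for this sketch
    // that Jasper would resolve from the application's container and
    // pass in as a method argument
    public async Task Handle(InvoiceCreated created, IInvoiceRepository repository)
    {
        await repository.MarkInvoiceCreated(created.InvoiceId);
    }
}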

See Message Handlers for more information on message handler actions.

Retrospective on Marten at 2 Years Old

I made the very first commit to Marten two years ago this week. Looking at the statistics, it’s gotten just shy of 2,000 commits since then from almost 60 contributors. It’s not setting any kind of world records for usage, but it’s averaging a healthy (for a .Net OSS project) 100+ downloads a day.

Marten was de facto sponsored by my shop because we intended all along to use it as a way to replace RavenDb in our ecosystem with Postgresql. Doing Marten out in the open as an open source project hosted in GitHub has turned out to be hugely advantageous because we’ve had input, contributions, and outright user testing from so many external folks before we even managed to put Marten into our biggest projects. Arguably — and this frustrates me more than a little bit — Marten has been far more successful in other shops than in my own.

I’ve been very pleasantly surprised by how the Marten community came together and how much positive contribution we’ve gotten on new features, documentation, and answering user questions in our Gitter room. At this point, I don’t feel like Marten is just my project anymore and that we’ve genuinely got a healthy group of contributors and folks answering user questions (which is contributing greatly to my mental health).

Early adopters are usually the best users to deal with because they’re more understanding and patient than the folks who come much later when and if your tool succeeds. There’s been a trend that I absolutely love in Marten where we’ve been able to collect a lot of bug reports as a pull request with failing tests that show you exactly what’s wrong. For a project that’s so vulnerable to permutation problems, that’s been a lifesaver. Moreover, we’ve had enough users putting it to work in lots of different ways that we’ve been able to discover and resolve a lot of functionality and usability problems.

I’m a little bit disappointed by the uptake in Marten usage, because I think it’s hugely advantageous for developer productivity over ORM’s like Entity Framework and definitely more productive in many problem domains than using a relational database straight up. I don’t know if that’s mostly because the .Net community just isn’t very accepting of tools like this that are outside of the mainstream, we haven’t been able to break through in terms of promoting it, or if it just isn’t that compelling to the average .Net developer. I strongly suspect that Marten would be far more successful if it had been built on top of Sql Server, and we might test that theory if Sql Server ever catches up to Postgresql in terms of JSON and Javascript support (it’s not even close yet).

For some specific things:

  • Postgresql is great for developers just out of the sheer ease of installing it in developer or testing environments
  • I thought going into Marten that the Linq support would be the most problematic thing. After working on the Linq support for quite awhile, I now think that the Linq support is the most problematic and time consuming thing to work on and it’s likely that folks will never stop coming up with new usage scenarios
  • The Linq support would be so much easier and probably more performant when Postgresql gets their proposed JsonPath querying feature. Again, I don’t think that Sql Server’s JSON support is adequate to support Marten’s feature set, but they at least went for JsonPath in their Json querying.
  • A lot of other people contributed here too, but Marten has been a great learning experience on asynchronous code that’s helping me out quite a bit in other projects
  • The event sourcing feature has been a mixed bag for me. My shop hasn’t ended up adopting it, so I’m not dogfooding that work at all — but guess what seems to be the most popular part of Marten to the outside world? The event sourcing support wouldn’t be viable if we didn’t have so much constructive feedback and help from other people.
  • I think it was advantageous to have the documentation done very early and constantly updated as we went
  • After my FubuMVC flop, I swore that if I tried to do another big OSS project that I’d try much harder to build community, document it early, and promote it more effectively. To that end, you can see or hear more about Marten on DotNetRocks, the NoSQL podcast, the Cross Cutting Concerns podcast, a video on Channel 9, Herding Code, a recent conversation on Hanselminutes, and a slew of blog posts as we went.

Let me close by thanking the Marten community. I might fight burnout occasionally or get grumpy about the internal politics around Marten at work, but y’all have been fantastic to interact with and I really enjoy the Marten community.