BlueMilk is Ready for Early Adopters

This was renamed “Lamar” because the feedback on the name was, um, not good :)


EDIT 2/14/2018: And this has already turned up a bug if you have a type that needs a closed generic type as an argument to its constructor. 0.7.1 will follow very shortly on Nuget.


BlueMilk is the name of a new OSS Inversion of Control tool I’m building specifically for use in Jasper applications, but also as a higher-performance replacement for StructureMap in Netstandard 2.0 applications going forward. To read more about what is genuinely unique about its internals and approach, see Jasper’s Roslyn-Powered “Special Sauce.”

I’m declaring BlueMilk 0.7 on Nuget right now as ready for enterprising, early adopter types to try out either on its own or within an ASP.Net Core application. At this point it’s passing all the ASP.Net Core compliance tests with a couple of exceptions that I can’t imagine mattering in many cases (like the order in which created objects are disposed, and a really strange way they order objects in a list when there are mixed open and closed generic type registrations). It’s also supporting quite a few StructureMap features that I missed while trying to work with the built-in DI container.

As I said in the introductory post, you can use BlueMilk either as a drop-in replacement for the built-in ASP.Net Core DI tool or as a faster subset of StructureMap with a much richer feature set than the built-in container.

Current Feature Set

In most cases, the BlueMilk feature set and API are identical to StructureMap’s, so I’ll have to send you to the StructureMap documentation for more explanation.

Caveats

  • I’m wrestling with the first-usage warm-up time caused by how long it takes Roslyn to bootstrap itself on its very first usage. What I’ve come up with so far is to have the dynamic classes for services registered as singleton or scoped be built on initial startup, while allowing any other resolvers to be built lazily the first time they are actually used. This is an ongoing struggle.
  • The lifecycle scoping is different from idiomatic StructureMap. I opted to give up and just use the ASP.Net team’s new definitions of what “transient” and “scoped” mean.
  • There is a dependency for the moment on a library called Baseline, which is just my junk drawer of convenience extension methods (leftovers from FubuCore for anyone who used to follow FubuMVC). Before BlueMilk hits 1.0, I’ll internalize those extension methods somehow and eliminate that dependency.

Using within ASP.Net Core Applications

You’ll want to pull down the BlueMilk.Microsoft.DependencyInjection package to get the ASP.Net Core bootstrapping shim — and blame the ASP.Net team for the fugly naming convention.

In code, plugging in BlueMilk is done through the UseBlueMilk() extension method as shown below:

var builder = new WebHostBuilder();
builder
    .UseBlueMilk()
    .UseUrls("http://localhost:5002")
    .UseKestrel()
    .UseStartup<Startup>();

Pretty standard ASP.Net Core stuff. Using their magical conventions on the Startup class, you can do specific BlueMilk registrations using a ConfigureContainer(ServiceRegistry) method as shown below:

public class Startup
{
    public void ConfigureContainer(ServiceRegistry services)
    {
        // BlueMilk supports the ASP.Net Core DI
        // abstractions for registering services
        services.AddLogging();

        // And also supports quite a few of the old
        // StructureMap features like type scanning
        services.Scan(x =>
        {
            x.AssemblyContainingType<SomeMarkerType>();
            x.WithDefaultConventions();
        });
    }

    // Other stuff we don't care about here
}

Durable Messaging in Jasper

My colleague Mike Schenk has had quite a bit of input into and made contributions to this work. This continues a series of blog posts building up to the integration of durable messaging with ASP.Net Core:

  1. Jasper’s Configuration Story 
  2. Jasper’s Extension Model
  3. Integrating Marten into Jasper Applications
  4. Durable Messaging in Jasper (this one)
  5. Integrating Jasper into ASP.Net Core Applications
  6. Jasper’s HTTP Transport
  7. Jasper’s “Outbox” Support within ASP.Net Core Applications


Right now (0.5.0), Jasper offers two built-in message transports using either raw TCP socket connections or HTTP with custom ASP.Net Core middleware. Either transport can be used in one of two ways:

  1. “Fire and Forget” — Fast, but not backed by any kind of durable message storage, so there’s no guaranteed delivery.
  2. “Store and Forward” — Slower, but makes damn sure that any message sent is successfully received by the downstream service, even in the face of system failures. Both modes are sketched below.
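
Opting into one mode or the other is just a matter of which listener method you call. Here’s a minimal sketch using the two listener methods shown elsewhere in this post; the registry name and port numbers are arbitrary:

public class ReceiverApp : JasperRegistry
{
    public ReceiverApp()
    {
        // "Fire and forget": nothing is persisted, so it's fast,
        // but messages can be lost on failures
        Transports.LightweightListenerAt(2222);

        // "Store and forward": incoming messages are persisted
        // until they are successfully processed
        Transports.DurableListenerAt(2301);
    }
}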

Somewhere down the line we’ll support more transport options like RabbitMQ or Azure Service Bus, but for right now, one of the primary design goals of Jasper is to be able to effectively do reliable messaging with the infrastructure you already have. In this case, the “store” part of durable messaging is going to be the primary, backing database of the application. Our proposed architecture at work would then look something like this:

[Diagram: multiple nodes of each service behind a load balancer, with a dedicated database per service]

So potentially, we have multiple running instances (“nodes”) of each service behind some kind of load balancer (F5 in our shop), with each service having its own dedicated database.

For the moment, we’ve been concentrating on using Postgresql through Marten to prove out the durable messaging concept inside of Jasper, with Sql Server backing to follow shortly. To opt into that persistence, add the MartenBackedPersistence extension from the Jasper.Marten library like this:

public class PostgresBackedApp : JasperRegistry
{
    public PostgresBackedApp()
    {
        Settings.ConfigureMarten(_ =>
        {
            _.Connection("some connection string");
        });

        // Listen for messages durably at port 2301
        Transports.DurableListenerAt(2301);

        // This opts into Marten/Postgresql backed
        // message persistence with the retry and
        // message recovery agents
        Include<MartenBackedPersistence>();
    }
}

What this does is add some new database tables to your Marten configuration and direct Jasper to use Marten to persist incoming and outgoing messages until they are successfully processed or sent. If you end up looking through the code, it uses custom storage operations in Marten for better performance than the out-of-the-box Marten document storage.

Before talking about all the ways that Jasper tries to make the durable messaging as reliable as possible by backstopping error conditions, let’s talk about what actually happens when you publish a message.

What Happens when You Send a Message?

When you send a message through the IServiceBus interface or through cascading messages, Jasper doesn’t just stop and send the actual message. If you’re publishing a message that is routed through a durable channel, calling this code:

public async Task SendPing(IServiceBus bus)
{
    // Publish a message
    await bus.Send(new PingMessage());
}

will result in an internal workflow shown in this somewhat simplified sequence diagram of the internals:

[Sequence diagram: simplified internal workflow for sending a durable message]

The very first thing that happens is that each outgoing copy of the message is persisted to the durable storage, which is Postgresql in this post. Next, Jasper batches outgoing messages by destination Uri, in a way similar to the debounce operator in Rx. If you’re using either the built-in TCP or HTTP transports, the next step is to send a batch of messages to the receiving application. The receiving application in turn first persists the incoming messages in its own storage, then sends back an acknowledgement to the original sending application. Once the acknowledgement is received, the sending application deletes the successfully sent outgoing messages from its message persistence and goes on its merry way.
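
To make that workflow concrete, here’s a self-contained sketch of the happy path. Every type in it (Envelope, IEnvelopePersistence, IBatchSender) is made up for illustration and is not Jasper’s actual internal code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public class Envelope
{
    public Guid Id { get; } = Guid.NewGuid();
    public Uri Destination { get; set; }
}

public interface IEnvelopePersistence
{
    Task StoreOutgoing(IEnumerable<Envelope> envelopes);
    Task DeleteOutgoing(IEnumerable<Envelope> envelopes);
}

public interface IBatchSender
{
    // Returns true when the receiving application acknowledges the batch
    Task<bool> SendBatch(Uri destination, Envelope[] batch);
}

public class DurableSender
{
    private readonly IEnvelopePersistence _persistence;
    private readonly IBatchSender _sender;

    public DurableSender(IEnvelopePersistence persistence, IBatchSender sender)
    {
        _persistence = persistence;
        _sender = sender;
    }

    public async Task Send(Envelope[] outgoing)
    {
        // 1. Persist every outgoing copy before anything goes over the wire
        await _persistence.StoreOutgoing(outgoing);

        // 2. Batch the outgoing messages by destination Uri
        foreach (var group in outgoing.GroupBy(x => x.Destination))
        {
            var batch = group.ToArray();

            // 3. The receiver persists the batch and acknowledges it;
            // 4. on acknowledgement, delete what was sent successfully
            if (await _sender.SendBatch(group.Key, batch))
            {
                await _persistence.DeleteOutgoing(batch);
            }
        }
    }
}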

That’s the “store and forward” happy path. Now let’s talk about all the myriad ways things could go wrong and what Jasper tries to do to ensure that your messages get to where they are supposed to go.

Network Hiccups

How’s that for a scientific term? Sending message batches will occasionally fail due to the normal litany of temporary network issues. If an outgoing message batch fails, all the messages get their “sent attempts” count incremented in storage and they are added back into the local, outgoing sending agent queue to be tried again.

Circuit Breaker

Jasper’s sending agents implement a form of the Circuit Breaker pattern where a certain (configurable) number of consecutive failures to send a message batch to any destination will cause Jasper to latch that sending agent. When the sending agent is latched, Jasper will not make any attempts to send messages to that destination. Instead, all outgoing messages to that destination will simply be persisted to the backing message persistence without any node ownership. The key point here is that Jasper won’t keep trying to send messages just to get runtime exceptions, and it won’t allow the memory usage of your system to blow up from all the backed-up outgoing messages sitting in an in-memory sending queue.

When the destination is known to be in a failure condition, Jasper will continue to poll the destination with a lightweight ping message just to ascertain if the destination is back up yet. When a successful ping is acknowledged by the destination, Jasper will unlatch the sending agent and begin sending outgoing messages.
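
Boiled down, the latching behavior is something like this sketch; the class shape, names, and default threshold are illustrative rather than Jasper’s actual internals:

public class SendingCircuitBreaker
{
    private readonly int _failureThreshold;
    private int _consecutiveFailures;

    public SendingCircuitBreaker(int failureThreshold = 3)
    {
        _failureThreshold = failureThreshold;
    }

    // While latched, outgoing messages are only persisted, never sent
    public bool IsLatched { get; private set; }

    public void MarkSendFailure()
    {
        _consecutiveFailures++;
        if (_consecutiveFailures >= _failureThreshold)
        {
            IsLatched = true;
        }
    }

    // Called when the destination acknowledges a lightweight ping
    public void MarkSuccess()
    {
        _consecutiveFailures = 0;
        IsLatched = false;
    }
}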

Resiliency and Node Failover

If you are using the Marten/Postgresql backed message persistence, your application has a constantly running message persistence agent (it’s actually called SchedulingAgent) that polls for persisted messages that are either owned by no specific node or owned by an inactive node.

To detect whether a node is active, we rely on each node holding a session-level advisory lock in the underlying Postgresql database for as long as it’s really active. Periodically, Jasper runs a query to move any messages owned by an inactive node to “any node” ownership, where any running node can recover both the outgoing and incoming messages. This query detects inactive nodes simply by the absence of an active advisory lock for the node identity in the database.
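
Session-level advisory locks are a plain Postgresql feature rather than anything Jasper-specific. As a hedged sketch, a node could take its lock with Npgsql like this; the class and method names and the node id scheme are illustrative:

using Npgsql;

public static class NodeLocks
{
    // Tries to take a session-level advisory lock keyed by the node identity.
    // The lock is held for as long as the connection stays open, so a node
    // that crashes or shuts down releases its lock automatically.
    public static bool TryAcquireNodeLock(NpgsqlConnection conn, long nodeId)
    {
        using (var cmd = new NpgsqlCommand("select pg_try_advisory_lock(@id)", conn))
        {
            cmd.Parameters.AddWithValue("id", nodeId);
            return (bool)cmd.ExecuteScalar();
        }
    }
}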

The message persistence agent also polls for the existence of any persisted incoming or outgoing messages that are not owned by any node. If it detects either, it will assign some of those messages to itself, pulling outgoing messages into its local sending queues and incoming messages into its local worker queues to be processed. The polling and fetching were designed to spread the recovery work across the running nodes. This process also uses Postgresql advisory locks as a distributed lock to prevent multiple running nodes from double dipping into the same persisted messages.

The end result of all that verbiage is:

  • If the receiving application is completely down, Jasper will be able to recover the outgoing messages and send them later when the receiving application is back up
  • If a node fails before it can send or process all the messages in flight, another node will be able to recover those persisted messages and process them
  • If your entire application goes down or is shut down, it will pick up the outstanding, persisted work of incoming and outgoing messages when any node is restarted
  • By using the advisory locks in the backing database, we got around having to have any kind of distributed lock mechanism (we were considering Consul) or leader election for the message recovery process, making our architecture here a lot simpler than it could have been otherwise.


More Work for the Future

The single biggest thing Jasper needs is early adopters and usage in real applications so we can learn how well the existing resiliency story works out. Beyond that though, I know we want at least a little more work on the built-in transports for:

  1. Backpressure — We might need some kind of mechanism that allows the receiving applications to let the senders know, “hey, I’m super busy here, could you stop sending me so many messages for a little bit,” and slow down the sending.
  2. Work Stealing — We might decide that it’s easier to implement backpressure between the listening agent and the worker queues within the receiving application. In that case, if the listener sees there are too many outstanding messages waiting to be processed in the local worker queues, it would just persist the incoming messages for any other node to pick up when it can. We think this might be a cheap way to implement some form of work stealing.
  3. Diagnostics — We actually do have a working diagnostic package that adds a small website application to expose information about the running application. It could definitely use some additional work to expose metrics on the active message persistence.
  4. Sql Server backed message persistence — probably the next thing we need with Jasper at work


Other Related Stuff I Didn’t Get Into

  • We do have a dead letter queue mechanism where messages that just can’t be processed are shoved over to the side in the message persistence. All configurable, of course.
  • All the message recovery and batching thresholds are configurable. If you’re an advanced Jasper user, you could use those knobs to fine tune batch sizes and failure thresholds
  • It is possible to tell Jasper that a message expires at a certain time to prevent sending messages that are just too old


On Productive Software Development Teams

I’ve had several conversations lately about how to organize development teams, enough so that it’s probably useful for me to write out what I do think here. By no means should you think for a second that my shop or my team is perfect, and I’ll happily admit that much of the focus here is on things that I want to be different at work.

This would have been a much longer post, but I found worlds of older blog posts I’ve written on this subject. Honestly, the problems I see and the solutions I’d recommend just haven’t evolved that much in the last 10 years. I don’t know whether that should make me more worried about myself or about the software development world.


Multi-Disciplinary Team and Multi-Skilled Folks

I think it’s a huge anti-pattern to organize software teams around technical specialties or disciplines like having a UI team, a middle tier team, and a database team. I’d even go farther and say it can be a negative to keep the testers somewhat separate from the rest of the development team as well. I think the main point is that each team should ideally have every skillset it needs to carry out the project inside the team room or at least under the team’s control.

Moreover, autonomous teams that can do everything needed to carry out the project are far more likely to succeed than teams that have to frequently depend on other teams or technical specialties. Formal hand-offs between teams in software development are an almost inevitable drag on productivity and should be minimized as much as possible within a software organization.

You will make mistakes and discover last-minute requirements or flaws in your architecture. Sure, you can try to do the very best planning possible to ensure that you know everything upfront, but my experience says that you’re far better off when you can easily recover from mistakes and accommodate those last-minute discoveries. Needless to say, it’s far easier to be flexible about last-minute discoveries or problems if you can just fix the problems yourself. What’s going to be faster, filling out a Jira ticket to get a database team to add a column you need, or being able to just add a database migration to your codebase and continue on your way?

Even without formal hand-offs, keeping developers in strict specialties can easily make it harder for the team to communicate with each other, because there are fewer shared concepts between team members, while simultaneously requiring more communication because every feature has to involve more people. Not every developer can be a multi-skilled generalist, but at a minimum, each developer needs to have some understanding of how the folks both upstream and downstream of them work and what their concerns are. My early career was actually spent as an engineer designing petrochemical plants, and one of the earliest lessons my first manager drilled into me was how important it was to understand what the other engineering disciplines did, so you could better support their work with your products and more intelligently ask questions of the other types of engineers you depended on.

Making developers specialize by technical concern can frequently lead teams into the Local Optimization Fallacy trap. If you’re not familiar with this, think of cases where the developers optimize some part of the system for themselves, but that decision makes the testers’ jobs much harder. On the flip side, it’s frequently advantageous to do some extra work inside the code for no other reason than to make automated testing much more efficient. In many cases, you’ll save more time in testing than the extra time you spend coding, and that’s a net win for the team.

Specializing personnel on teams can be bad for developer growth and skills acquisition. My current shop has historically kept the database team separate from developers, and frequently the outcome has been developers who don’t have a good background in the basics of database development and database developers who maybe don’t have a solid appreciation for other software engineering principles.


Everybody Knows Where the Bodies are Buried

I’m familiar with several teams who own and maintain codebases that were originally written by other people, and they frequently struggle to understand how things flow through the system or even the basic architecture. Somebody is going to say they needed more documentation, and I’ll say instead that companies pay a high price for not having some developer continuity when the tribal knowledge walks out the door. Either way, it’s absolutely vital that the team truly understands the codebase they’re working on.

Arguably, one of the most important jobs of a technical lead is making sure that the other team members understand how the system is designed, the basic technologies, how information flows through the system, basic domain concepts, and how to troubleshoot problems throughout the stack.

We’re admittedly struggling in many cases with gobs of legacy technology decisions that aren’t familiar to many of our developers (partially because we hired a prolific OSS author and his old tools are every which way). The long term solution is to replace those tools with more modern, commonly known tools. For now though, it helps somewhat to try to explain why those decisions were made and what the original team was thinking, as a way to better understand how to work in those codebases as they are today.

As an aside, a couple years ago I seriously suggested that we try to switch to Node.js for our server side web development based on the theory that our younger developers were far more interested in Javascript than .Net, but the stupid “left pad” incident happened that exact day and I never brought that up again.


How Microservices Might Fit In

Honestly, the big value proposition for microservices in my shop is mostly being able to move to a world where each codebase is small enough to be completely owned by no more than one normal-sized team. We have 3-4 big systems that are worked on by multiple teams, and let me tell you, the folks on those teams have developed absurd Git skills just to manage having so many developers contributing simultaneously.

Forget the Golang or Node.js powered microservices with under 100 lines of code, I’ll settle for most of our teams getting to work on a codebase that is:

  • Small enough to be completely owned by a single team
  • Small and simple enough that mere mortals can reasonably understand the codebase
  • Backed by a relatively fast automated build, because that’s a huge component of being productive


Sunsetting StructureMap

I haven’t really been hiding this, but I apparently need to be more clear that I do not intend to do any further development work on StructureMap other than trying to keep up with pull requests and bugs that have no workaround. My focus is on other projects these days, and I’m trying to cut down on the time I spend on OSS work as well. If someone wants to take over maintenance, I’d be happy to help them take it on. With umpteen dozen .Net IoC containers on GitHub and Nuget and the new built-in DI container in ASP.Net Core, I feel like there are plenty of tools for whatever folks need.

I am working on the BlueMilk project as an intended successor to StructureMap that should be able to serve as an offramp for the many shops developing on StructureMap today. BlueMilk will be a much smaller library that retains what I feel to be the core capabilities of StructureMap, while ditching most of the legacy design decisions that hold StructureMap back and keep my inbox filled with user questions.

For some perspective, I started working on what became StructureMap the summer before my oldest son was born, and he’s starting High School this coming fall. The highlight of my conference speaking history was probably a QCon where I gave a talk about “lessons learned from a long lived codebase” all about working with StructureMap — in 2008!


Integrating Marten into Jasper Applications

Continuing a new blog series on Jasper:

  1. Jasper’s Configuration Story 
  2. Jasper’s Extension Model
  3. Integrating Marten into Jasper Applications  (this one)
  4. Durable Messaging in Jasper
  5. Integrating Jasper into ASP.Net Core Applications
  6. Jasper’s HTTP Transport
  7. Jasper’s “Outbox” Support within ASP.Net Core Applications


Using Marten from Jasper Applications

Time to combine two of my biggest passions (time sinks) and show you how easy it is to integrate Marten into Jasper applications. If you already have a Jasper application, start by adding a reference to the Jasper.Marten Nuget. Using Jasper’s extension model, the Jasper.Marten library will automatically add IoC registrations for Marten:

  1. IDocumentStore as a singleton
  2. IQuerySession as scoped
  3. IDocumentSession as scoped

At a bare minimum, you’ll need to tell Jasper & Marten the connection string to the underlying Postgresql database, something like this sample:

public class AppWithMarten : JasperRegistry
{
    public AppWithMarten()
    {
        // StoreOptions is a Marten object that fulfills the same
        // role as JasperRegistry
        Settings.Alter<StoreOptions>((config, marten) =>
        {
            // At the simplest, you would just need to tell Marten
            // the connection string to the application database
            marten.Connection(config.GetConnectionString("marten"));
        });
    }
}

In this case, we’re taking advantage of Jasper’s strong typed configuration model to configure the Marten StoreOptions object that completely configures a Marten DocumentStore in the underlying IoC container like this from the Jasper.Marten code:

// Just uses the ASP.Net Core DI registrations
registry.Services.AddSingleton<IDocumentStore>(x =>
{
    var storeOptions = x.GetService<StoreOptions>();
    var documentStore = new DocumentStore(storeOptions);
    return documentStore;
});

And for the basics, that’s all there is to it. For right now, the IDocumentSession service is resolved by calling IDocumentStore.OpenSession(), but it’s likely users will want to be able to opt for either lightweight sessions or configure different transactional levels. I don’t know what that’s going to look like yet, but it’s definitely something we’ve thought about for the future.
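
Because IDocumentSession is a scoped registration, a message handler can just take it as a dependency; Jasper supports method injection in handler signatures, so the session can appear as a method argument. The message and document types below are made up purely for illustration:

using System;
using Marten;

public class CreateItem
{
    public string Name { get; set; }
}

public class Item
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

public class CreateItemHandler
{
    // The scoped IDocumentSession is resolved from the container
    // for each message being handled
    public void Handle(CreateItem message, IDocumentSession session)
    {
        session.Store(new Item { Name = message.Name });
        session.SaveChanges();
    }
}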


Jasper’s Extension Model

Continuing a new blog series on Jasper:

  1. Jasper’s Configuration Story 
  2. Jasper’s Extension Model (this one)
  3. Integrating Marten into Jasper Applications
  4. Durable Messaging in Jasper
  5. Integrating Jasper into ASP.Net Core Applications
  6. Jasper’s HTTP Transport
  7. Jasper’s “Outbox” Support within ASP.Net Core Applications


The starting point of any Jasper application is the JasperRegistry class that defines the configuration sources, various settings, and service registrations, similar in many respects to the IWebHostBuilder and Startup types you may be familiar with in ASP.Net Core applications (or the FubuRegistry for any old FubuMVC hands). A sample one is shown below:

public class SubscriberApp : JasperRegistry
{
    public SubscriberApp()
    {
        Subscribe.At("http://loadbalancer/messages");
        Subscribe.ToAllMessages();

        Transports.LightweightListenerAt(22222);
    }
}

If everything you can possibly change or configure is done by the internal DSL exposed by the JasperRegistry class, it’s only natural then that the extension model is just this:

public interface IJasperExtension
{
    void Configure(JasperRegistry registry);
}

As a sample, here’s an available extension from the Jasper.Marten library that lets you opt into using Marten-backed message persistence that will be featured in a blog post later in this series:

/// <summary>
/// Opts into using Marten as the backing message store
/// </summary>
public class MartenBackedPersistence : IJasperExtension
{
    public void Configure(JasperRegistry registry)
    {
        // Override an OOTB service in Jasper
        registry.Services.AddSingleton<IPersistence, MartenBackedMessagePersistence>();

        // Jasper works *with* ASP.Net Core, even without a web server,
        // so you can use their IHostedService model for long running tasks
        registry.Services.AddSingleton<IHostedService, SchedulingAgent>();

        // Customizes the Marten integration a little bit with
        // some custom schema objects this extension needs
        registry.Settings.ConfigureMarten(options =>
        {
            options.Storage.Add<PostgresqlEnvelopeStorage>();
        });
    }
}

To apply and use these extensions, you have two options. First, you can say that the exposed extension has to be explicitly added by the application developers in their JasperRegistry with syntax like this:

public class ItemSender : JasperRegistry
{
    public ItemSender()
    {
        Include<MartenBackedPersistence>();
        // and a bunch of other stuff that isn't germane here
    }
}

The other option is to make an extension be auto-discovered and applied whenever the containing assembly is part of the application. To do this, add an assembly level attribute to your extension library that references the extension type you want auto-loaded. Here’s an example from the Jasper.Consul extension library:

// This is a slight change from [FubuModule] in FubuMVC
// Telling Jasper what the extension type is just saves the
// type scanning discovery. Got that idea from a conversation
// w/ one of the ASP.Net team members
[assembly:JasperModule(typeof(ConsulExtension))]

namespace Jasper.Consul.Internal
{
    public class ConsulExtension : IJasperExtension
    {
        public void Configure(JasperRegistry registry)
        {
            registry.Services.For<IUriLookup>().Add<ConsulUriLookup>();
            registry.Services.For<IConsulGateway>().Add<ConsulGateway>();
        }
    }
}

When to use either model? I’m a big fan of the “it should just work” model and the auto-discovery in many places, but plenty of other folks will prefer the explicit Include<Extension>() call in their JasperRegistry to make the code more self-documenting.

In either case, there’s a little bit of trickery going on behind the scenes to order both the service registrations and any changes to “Settings” objects to create a precedence order like this:

  1. Application specific declarations in the JasperRegistry class — regardless of the ordering of where the Include() statements wind up
  2. Extension declarations
  3. Core framework defaults


A Little History about the Ideas Here

When Chad Myers and I started talking through the ideas that later became FubuMVC, one of our goals was to maximize our ability to apply customer-specific extensions and customizations to the on-premises application we were building at the time. We knew that many a software shop had crashed and burned in that situation by resorting to customer-specific forks of their core product. We envisioned a web framework that would pretty well let you add or change almost anything in customer extensions without forcing any kind of fork of the core application. The Open-Closed Principle taken to an extreme, if you will. Using FubuMVC extensions (we originally called them “Bottles”), you could add all new routes, change IoC service registrations, swap out views, and even inject content into existing views in the core application with a model where you just drop the extension assembly into the bin path of the application and go.

I want to say that it’s one of the cleverest things I’ve ever successfully completed, and it definitely added some value (not so much in the app it was meant for, because they ditched the on-premises model shortly after and succeeded without the crazy extensibility ;) ). All that said though, Jasper is meant for a different world where we might not be quite so eager to build large applications, and I’m much more gun shy about complexity in my OSS projects than 10-years-younger me was, so extensibility will not be quite so big a part of Jasper’s core identity and philosophy as it was in the FubuMVC days.


Jasper’s Configuration Story

A couple dozen times I’ve sat around a table at some kind of software conference swapping software horror stories with other developers, all of us trying to one-up each other. The winning story usually ends up being some kind of twisted usage of stored procedures to build dynamic websites or something just as berserk. Just behind misuse of sprocs are stories of naive teams that hard coded some kind of value like a database connection string, a file path, or an email address that should really be in some kind of external configuration source. If you’re going to use Jasper to build applications, this post is all about how to avoid being the subject of one of those horror stories by using .Net configuration with Jasper.

This is meant to be the start of a series of related blog posts on Jasper that lead up to how we intend to support durable messaging within an ASP.Net Core application without forcing you to use distributed transactions. Not promising when they’ll be done, but I’m planning:

  1. Jasper’s Configuration Story (this one)
  2. Jasper’s Extension Model
  3. Integrating Marten into Jasper Applications
  4. Durable Messaging in Jasper
  5. Integrating Jasper into ASP.Net Core Applications
  6. Jasper’s HTTP Transport
  7. Jasper’s “Outbox” Support within ASP.Net Core Applications

I think this will be the longest post by far.

When I published my blog post introducing Jasper a couple weeks ago, I used this sample code to show a minimal Jasper application that would listen for incoming messages at port 2601:

    class Program
    {
        static int Main(string[] args)
        {
            return JasperAgent.Run(args, _ =>
            {
                _.Logging.UseConsoleLogging = true;

                _.Transports.LightweightListenerAt(2601);
            });
        }
    }

I wrote that sample purposely to be as simple as possible, but that led to the obvious question from folks reading the code sample: how do you pull the port number from configuration? As of now, Jasper gives you a couple of options. The first option is to use our integration with the configuration support from ASP.Net Core.

First though, I’m going to move the application bootstrapping code above into a JasperRegistry class (think Startup classes in ASP.Net Core applications) like this:

public class PongerApp : JasperRegistry
{
    public PongerApp()
    {
        Logging.UseConsoleLogging = true;
        Transports.LightweightListenerAt(2601);
    }
}

and then that application would be bootstrapped with code like this:

class Program
{
    static int Main(string[] args)
    {
        return JasperAgent.Run<PongerApp>(args);
    }
}

Great, now let’s pull the port number from configuration, in this case from environment variables:

public class PongerApp : JasperRegistry
{
    public PongerApp()
    {
        // Declare what configuration sources we want
        Configuration
            .AddEnvironmentVariables()
            .AddJsonFile("ponger.settings.json");

        // Use the final product of the configuration root data
        // to configure the application
        Settings.WithConfig(root =>
        {
            var port = root.GetValue<int>("INCOMING_PORT");
            Transports.LightweightListenerAt(port);
        });
    }
}

A couple quick things to note:

  1. JasperRegistry.Configuration is a ConfigurationBuilder object from ASP.Net Core
  2. The root argument to the nested closure in Settings.WithConfig() is the IConfigurationRoot object compiled from the JasperRegistry.Configuration object.

It’s admittedly a little cumbersome with the nested closure in the call to Settings.WithConfig(), but doing it that way helps Jasper order operations for you so that all additions to the ConfigurationBuilder are calculated once before you execute the nested closure. In the FubuMVC days I called this the “Mongolian BBQ” approach, where you declare the ingredients you want and let the framework figure out how to do things in order for you, just like the cook at a Mongolian BBQ restaurant.

This is maybe the simplest conceptual way to add external configuration, but there are a couple other options in Jasper.

Uri Aliases

We’ve been using this code to direct Jasper to listen for messages at a certain port with its built in TCP transport in its “fire and forget” mode:

public class PongerApp : JasperRegistry
{
    public PongerApp()
    {
        Transports.LightweightListenerAt(2601);
    }
}

That code is just syntactical sugar for this code that instead supplies a meaningful Uri:

public class PongerApp : JasperRegistry
{
    public PongerApp()
    {
        Transports.ListenForMessagesFrom("tcp://localhost:2601");
    }
}

Now, let’s move the Uri for where and how the application should listen for incoming messages to an application json configuration file like this one:

{
    "incoming": "tcp://localhost:2601"
}

Finally, we can use Jasper’s concept of Uri aliases to configure the application like this:

public class PongerApp : JasperRegistry
{
    public PongerApp()
    {
        Configuration
            .AddJsonFile("ponger.settings.json");
        
        Transports.ListenForMessagesFrom("config://incoming");
    }
}

At runtime, Jasper will translate the Uri “config://incoming” to “the Uri string in the configuration with the key ‘incoming.'” There are a couple of other things to note about Uri aliases:

  • The aliasing strategies are pluggable, but the “config” option is the only one that comes out of the box. We also have an option to use Consul’s key/value storage in the Jasper.Consul extension.
  • The aliases are valid in any place where you use a Uri to send or route messages

Strong Typed Configuration

I’m a big fan of strong typed configuration going back to the “Settings” model in FubuMVC, and it’s even part of ASP.Net Core proper with their Options model. With quite a bit of help from my colleague Mark Wuthrich, Jasper brings the old Settings model forward with some lessons-learned improvements.

Using Jasper’s strong typed configuration, you can do this instead:

public class PongerSettings
{
    public Uri Incoming { get; set; }
}

public class PongerApp : JasperRegistry
{
    public PongerApp()
    {
        Configuration
            .AddJsonFile("ponger.settings.json");

        Settings.With<PongerSettings>(settings =>
        {
            Transports.ListenForMessagesFrom(settings.Incoming);
        });
    }
}

where the “ponger.settings.json” file would look something like:

{ "incoming": "tcp://localhost:2601" }

I like the Settings model personally because of the traceability between configuration elements and the places in the code that consume those elements. I wouldn’t bother with the Settings model in an application this simple, but it might make sense in a bigger application. Do note that the Settings objects are part of the underlying IoC registrations and are available to be injected into any kind of message or HTTP handler class in your application code, as in the sketch below.
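
For example, the PongerSettings object from above could be constructor injected into a handler like this hypothetical one, where PingMessage exists just for illustration:

using System;

public class PingMessage
{
}

public class PingHandler
{
    private readonly PongerSettings _settings;

    // PongerSettings is registered in the underlying container,
    // so it can be injected like any other service
    public PingHandler(PongerSettings settings)
    {
        _settings = settings;
    }

    public void Handle(PingMessage message)
    {
        Console.WriteLine($"Got a ping while listening at {_settings.Incoming}");
    }
}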

It’s Just Code, Do Whatever You Want

Don’t like any of the options that Jasper provides out of the box? No worries, the JasperRegistry configuration is an example of an “internal DSL,” meaning that it’s fair game to use any kind of .Net code you want like maybe this:

public class PongerApp : JasperRegistry
{
    public PongerApp()
    {
        // Programmatically find the incoming Uri any way you want
        // with your own code
        string uri = Environment.GetEnvironmentVariable("PONGER_INCOMING_URI");
        Transports.ListenForMessagesFrom(uri);
    }
}

Storyteller 5.0 – Streamlined CLI, Netstandard 2.0, and easier debugging

I published the Storyteller 5.0 release last night. I punted on doing any kind of big user interface overhaul for now, and just released the back end improvements on their own with some vague idea that there’d be an improved or at least restyled user interface later this year.

The key improvements are:

  • Netstandard 2.0 support
  • An easier getting started story
  • Streamlined command line usage
  • Easier “F5 debugging” for specifications in your IDE
  • No changes whatsoever to your Fixture code from 4.0

Getting Started with Storyteller 5

Previous versions of Storyteller have been problematic for new users getting started and setting up projects with the right Nuget dependencies. I felt like things got a little better with the dotnet cli, but the enduring problem with that is how few .Net developers seem to use it or be familiar with it. When you use Storyteller 5, you need two dependencies in your Storyteller specification project:

  1. A reference to the Storyteller 5.0 assembly via Nuget
  2. The dotnet-storyteller command line tool referenced as a dotnet cli tool in your project, and that’s where most of the trouble comes in.

To start up a new Storyteller 5.0 specification project, first make the directory where you want the project to live. Next, use the dotnet new console command to create a new project with a csproj file and a Program.cs file.

In your csproj file, replace the contents with this, or just add the package reference for Storyteller and the cli tool reference for dotnet-storyteller as shown below:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <OutputType>EXE</OutputType>
  </PropertyGroup>

  <!-- Versions shown are illustrative; use the latest Storyteller 5.x
       and dotnet-storyteller packages from Nuget -->
  <ItemGroup>
    <PackageReference Include="Storyteller" Version="5.0.0" />
  </ItemGroup>

  <ItemGroup>
    <DotNetCliToolReference Include="dotnet-storyteller" Version="1.0.0" />
  </ItemGroup>

</Project>

Next, we need to get into the entry point of this new console application and change the Program.Main() method to activate the Storyteller engine within this new project:

    public class Program
    {
        public static int Main(string[] args)
        {
            return StorytellerAgent.Run(args);
        }
    }

Internally, the StorytellerAgent is using Oakton to parse the arguments and carry out one of these commands:

  ------------------------------------------------------------------------------
    Available commands:
  ------------------------------------------------------------------------------
       agent -> Used by dotnet storyteller to remote control the Storyteller specification engine
         run -> Executes Specifications and Writes Results
        test -> Try to start and warmup the system under test for diagnostics
    validate -> Use to validate specifications for syntax errors or missing grammars or fixtures
  ------------------------------------------------------------------------------

If you execute the console application with no arguments like this:

|> dotnet run

It will execute all the specifications and write the results to a file named “stresults.htm.”

You can customize the running behavior by passing in optional flags with the pattern dotnet run -- run --flag flagvalue like this example that just writes the results file to a different location:

|> dotnet run -- run Arithmetic -r ./artifacts/results.htm

If you’re not already familiar with the dotnet cli, what’s going on here is that anything to the right of the “--” double dash is considered to be the command line arguments passed into your application’s Main() method. The “run” argument tells the StorytellerAgent that you actually want to run specifications, and it’s unfortunately not redundant and very much mandatory if you want to customize how Storyteller runs specifications.

See the Storyteller 5.0 quickstart project for a working example.

Running the Storyteller Specification Editor

Assuming that you’ve got the cli tools reference to dotnet-storyteller and you’ve executed `dotnet restore` at least once (compiling through VS.Net or Rider does this for you), the only thing you need to do to launch the specification editor tool is this from the command line:

|> dotnet storyteller

F5 Debugging

Debugging complicated Storyteller specifications has been its Achilles’ heel from the very beginning. You can always attach a debugger to a running Storyteller process, but that’s clumsy (quicker in Rider than VS.Net, but still). As a cheap but effective improvement in v5, you can run a single specification from the command line with this signature:

|> dotnet run -- run "Suite1 / ChildSuite1 / Specification Name"

This is admittedly pretty ugly, but remember that you can tell either Rider or VS.Net to pass arguments to your console application when you press F5 to run an application in debug mode. I utilize this quite a bit in Jasper development to troubleshoot individual specifications. Here’s what the configuration looks like for this in Rider:


[Screenshot: Rider run configuration with “Program arguments” set to run a single specification]

See the “Program arguments” specifically. Once the path to the specification is configured, I can just hit F5 and jump right into a debugging session running just that specification.

We looked pretty hard at supporting the dotnet test tooling so you could run Storyteller specifications from either Visual Studio.Net’s or Rider/ReSharper’s test runners, but all I could think about after trying to reverse engineer xUnit’s tooling around that was a certain Monty Python scene.

Introducing BlueMilk: StructureMap’s Replacement & Jasper’s Special Sauce

BlueMilk is the codename for a new project that’s an outgrowth of our new Jasper framework. Specifically, BlueMilk is extracting the runtime code generation and compilation “Special Sauce” support code that’s in Jasper now into a stand-alone library. Building upon the runtime code generation, the logical next step was to make BlueMilk the intended successor to the venerable StructureMap project: a fast, minimal IoC container on its own that also supports inlining the service activation code into Jasper’s message and HTTP request handlers.

I think these are the key points for BlueMilk:

  1. Support the essential functionality and configuration API of StructureMap to be an offramp for folks invested in StructureMap that want to move to a faster option in their Netstandard2 applications
  2. Align much closer with the ASP.Net team’s DI compliance behavior. In some cases like object lifecycles, this is a breaking change from StructureMap’s traditional behavior, and I don’t entirely agree with their choices, but .Net is their world and all us scrappy community OSS authors are just living in it.
  3. Easy integration into ASP.Net Core applications by directly supporting their abstractions (IServiceCollection, IServiceProvider, ServiceDescriptor, IServiceScope, etc.) out of the box.
  4. Trade in some of the runtime flexibility that StructureMap had in favor of better performance (and fewer ways for users to get themselves in a tangle)
  5. Expose the runtime code generation and compilation model (originally built for Marten, but we took it out later) in a separate library because a few folks have expressed some interest in having just that without using Jasper

There’s a preliminary Nuget up this morning (0.1.0) that supports some of StructureMap’s behavior and all of the ASP.Net Core compliance. You can use a container like this:

// Idiomatic StructureMap
var container = new Container(_ =>
{
    _.For<IWidget>().Use<AWidget>().Named("A");
    
    // StructureMap's old type scanning
    _.Scan(s =>
    {
        s.TheCallingAssembly();
        s.WithDefaultConventions();
    });
});

var widget = container.GetInstance<IWidget>();

// ASP.Net Core DI compatible
IServiceProvider container2 = new Container(_ =>
{
    _.AddTransient<IWidget, AWidget>();
    _.AddSingleton(new MoneyWidget());
});

var widget2 = container2.GetService<IWidget>();


My Thoughts on Project Scope

I only started working on BlueMilk by itself over the holidays, so it’s not like anything is truly set in concrete, but this list is what I think the scope would be. My philosophy here is to jettison many of the features in StructureMap that cause internal complexity, performance issues, or generate loads of user questions and edge case bugs.

Core Functionality

  1. All ASP.Net Core DI compliance — lifecycle management (note that it’s different from StructureMap’s lifecycle definitions), object disposal, basic service resolution, open generic support, dealing with enumerable types
  2. StructureMap’s basic support for service location
  3. Nested Containers (scoped container)
  4. Type Scanning from StructureMap
  5. Service resolution by name
  6. Lazy & Func<T> resolution
  7. WhatDoIHave() and other diagnostics — no IoC or any other kind of framework author should release a tool without something like this for the sake of their own sanity (see the example after this list)
  8. Auto-find missing registrations — one of my biggest gripes about the built-in container
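
For anyone who hasn’t seen it, here’s a tiny example using StructureMap’s existing WhatDoIHave() API, which is the model BlueMilk intends to carry forward:

using System;
using StructureMap;

public interface IWidget { }
public class AWidget : IWidget { }

public class Program
{
    public static void Main()
    {
        var container = new Container(_ =>
        {
            _.For<IWidget>().Use<AWidget>();
        });

        // Writes a textual report of every registration: the service type,
        // its lifecycle, and the concrete type that will be resolved
        Console.WriteLine(container.WhatDoIHave());
    }
}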

Later

  1. Inline dependencies
  2. AutoFactory support — I think this could work out very well with the code generation model
  3. Construction Policies
  4. Some of StructureMap’s attribute configuration

Leaving Behind unless someone else really wants to build *and* help support it

  • Interception — Maybe. I’m not super excited about supporting it
  • Child containers and profiles. Utter nightmare to support. Edge case hell. Crazy amount of complexity internally. The only way we use them at work is for per-test isolation, and we can live without them in that case
  • Changing configuration at runtime (Container.Configure()).
  • Passing arguments at runtime. One of the biggest sources of heartburn for me supporting StructureMap. I think the improved autofactory support planned for BlueMilk could be a far better alternative anyway.

Why do this?

One of my fervent goals with Jasper from the very beginning was to maximize performance to the point where its throughput was barely distinguishable from that of laboriously hand-written, highly optimized bespoke code. I wanted users to have all the productivity benefits of a good framework that takes care of common infrastructure needs without sacrificing performance. I know that some folks will disagree, but I still think there are ample reasons to use an IoC container to handle quite a bit of the object composition, service activation, object scoping, and service cleanup.

If you allow that assumption of IoC usage, that left me with a couple of options and thoughts:

  • I think it’s hugely disadvantageous for .Net frameworks to have to support multiple IoC containers. My experience is that basically every single framework abstraction I have ever seen for an IoC container has been problematic. If you are going to support multiple IoC containers in your application framework, my experience from FubuMVC and from watching the unfolding consternation over ASP.Net Core’s DI compliance is that you should restrict your framework to making only a handful of assumptions about IoC container behavior.
  • I could just use my own StructureMap container because I understand it front to back and it fits the way that I personally work and all the .Net developers in my shop know it. The only problem there is that StructureMap has fallen behind in terms of performance. I think I have a decent handle on what it would take to reverse that with a potential 5.0, but I’m just not up for doing the work, I’m exhausted keeping up with user questions, and I really want to get out of supporting StructureMap.
  • I tried a couple times to just use the new built-in ASP.Net Core DI tool, but it’s extremely limited and I was getting frustrated with how many things were missing and how much more hand holding it took to be usable compared to StructureMap.

If you saw my Jasper’s Special Sauce post last week, you know that we are already using the service registration information to opt into inlining all the object construction and disposal directly into the generated message handlers whenever possible. The code that did that was effectively the beginning of a real IoC container, so it wasn’t that big of a jump to pulling all of that code into its own library and building it into the IoC tool that I wanted a theoretical StructureMap 5.0 to be.


Jasper’s Roslyn-Powered “Special Sauce”

As a follow on to my introductory post on the new OSS Jasper messaging framework, here’s an explanation of what’s different about Jasper’s internal approach compared to existing .Net frameworks, as well as an argument for why I think it’s a better way.

This is an admittedly long blog post with a lot of background contextual information first. If you’re only here for the “Jasper does crazy stuff with Roslyn” part of it, just skip to the “Special Sauce” section.

In this post, I’m talking about:

  • What makes a tool a framework vs. a library
  • A discussion about the runtime architecture of current .Net frameworks
  • Design challenges for middleware strategies
  • Jasper’s “Special Sauce” approach and why I think it’s a new, better way

What do you mean when you say “Framework?”

First off, Jasper is unmistakably and unapologetically a framework and not just a library. What’s the difference? A library is something your code uses to perform tasks, with the assumption being that your application code is in control and the library code is passive. The Marten persistence tool I work on and logging libraries like NLog or log4net are all examples of library projects. Frameworks, on the other hand, follow the Hollywood Principle to handle common workflow events and deal with technical infrastructure while calling your code just for the actual application logic. Besides Jasper, ASP.Net MVC, NancyFx, or MediatR in the .Net world or Angular.js or Ember.js in the JS world are examples of frameworks (I personally don’t think it’s a clear cut distinction for React whether it’s a framework or a library).

In the case of Jasper, in the process of handling an incoming message it:

  • Logs the arrival of the message
  • Determines from the raw message byte[] array what .Net message type it represents and what deserialization strategy it should use
  • Deserializes the raw data into an actual .Net object that the application code expects
  • Calls any configured middleware for that message type, before finally passing the message to all of the known application handlers for that message type
  • Disposes any IDisposable objects that were created during the message processing
  • Finally, logs the completion, successful or not, and sends out any outgoing messages that “cascaded” from handling the original message if it was successful
  • If a message fails, applies error handling policies to either retry the message, put it back on the queue, stick it into the dead letter queue, or retry it after a delay

Using Jasper as your messaging framework allows you to focus on just the work that’s specific to your application and let Jasper handle the tedious chores.

The State of the Art in .Net

When a messaging framework needs to process an incoming command or an HTTP framework handles an HTTP request, the common steps are something like this:

  1. Route the incoming data to the proper HTTP route or message type handler
  2. Translate the incoming data to the form that the application handler needs to consume (a message type or a DTO type passed into an HTTP request handler)
  3. Build or find the application specific message or HTTP handlers, along with any other related services
  4. Call the application specific handlers
  5. After the message or HTTP request is complete, do any necessary clean up by disposing services that were created for the request. Stuff like closing and disposing open connections to the database used in handling the message or request.

For right now, let’s focus on 3.) and 4.) up above. We need some way to both discover message handlers or HTTP controllers in the application code and then to execute those handlers or controllers at runtime. I’ve seen and used two different general mechanisms to tackle these issues. The first is to require developers using the framework to write all their message handlers against some kind of standard interface like this one below:

public interface IHandler<T>
{
    Task Process(T message, Context context);
}

It’s simple to understand, and it makes a lot of the underlying mechanics in your framework easier. This strategy is also easy to combine with an IoC container to handle handler discovery, with something like StructureMap’s ConnectImplementationsToTypesClosing type scanning convention. Likewise, using an IoC tool makes object creation, scoping, and cleanup relatively simple. It does have the downside of being a little bit intrusive into application code, and it’s not quite as flexible as the next alternative.
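
For reference, that StructureMap convention looks roughly like this, assuming the IHandler<T> interface above; this sketch shows just the registration side:

var container = new Container(_ =>
{
    _.Scan(s =>
    {
        s.TheCallingAssembly();

        // Registers every concrete class closing IHandler<T> against the
        // matching closed interface, so the framework can later resolve
        // container.GetInstance<IHandler<SomeMessage>>()
        s.ConnectImplementationsToTypesClosing(typeof(IHandler<>));
    });
});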

The other way to call application code is to use some sort of Reflection to dynamically call handlers. This has the advantage of greater flexibility in how the application specific handlers can be created, and it possibly reduces coupling from the application specific code to the framework infrastructure. ASP.Net MVC is an example of this approach, as was the FubuMVC project I led earlier. The FubuMVC “hello world” endpoint that would just return text as “text/plain” when the “/” route is called looked like this:

    public class HomeEndpoint
    {
        public string Index()
        {
            return "Hello, world.";
        }
    }

The advantage here is that the code can be subjectively “cleaner,” by which I mean an absence of framework artifacts like marker interfaces, mandatory base classes, fluent interfaces, or mandatory attributes. This approach by necessity depends on naming conventions that are frequently derided as “magic” by some folks and, when used well, praised as “clean” code by people like me. There’s room in the world for both types of folks.

Another significant advantage is being able to be flexible on the method signatures and even use techniques like “method injection” as ASP.Net Core MVC supports with its [FromServices] attribute in this sample below:

public IActionResult About([FromServices] IDateTime dateTime)
{
    ViewData["Message"] = "Currently on the server the time is " + dateTime.Now;

    return View();
}

From a technical perspective, the challenge is in how you invoke the application handlers if there is no mandatory interface or base class. Traditionally, your options are:

  • Just use Reflection. It’s slow, but it’s the simplest to use
  • Emit dynamic assemblies with IL at runtime. The early versions of StructureMap used that, and I thought it was excruciating. It’s laborious to code and not very approachable for other contributors to your project. I think you get serious neckbeard points for using this successfully.
  • Use .Net Expressions to dynamically generate and compile Lambdas. StructureMap does this, and I’m guessing that most other .Net IoC tools do as well; so does AutoMapper. This model is more approachable than IL, but it’s still not something that most .Net developers can — or would want to — use productively. It also creates horrendously awful stack traces in exception messages. If you do use this technique, definitely check out FastExpressionCompiler. (A small sketch of this approach follows this list.)
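
As a concrete illustration of the Expression technique, here’s a small generic sketch, not taken from any of the tools mentioned above, that compiles a reusable delegate for invoking an arbitrary handler method so that later calls skip Reflection entirely:

using System;
using System.Linq.Expressions;
using System.Reflection;

public static class HandlerCompiler
{
    // Compiles a MethodInfo into a Func<object, object[], object> once,
    // so repeated invocations avoid MethodInfo.Invoke() altogether
    public static Func<object, object[], object> Compile(MethodInfo method)
    {
        var target = Expression.Parameter(typeof(object), "target");
        var args = Expression.Parameter(typeof(object[]), "args");

        // Cast the target (null for static methods) and each argument
        // to the exact types the method expects
        var instance = method.IsStatic
            ? null
            : Expression.Convert(target, method.DeclaringType);

        var parameters = method.GetParameters();
        var typedArgs = new Expression[parameters.Length];
        for (var i = 0; i < parameters.Length; i++)
        {
            typedArgs[i] = Expression.Convert(
                Expression.ArrayIndex(args, Expression.Constant(i)),
                parameters[i].ParameterType);
        }

        var call = Expression.Call(instance, method, typedArgs);

        // Box the return value, or return null for void methods
        var body = method.ReturnType == typeof(void)
            ? (Expression)Expression.Block(call, Expression.Constant(null, typeof(object)))
            : Expression.Convert(call, typeof(object));

        return Expression.Lambda<Func<object, object[], object>>(body, target, args)
            .Compile();
    }
}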

What about middleware?

By this point, any .Net framework worth its salt is going to support what I used to call the Russian Doll Model of nested handlers with something like ASP.Net Core’s (or OWIN’s) concept of middleware. The purpose is to allow developers to move common, cross-cutting concerns like exception handling, validation, or security to middleware and out of each individual message handler or HTTP controller method. Used judiciously, this can make application code simpler and more robust by simply having fewer things for developers to remember to do. Used badly — and trust me, I’ve seen it used horrendously with my very own FubuMVC “behaviors” model — copious usage of middleware will make your application sluggish, potentially hard to understand, and devilishly hard to debug.

One of the common usages of middleware is to manage transactional boundaries between one or more nested handlers like my shop did several years ago in this post with FubuMVC and RavenDb. A sequence diagram of a typical framework’s internal runtime workflow would look something like this:

[Sequence diagram: transactional middleware wrapping the unit of work around nested handlers]

Most .Net frameworks that I’m aware of will use a scope (nested container in StructureMap parlance, IServiceScope in ASP.Net Core, LifetimeScope in Autofac) per message or HTTP request. Using a container scope allows the framework to easily control the scoping of services like a Marten IDocumentSession or an EF DbContext that should be shared by all other services participating in the logical transaction. The container scope usage also allows the framework to clean up resources by calling Dispose() on all the IDisposable objects created during the processing of the message.
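
As a rough sketch of that scope-per-message pattern, assuming the Microsoft.Extensions.DependencyInjection abstractions and a hypothetical IMessageHandler interface, a framework’s pipeline might do something like this:

    using System.Threading.Tasks;
    using Microsoft.Extensions.DependencyInjection;

    // Hypothetical handler interface for illustration
    public interface IMessageHandler
    {
        Task HandleAsync(object message);
    }

    public class ScopedMessagePipeline
    {
        private readonly IServiceScopeFactory _scopeFactory;

        public ScopedMessagePipeline(IServiceScopeFactory scopeFactory)
        {
            _scopeFactory = scopeFactory;
        }

        public async Task ProcessAsync(object message)
        {
            // One container scope per message, so scoped services like a
            // Marten IDocumentSession or an EF DbContext are shared by
            // everything participating in the logical transaction
            using (var scope = _scopeFactory.CreateScope())
            {
                var handler = scope.ServiceProvider.GetRequiredService<IMessageHandler>();
                await handler.HandleAsync(message);
            } // disposing the scope calls Dispose() on the IDisposable services it created
        }
    }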

As a framework author and as a user of a framework, you’ve got a couple challenges with middleware:

  • Understanding what middleware is applicable to each message type or HTTP route. Sooner or later, it’s going to be necessary to visualize the layers of middleware to debug some problem or other
  • Properly scoping services that are shared between different middleware or the message handlers. ASP.Net Core does this by sharing the container scope as part of the HttpContext and doing a lot of service location at different places in the runtime. FubuMVC and some versions of NServiceBus build the middleware objects, and the handlers wrapped inside of them, per HTTP request or message being processed. I can tell you from experience helping folks use StructureMap to write ASP.Net Core middleware that ASP.Net Core’s approach can be problematic. The FubuMVC approach, for its part, was sluggish and did way too many object allocations.
  • The stack traces can be epically bad and noisy, making developer troubleshooting more difficult than it should be
  • Hopefully, the container scope and handler object creation isn’t that expensive, but they’re still object allocations that could potentially be eliminated

Jasper’s Special Sauce

Jasper was largely conceived as a next generation replacement for the earlier FubuMVC framework, and it takes a lot of lessons and biases, both positive and negative, from our usage of FubuMVC over the past decade. Roughly speaking, we wanted to support a superset of the message handler signatures (and HTTP endpoint signatures too, but that’s a subject for another day), with the addition of method injection of dependencies and support for static methods or classes for perfectly stateless handlers. We also wanted to keep roughly the same kind of compose-ability with per message type middleware that we had with FubuMVC, but this time provide much more visibility into how the middleware is composed at runtime for easier debugging. At the same time, we knew that FubuMVC suffered from performance problems caused by the sheer number of objects it created and destroyed in its runtime pipeline. For my personal sanity, I knew that we needed to make the exception stack traces coming out of Jasper have a lot less noise.

Very early on, we theorized that we could heavily use the forthcoming runtime code generation and compilation capability in Roslyn to write the tightest possible “glue” code at runtime that handles the intermediation between the Jasper framework, the associated middleware, and the application handlers.
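
For context, the raw mechanics of compiling C# at runtime with Roslyn look something like the minimal sketch below, using the Microsoft.CodeAnalysis.CSharp package. This is illustrative only, not Jasper’s actual internals, which layer code generation and caching on top of this:

    using System;
    using System.IO;
    using System.Linq;
    using System.Reflection;
    using Microsoft.CodeAnalysis;
    using Microsoft.CodeAnalysis.CSharp;

    public static class RuntimeCompiler
    {
        // Compiles a string of C# source into an in-memory assembly
        public static Assembly Compile(string sourceCode, params Assembly[] references)
        {
            var syntaxTree = CSharpSyntaxTree.ParseText(sourceCode);

            var compilation = CSharpCompilation.Create(
                assemblyName: Path.GetRandomFileName(),
                syntaxTrees: new[] { syntaxTree },
                references: references
                    .Select(a => MetadataReference.CreateFromFile(a.Location)),
                options: new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

            using (var stream = new MemoryStream())
            {
                var result = compilation.Emit(stream);
                if (!result.Success)
                {
                    throw new InvalidOperationException("Compilation failed: " +
                        string.Join(Environment.NewLine, result.Diagnostics));
                }

                return Assembly.Load(stream.ToArray());
            }
        }
    }

Note that a real implementation also has to supply metadata references for the runtime assemblies (System.Runtime and friends), which is part of what makes this finicky in practice.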

The actual message handlers activated and executed by Jasper’s internal pipeline all inherit from this base class:

    public abstract class MessageHandler
    {
        // Error handling policies for the message type
        // and other configuration kind of things that
        // the Handle method might need
        public HandlerChain Chain { get; set; }

        // This method actually processes the incoming Envelope
        public abstract Task Handle(IInvocationContext input);
    }

Removing the actual middleware noise cleaned up the exception stack traces dramatically. Surfacing the generated code up to users serves as a cheap, but effective visualization of what’s going on internally for developers. Finally, we were sure that this strategy would greatly improve our performance by drastically reducing memory allocations and method delegations in our internals. And wouldn’t you know it, the NServiceBus team, who had earlier borrowed FubuMVC’s “Behavior” model, made the same kind of change to generated code (but with Expressions) and reported an absurd improvement in their measured performance. Jasper uses a similar approach to NServiceBus 6, but I think we go much farther, and that our code generation model will be much more approachable to outsiders than having to deal with Expressions.

In the next section I’ll show what the code generation does, and talk about how to write the fastest possible code with Jasper.

Sample Messaging Scenario

To see the code generation in action, let’s say that we need to handle a CreateItemCommand message, and save a corresponding ItemCreatedEvent document using Marten as our backing store.

The command and event objects look like this:

    public class CreateItemCommand
    {
        public string Name { get; set; }
    }

    public class ItemCreatedEvent
    {
        public Item Item { get; set; }
    }

In its crudest form, the message handler in Jasper could be a traditional class instance that takes all of its dependencies in via constructor injection:

    public class CreateItemHandler
    {
        private readonly IDocumentStore _store;

        // store is the Marten IDocumentStore
        public CreateItemHandler(IDocumentStore store)
        {
            _store = store;
        }

        public async Task Handle(CreateItemCommand command)
        {
            using (var session = _store.LightweightSession())
            {
                var item = new Item {Name = command.Name};
                session.Store(item);
                await session.SaveChangesAsync();
            }
        }
    }

At runtime, Jasper will generate this code for the actual MessageHandler:

    public class ShowHandler_CreateItemCommand : Jasper.Bus.Model.MessageHandler
    {
        private readonly IDocumentStore _documentStore;

        public ShowHandler_CreateItemCommand(IDocumentStore documentStore)
        {
            _documentStore = documentStore;
        }


        public override Task Handle(Jasper.Bus.Runtime.Invocation.IInvocationContext context)
        {
            var createItemHandler 
                = new ShowHandler.CreateItemHandler(_documentStore);
            var createItemCommand = (ShowHandler.CreateItemCommand)context.Envelope.Message;
            return createItemHandler.Handle(createItemCommand);
        }

    }

Notice anything missing here from the “typical” framework pipeline I talked about in previous sections? That missing thing is any trace whatsoever of an IoC container at runtime. It turns out that the very fastest IoC container for object resolution is no container at all. With that in mind, any time that Jasper can figure out from the underlying service registrations how to do all the object construction and disposal per message inside of the generated Handle() method, it will not use the IoC container whatsoever. While there are plenty of cases that Jasper can’t quite handle yet and has to resort to generating code that does service location, we’re working very hard to close those gaps.

There are a couple of other things to note in the code up above:

  • The MessageHandler objects are compiled and created at runtime, and they are singleton scoped inside the application
  • The IDocumentStore dependency is known to be a singleton in the service registrations, so it is injected into the MessageHandler during its construction so there doesn’t have to be any kind of service lookup for that at runtime

Jasper can also support method injection of service dependencies in the message handler actions, so we could just pull in IDocumentStore as a method argument and simplify the code a little bit. Once you do that though, the CreateItemHandler class is entirely stateless, so let’s go a little bit farther and just make CreateItemHandler a static class like so:

    public static class CreateItemHandler
    {
        public static async Task Handle(CreateItemCommand command, IDocumentStore store)
        {
            using (var session = store.LightweightSession())
            {
                var item = new Item {Name = command.Name};
                session.Store(item);
                await session.SaveChangesAsync();
            }
        }
    }

Okay, the “static” keyword is a little bit of noise, but we got rid of the private field for the store and the constructor, so it’s a bit tighter. Using that handler above will result in Jasper generating this code:

    public class ShowHandler_CreateItemCommand : Jasper.Bus.Model.MessageHandler
    {
        private readonly IDocumentStore _documentStore;

        public ShowHandler_CreateItemCommand(IDocumentStore documentStore)
        {
            _documentStore = documentStore;
        }


        public override Task Handle(Jasper.Bus.Runtime.Invocation.IInvocationContext context)
        {
            var createItemCommand = (ShowHandler.CreateItemCommand)context.Envelope.Message;
            return ShowHandler.CreateItemHandler.Handle(createItemCommand, _documentStore);
        }

    }

The key thing up above is that switching to a static method in our message handler means one less object allocation per message for the handler objects.

Finally, the one bit of middleware that we built to prove out this whole strategy just happens to be some Marten transactional support. There are other ways to apply the middleware, but for right now I’ll just decorate the method with a [MartenTransaction] attribute from the Jasper.Marten library to wrap that middleware handling around the handler. For fun, let’s even say that the act of handling the command emits a new event message that will be “cascaded” as an outgoing message when the original message has been completely processed. To do that, just return the event from your handler method. If the middleware is handling the call to IDocumentSession.SaveChangesAsync()/RollbackAsync() for me, I can simplify the message handler code in my application down even further to this:

    public class CreateItemHandler
    {
        [MartenTransaction]
        public static ItemCreatedEvent Handle(
            CreateItemCommand command, 
            IDocumentSession session)
        {
            var item = new Item {Name = command.Name};
            session.Store(item);

            return new ItemCreatedEvent{Item = item};
        }
    }

If you notice, we’re able to use a synchronous method signature here instead of being forced to repetitively type “return Task.CompletedTask;” every which way, because Jasper is smart enough to handle those mechanics for us in its code generation. It’s even smart enough to (imperfectly) switch between a method with an “async Task” signature and a method that returns a Task from its last line of code or “Task.CompletedTask.”

The handler above, with the Marten transaction middleware wrapped in, gives us this compiled code:

    public class ShowHandler_CreateItemCommand : Jasper.Bus.Model.MessageHandler
    {
        private readonly IDocumentStore _documentStore;

        public ShowHandler_CreateItemCommand(IDocumentStore documentStore)
        {
            _documentStore = documentStore;
        }


        public override async Task Handle(Jasper.Bus.Runtime.Invocation.IInvocationContext context)
        {
            var createItemCommand = (ShowHandler.CreateItemCommand)context.Envelope.Message;
            using (var documentSession = _documentStore.OpenSession())
            {
                var outgoing1 = ShowHandler.CreateItemHandler.Handle(createItemCommand, documentSession);
                context.EnqueueCascading(outgoing1);
                await documentSession.SaveChangesAsync();
            }

        }

    }

So, yeah, there’s some magic going on with conventions that some folks absolutely hate, but if it’s easy to get at the generated code internally and you can just read and even debug into that, are conventions really that scary anymore?

Our hope is that the code generation model leads to applications written with Jasper being just as performant as purely bespoke code, but with far less effort on the developer’s part. We think that our runtime codegen model gives Jasper the best of all possible worlds by allowing for very clean, flexible code without sacrificing anything in terms of performance.

It’s a ways away, but I’m well underway in the process of ripping the code generation support out into its own library under the BlueMilk moniker and growing that into a streamlined, performant replacement for StructureMap as well. I’ll blog next week about the vision and maybe the timeline for BlueMilk by itself.