BlueMilk 0.8: It’s fast, runs MVC apps, and probably needs a new name

EDIT 4:30 PM CST on 2/21: Egg on my face. How’s this for the new world order of .Net Core? It works perfectly on OS X, but it blows up on the `RegistryPolicyResolver` class only on Windows. I’ll get a 0.8.1 up soon to fix whatever is different.

I just published the latest BlueMilk v0.8 to Nuget with quite a bit of performance optimization, some additional StructureMap functionality added in, and the ability to handle every kind of service registration that a basic MVC Core application will throw at an IoC container.

BlueMilk is the current codename for a new IoC tool that partially originated in Jasper. You can read about the goals and motivation for the project in Introducing BlueMilk: StructureMap’s Replacement & Jasper’s Special Sauce.

One of my colleagues, a Star Wars nerd who gets the reference, made the observation yesterday that the name “BlueMilk” is off putting and we probably need to change it (plus it feels awkward to me to say out loud). Other than Marten and BlueMilk, all the projects in the JasperFx organization are named after little towns or landmarks around my hometown. Once upon a time, I parked some of the code that’s not part of BlueMilk in another repository named “Lamar” that fits that naming scheme; plus my wife is doing a master’s program at Lamar University, I have a former coworker who played baseball there, and it’s also the name of a Texas revolutionary hero. If nobody has a better idea, I’ll probably rename BlueMilk to “Lamar.” My other idea was “Corsair,” but that’s really just too cool a name for yet another IoC tool.

Usage in MVC Core

I blogged last week that BlueMilk is Ready for Early Adopters. I was wrong of course, but thank you to Mark Warpool and others for actually trying to use it and giving me some important feedback. The big problems with 0.7 were that the code generation model BlueMilk uses internally couldn't handle internal types and didn't allow for using generic types as constructor parameters. Wouldn't you know it, ASP.Net Core MVC has quite a bit of both of those usages in its service registrations, and BlueMilk was falling flat on its face in an MVC Core application (correction: "works on my box," but fails on a couple of registrations in AppVeyor even though it's the same version of the .Net SDK. I'm still saying it's good to go, at least until someone proves it's a no go).

After a couple of fixes (one big and one small), there's now a test in BlueMilk that successfully builds every possible service registration from a basic MVC Core application. For those of you following along at home, I had to revert to the trick of compiling a dynamic Expression to a Func<> in order to slide around the non-public types and call non-public constructors.
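For illustration, here's a minimal sketch of that trick for a parameterless constructor. The names are mine and BlueMilk's actual resolvers also have to handle constructor arguments, but the core move is the same: an Expression tree compiled to a delegate can call constructors that generated C# source could not:

using System;
using System.Linq.Expressions;
using System.Reflection;

public static class ConstructorCompiler
{
    // Builds a Func<object> that calls the (possibly non-public)
    // parameterless constructor of the given type. Compiled Expression
    // trees bypass the visibility rules that emitted source code trips over.
    public static Func<object> ForDefaultConstructor(Type concreteType)
    {
        ConstructorInfo ctor = concreteType.GetConstructor(
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,
            binder: null, types: Type.EmptyTypes, modifiers: null);

        if (ctor == null)
            throw new InvalidOperationException(
                $"No parameterless constructor on {concreteType.FullName}");

        var newUp = Expression.New(ctor);
        var castToObject = Expression.Convert(newUp, typeof(object));

        return Expression.Lambda<Func<object>>(castToObject).Compile();
    }
}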

Performance

A caveat here: it’s just not terribly likely that your IoC tool of choice is the performance bottleneck in your system.

First off, Maksim Volkau has built some seriously cool stuff, and BlueMilk got quite a bit of the performance boost I’m talking about here from using his ImTools library (both Marten and StructureMap use FastExpressionCompiler as well).

One of my coworkers asked how BlueMilk compared to StructureMap in terms of performance, so I threw together some benchmarks where I was able to show BlueMilk being over 5X faster than StructureMap over a range of potential usages. I made the mistake of tweeting about that yesterday, and Eric Smith asked me how BlueMilk compared to the built in DI container inside of ASP.Net Core. After adding a comparison to the built in container to my BenchmarkDotNet metrics, I could see that BlueMilk lagged a bit (~30% slower overall). Several optimizations later, I can now say that BlueMilk is (barely) faster than the built in DI container and closing in on an order of magnitude faster than the latest StructureMap.

Using a barebones MVC Core application with logging added in as well, I built a series of metrics that just loops through the registered types and builds each possible type. It’s a lazy way of building up metrics, but it gave me a mix of registrations by type, by lambda, and by pre-built object, along with some deeper object graphs. It’s probably a bit bogus because this isn’t the way an application will use the IoC tool at runtime, and it may weigh more heavily on usages that don’t actually happen.
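To give a rough idea of the harness, here's a hedged sketch of what that kind of "AllTypes" benchmark looks like with BenchmarkDotNet against the built in container. The real measurements also run BlueMilk and StructureMap and feed in the full MVC Core registration list; the registrations below are just a stand-in:

using System;
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using Microsoft.Extensions.DependencyInjection;

public class AllTypesBenchmark
{
    private IServiceProvider _provider;
    private Type[] _types;

    [GlobalSetup]
    public void Setup()
    {
        // Stand-in registrations; the real harness uses every registration
        // from a barebones MVC Core application with logging added
        var services = new ServiceCollection();
        services.AddLogging();

        _types = services
            .Select(descriptor => descriptor.ServiceType)
            .Where(type => !type.IsGenericTypeDefinition) // skip open generics
            .Distinct()
            .ToArray();

        _provider = services.BuildServiceProvider();
    }

    [Benchmark]
    public void AllTypes()
    {
        // Resolve every registered service type in one pass
        foreach (var type in _types)
        {
            _provider.GetService(type);
        }
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<AllTypesBenchmark>();
}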

That being said, here’s the overall metrics on just creating every possible registered type in that minimal MVC Core application:

BenchmarkDotNet=v0.10.12, OS=macOS 10.12.6 (16G1212) [Darwin 16.7.0]
Intel Core i7-6920HQ CPU 2.90GHz (Skylake), 1 CPU, 8 logical cores and 4 physical cores
.NET Core SDK=2.1.4
  [Host]     : .NET Core 2.0.5 (Framework 4.6.0.0), 64bit RyuJIT
  DefaultJob : .NET Core 2.0.5 (Framework 4.6.0.0), 64bit RyuJIT


   Method | ProviderName |      Mean |     Error |    StdDev |
--------- |------------- |----------:|----------:|----------:|
 AllTypes |   AspNetCore |  73.98 us |  1.444 us |  1.976 us |
 AllTypes |     BlueMilk |  70.92 us |  1.408 us |  2.392 us |
 AllTypes | StructureMap | 646.28 us | 12.856 us | 27.398 us |

Getting much more specific, here are some finer grained metrics with an explanation of the different measurements below:

BenchmarkDotNet=v0.10.12, OS=macOS 10.12.6 (16G1212) [Darwin 16.7.0]
Intel Core i7-6920HQ CPU 2.90GHz (Skylake), 1 CPU, 8 logical cores and 4 physical cores
.NET Core SDK=2.1.4
  [Host]     : .NET Core 2.0.5 (Framework 4.6.0.0), 64bit RyuJIT
  DefaultJob : .NET Core 2.0.5 (Framework 4.6.0.0), 64bit RyuJIT


      Method | ProviderName |         Mean |        Error |        StdDev |       Median |
------------ |------------- |-------------:|-------------:|--------------:|-------------:|
 CreateScope |   AspNetCore |     429.0 ns |     8.288 ns |      7.347 ns |     428.2 ns |
     Lambdas |   AspNetCore |   1,784.4 ns |    25.886 ns |     22.948 ns |   1,777.8 ns |
   Internals |   AspNetCore |     914.2 ns |    17.575 ns |     15.580 ns |     912.6 ns |
     Objects |   AspNetCore |     810.2 ns |     7.723 ns |      6.449 ns |     808.7 ns |
  Singletons |   AspNetCore |  18,428.6 ns |   203.784 ns |    159.101 ns |  18,441.3 ns |
       Scope |   AspNetCore |     556.7 ns |     7.823 ns |      7.317 ns |     555.9 ns |
  Transients |   AspNetCore |  41,882.1 ns |   391.872 ns |    327.231 ns |  41,787.8 ns |
 CreateScope |     BlueMilk |     110.8 ns |     2.205 ns |      2.944 ns |     111.4 ns |
     Lambdas |     BlueMilk |   2,138.1 ns |    27.465 ns |     25.691 ns |   2,140.5 ns |
   Internals |     BlueMilk |     332.2 ns |     3.926 ns |      3.278 ns |     331.4 ns |
     Objects |     BlueMilk |     586.9 ns |    17.605 ns |     51.633 ns |     575.4 ns |
  Singletons |     BlueMilk |   9,852.8 ns |   196.721 ns |    548.380 ns |   9,780.1 ns |
       Scope |     BlueMilk |     330.8 ns |     5.781 ns |      4.828 ns |     332.1 ns |
  Transients |     BlueMilk |  54,439.2 ns | 1,083.872 ns |  2,967.082 ns |  53,801.7 ns |
 CreateScope | StructureMap |  16,781.0 ns |   334.284 ns |    948.307 ns |  16,584.2 ns |
     Lambdas | StructureMap |  12,329.5 ns |   244.697 ns |    686.155 ns |  12,121.9 ns |
   Internals | StructureMap |  10,585.0 ns |   209.617 ns |    393.712 ns |  10,519.9 ns |
     Objects | StructureMap |  17,739.9 ns |   430.679 ns |    560.005 ns |  17,606.7 ns |
  Singletons | StructureMap | 162,029.0 ns | 3,191.513 ns |  6,148.961 ns | 161,590.8 ns |
       Scope | StructureMap |   5,830.1 ns |   158.896 ns |    463.507 ns |   5,700.8 ns |
  Transients | StructureMap | 451,798.1 ns | 8,988.047 ns | 21,707.143 ns | 448,860.3 ns |

The metrics named in the first column are:

  1. “CreateScope” — measures how long it takes to create a completely new container scope, as MVC Core and other frameworks do on each HTTP request
  2. “Lambdas” — resolving services that were registered with some kind of Func<IServiceProvider, object> factory
  3. “Internals” — resolving non-public types
  4. “Objects” — resolving services that were registered with a pre-built object
  5. “Singletons” — all singleton registrations of all kinds
  6. “Scope” — a bad name, but this covers all registrations with the “Scoped” lifetime
  7. “Transients” — all registrations with the “Transient” lifetime


There’s still some performance fat in BlueMilk’s code, but I’ve hit the point of diminishing returns for now and I’m staying put on performance.

New Functionality

BlueMilk v0.8 also adds back some old StructureMap behavior that was missing from earlier releases.

Roadmap

I’m probably done working on BlueMilk for now other than the inevitable bug reports. When I do come back to it (or someone else picks it up), the next version (v0.9) will hopefully have support for decorators and interception similar to StructureMap’s existing model. I’d hope to have a 1.0 version out sometime this summer or fall after it’s been in production somewhere for a while.


BlueMilk is Ready for Early Adopters

EDIT 2/14/2018: And this already brought out a bug if you have a type that would need a closed generic type as an argument to its constructor. 0.7.1 will follow very shortly on Nuget.


BlueMilk is the name of a new OSS Inversion of Control tool I’m building specifically for usage in Jasper applications, but also as a more performant replacement for StructureMap in Netstandard 2.0 applications going forward. To read more about what is genuinely unique about its internals and approach, see Jasper’s Roslyn-Powered “Special Sauce.”

I’m declaring BlueMilk 0.7 on Nuget right now as ready for enterprising, early adopter types to try out, either on its own or within an ASP.Net Core application. At this point it’s passing all the ASP.Net Core compliance tests with a couple of exceptions that I can’t imagine being important in many cases (like the order in which created objects are disposed, and a really strange way of ordering objects in a list when there are mixed open and closed generic type registrations). It’s also supporting quite a few StructureMap features that I missed while trying to work with the built in DI container.

As I said in the introductory post, you can use BlueMilk either as a drop in replacement for the built in ASP.Net Core DI tool or as a faster subset of StructureMap that still offers a much richer feature set than the built in container.

Current Feature Set

In most cases, the BlueMilk feature set and API are identical to StructureMap’s, so I’ll have to send you to the StructureMap documentation for more explanation.

Caveats

  • I’m wrestling with first usage warm up time, just due to how long it takes Roslyn to bootstrap itself on its very first usage. What I’ve come up with so far is to have the dynamic classes for services registered as singletons or scoped be built on the initial startup, while allowing any other resolvers to be built lazily the first time they are actually used. This is an ongoing struggle.
  • The lifecycle scoping is different from idiomatic StructureMap. I opted to give up and just use the ASP.Net team’s new definitions of what “transient” and “scoped” mean (see the sketch after this list).
  • There is a dependency for the moment on a library called Baseline, which is just my junk drawer of convenience extension methods (leftovers from FubuCore for anyone who used to follow FubuMVC). Before BlueMilk hits 1.0, I’ll internalize those extension methods somehow and eliminate that dependency.
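To make those lifetime semantics concrete, here's a small sketch using the ASP.Net Core DI abstractions directly (the example types are mine, not BlueMilk code):

using System;
using Microsoft.Extensions.DependencyInjection;

public class TransientThing { }
public class ScopedThing { }

public static class LifetimeDemo
{
    public static void Main()
    {
        var services = new ServiceCollection();
        services.AddTransient<TransientThing>();
        services.AddScoped<ScopedThing>();

        var provider = services.BuildServiceProvider();

        using (var scope = provider.CreateScope())
        {
            var sp = scope.ServiceProvider;

            // "Transient" builds a brand new object on every resolution
            Console.WriteLine(ReferenceEquals(
                sp.GetService<TransientThing>(),
                sp.GetService<TransientThing>())); // false

            // "Scoped" caches one instance per container scope
            Console.WriteLine(ReferenceEquals(
                sp.GetService<ScopedThing>(),
                sp.GetService<ScopedThing>())); // true
        }
    }
}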

Using within ASP.Net Core Applications

You’ll want to pull down the BlueMilk.Microsoft.DependencyInjection package to get the ASP.Net Core bootstrapping shim — and blame the ASP.Net team for the fugly naming convention.

In code, plugging in BlueMilk is done through the UseBlueMilk() extension method as shown below:

var builder = new WebHostBuilder();
builder
    .UseBlueMilk()
    .UseUrls("http://localhost:5002")
    .UseKestrel()
    .UseStartup<Startup>();

Pretty standard ASP.Net Core stuff. Using their magical conventions on the Startup class, you can do specific BlueMilk registrations using a ConfigureContainer(ServiceRegistry) method as shown below:

public class Startup
{
    public void ConfigureContainer(ServiceRegistry services)
    {
        // BlueMilk supports the ASP.Net Core DI
        // abstractions for registering services
        services.AddLogging();

        // And also supports quite a few of the old 
        // StructureMap features like type scanning
        services.Scan(x =>
        {
            x.AssemblyContainingType<SomeMarkerType>();
            x.WithDefaultConventions();
        });

    }

    // Other stuff we don't care about here
}

Durable Messaging in Jasper

My colleague Mike Schenk had quite a bit of input and contributions to this work. This continues a series of blog posts just trying to build up to the integration of durable messaging with ASP.Net Core:

  1. Jasper’s Configuration Story 
  2. Jasper’s Extension Model
  3. Integrating Marten into Jasper Applications
  4. Durable Messaging in Jasper (this one)
  5. Integrating Jasper into ASP.Net Core Applications
  6. Jasper’s HTTP Transport
  7. Jasper’s “Outbox” Support within ASP.Net Core Applications


Right now (0.5.0), Jasper offers two built in message transports using either raw TCP socket connections or HTTP with custom ASP.Net Core middleware. Either transport can be used in one of two ways:

  1. “Fire and Forget” — Fast, but not backed by any kind of durable message storage, so there’s no guaranteed delivery.
  2. “Store and Forward” — Slower, but makes damn sure that any message sent is successfully received by the downstream service even in the face of system failures.

Somewhere down the line we’ll support more transport options like RabbitMQ or Azure Service Bus, but for right now, one of the primary design goals of Jasper is to be able to effectively do reliable messaging with the infrastructure you already have. In this case, the “store” part of durable messaging is going to be the primary, backing database of the application. Our proposed architecture at work would then look something like this:

[Diagram: multiple nodes of each service behind a load balancer, with each service having its own dedicated database]

So potentially, we have multiple running instances (“nodes”) of each service behind some kind of load balancer (F5 in our shop), with each service having its own dedicated database.

For the moment, we’ve been concentrating on using Postgresql through Marten to prove out the durable messaging concept inside of Jasper, with Sql Server backing to follow shortly. To opt into that persistence, include the MartenBackedPersistence extension from the Jasper.Marten library like this:

public class PostgresBackedApp : JasperRegistry
{
    public PostgresBackedApp()
    {
        Settings.ConfigureMarten(_ =>
        {
            _.Connection("some connection string");
        });

        // Listen for messages durably at port 2301
        Transports.DurableListenerAt(2301);

        // This opts into Marten/Postgresql backed
        // message persistence with the retry and
        // message recovery agents
        Include<MartenBackedPersistence>();
    }
}

What this does is add some new database tables to your Marten configuration and directs Jasper to use Marten to persist incoming and outgoing messages before they are successfully processed or sent. If you end up looking through the code, it uses custom storage operations in Marten for better performance than the out of the box Marten document storage.

Before talking about all the ways that Jasper tries to make the durable messaging as reliable as possible by backstopping error conditions, let’s talk about what actually happens when you publish a message.

What Happens when You Send a Message?

When you send a message through the IServiceBus interface or through cascading messages, Jasper doesn’t just stop and send the actual message. If you’re publishing a message that is routed by a durable channel, calling this code:

public async Task SendPing(IServiceBus bus)
{
    // Publish a message
    await bus.Send(new PingMessage());
}

will result in an internal workflow shown in this somewhat simplified sequence diagram of the internals:

[Sequence diagram: handling a message with unit of work middleware]

The very first thing that happens is that each outgoing copy of the message is persisted to the durable storage, which is Postgresql in this post. Next, Jasper batches outgoing messages by outgoing Uri, in a way similar to the debounce operator in Rx. If you’re using either the built in TCP or HTTP transports, the next step is to send a batch of messages to the receiving application. The receiving application in turn first persists the incoming messages in its own persistence, then sends back an acknowledgement to the original sending application. Once the acknowledgement is received, the sending application deletes the successfully sent outgoing messages from its message persistence and goes on its merry way.

That’s the “store and forward” happy path. Now let’s talk about all the myriad ways things could go wrong and what Jasper tries to do to ensure that your messages get to where they are supposed to go.

Network Hiccups

How’s that for a scientific term? Sending message batches will occasionally fail due to the normal litany of temporary network issues. If an outgoing message batch fails, all the messages get their “sent attempts” count incremented in storage and they are added back into the local, outgoing sending agent queue to be tried again.

Circuit Breaker

Jasper’s sending agents implement a form of the Circuit Breaker pattern where a certain number of consecutive failures (it’s configurable) to send a message batch to any destination will cause Jasper to latch that sending agent. When the sending agent is latched, Jasper will not make any attempts to send messages to that destination. Instead, all outgoing messages to that destination will simply be persisted to the backing message persistence without any node ownership. The key point here is that Jasper won’t keep trying to send messages just to get runtime exceptions and it won’t allow the memory usage of your system to blow up from all the backed up, outgoing messages being in an in memory sending queue.

When the destination is known to be in a failure condition, Jasper will continue to poll the destination with a lightweight ping message just to ascertain if the destination is back up yet. When a successful ping is acknowledged by the destination, Jasper will unlatch the sending agent and begin sending outgoing messages.
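As a rough illustration of that latching behavior, here's a hypothetical sketch in the spirit of the pattern (my own names, not Jasper's actual internals):

using System;

public class SendingAgentBreaker
{
    private readonly int _failureThreshold;
    private int _consecutiveFailures;

    public SendingAgentBreaker(int failureThreshold = 3)
    {
        _failureThreshold = failureThreshold;
    }

    // When latched, outgoing messages should be persisted
    // instead of piling up in an in memory sending queue
    public bool IsLatched { get; private set; }

    public void MarkBatchFailure()
    {
        _consecutiveFailures++;
        if (_consecutiveFailures >= _failureThreshold) IsLatched = true;
    }

    public void MarkBatchSuccess()
    {
        _consecutiveFailures = 0;
    }

    // Called when the destination acknowledges a ping,
    // meaning it is back up and sending can resume
    public void Unlatch()
    {
        _consecutiveFailures = 0;
        IsLatched = false;
    }
}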

Resiliency and Node Failover

If you are using the Marten/Postgresql backed message persistence, your application has a constantly running message persistence agent (it’s actually called SchedulingAgent) that is polling for persisted messages that are either owned by no specific node or persisted messages that are owned by an inactive node.

To detect whether a node is active, we rely on each node holding a session level advisory lock in the underlying Postgresql database as long as it’s really active. Periodically, Jasper will run a query to move any messages owned by an inactive node to “any node” ownership where any running node can recover both the outgoing and incoming messages. This query detects inactive nodes simply by the absence of an active advisory lock for the node identity in the database.
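For the curious, holding a session level advisory lock from C# looks something like this with Npgsql (the node identity key here is my own stand-in, not Jasper's real schema):

using System;
using Npgsql;

// Minimal sketch: a node marks itself "active" by holding a session
// level advisory lock keyed by its identity. The lock disappears
// automatically if the connection (and therefore the node) dies.
public class NodeActivityLock : IDisposable
{
    private readonly NpgsqlConnection _connection;
    private readonly long _nodeId;

    public NodeActivityLock(string connectionString, long nodeId)
    {
        _nodeId = nodeId;
        _connection = new NpgsqlConnection(connectionString);
        _connection.Open();
    }

    public bool TryAcquire()
    {
        using (var command = new NpgsqlCommand("SELECT pg_try_advisory_lock(@id)", _connection))
        {
            command.Parameters.AddWithValue("id", _nodeId);

            // true means this node now holds the lock and counts as active
            return (bool) command.ExecuteScalar();
        }
    }

    // Closing the session releases any advisory locks it holds
    public void Dispose()
    {
        _connection.Dispose();
    }
}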

The message persistence agent also polls for the existence of any persisted incoming or outgoing messages that are not owned by any node. If it detects either, it will assign some of these messages to itself and pull outgoing messages into its local sending queues and incoming messages into its local worker queues to be processed. The message polling and fetching was designed to try to enable the recovery work to be spread across the running nodes. This process also uses Postgresql advisory locks as a distributed lock to prevent multiple running nodes from double dipping into the same persisted messages.

The end result of all that verbiage is:

  • If the receiving application is completely down, Jasper will be able to recover the outgoing messages and send them later when the receiving application is back up
  • If a node fails before it can send or process all the messages in flight, another node will be able to recover those persisted messages and process them
  • If your entire application goes down or is shut down, it will pick up the outstanding, persisted work of incoming and outgoing messages when any node is restarted
  • By using the advisory locks in the backing database, we got around having to have any kind of distributed lock mechanism (we were considering Consul) or leader election for the message recovery process, making our architecture here a lot simpler than it could have been otherwise.


More Work for the Future

The single biggest thing Jasper needs is early adopters and usage in real applications to find out how well what it already has for resiliency works out. Beyond that though, I know we want at least a little more work in the built in transports for:

  1. Backpressure — We might need some kind of mechanism to allow the receiving applications to let the senders know, “hey, I’m super busy here, could you stop sending me so many messages for a little bit,” and slow down the sending.
  2. Work Stealing — We might say that it’s easier to implement backpressure between the listening agent and worker queues within the receiving application. In that case, if the listener sees there are too many outstanding messages waiting to be processed in the local worker queues, it would just persist the incoming messages for any other node to pick up when it can. We think this might be a cheap way to implement some form of work stealing.
  3. Diagnostics — We actually do have a working diagnostic package that adds a small website application to expose information about the running application. It could definitely use some additional work to expose metrics on the active message persistence.
  4. Sql Server backed message persistence — probably the next thing we need with Jasper at work


Other Related Stuff I Didn’t Get Into

  • We do have a dead letter queue mechanism where messages that just can’t be processed are shoved over to the side in the message persistence. All configurable of course
  • All the message recovery and batching thresholds are configurable. If you’re an advanced Jasper user, you could use those knobs to fine tune batch sizes and failure thresholds
  • It is possible to tell Jasper that a message expires at a certain time to prevent sending messages that are just too old


On Productive Software Development Teams

I’ve had several conversations lately about how to organize development teams, enough so that it’s probably useful for me to write out what I do think here. By no means should you think for a second that my shop or my team is perfect, and I’ll happily admit that much of the focus here is on things that I want to be different at work.

This would have been a much longer post, but I found worlds of older blog posts I’ve written on this subject. Honestly, the problems I see and the solutions I’d recommend just haven’t evolved that much in the last 10 years. I don’t know whether that should make me more worried about myself or about the software development world.


Multi-Disciplinary Team and Multi-Skilled Folks

I think it’s a huge anti-pattern to organize software teams around technical specialties or disciplines like having a UI team, a middle tier team, and a database team. I’d even go farther and say it can be a negative to keep the testers somewhat separate from the rest of the development team as well. I think the main point is that each team should ideally have every skillset it needs to carry out the project inside the team room or at least under the team’s control.

Moreover, autonomous teams that can do everything needed to carry out the project are far more likely to succeed than teams that have to frequently depend on other teams or technical specialties. Formal hand-offs between teams in software development are an almost inevitable drag on productivity and should be minimized as much as possible within a software organization.

You will make mistakes and discover last minute requirements or flaws in your architecture. Sure, you can try to do the very best planning possible to ensure that you know everything upfront, but my experience says that you’re far better off when you can easily recover from mistakes and accommodate those last minute discoveries. Needless to say, it’s far easier to be flexible in last minute discoveries or problems if you can just fix the problems yourself. What’s going to be faster, filling out a Jira ticket to get a database team to add a column you need or being able to just add a database migration to your codebase and continuing on your way?

Even without formal hand-offs, keeping developers in strict specialties can easily make it harder for the team to communicate because there are fewer concepts in common between team members, while simultaneously requiring more communication because every feature has to involve more people. Not every developer can be a multi-skilled generalist, but at a minimum, each developer needs to have some understanding of how the folks both upstream and downstream of them work and what their concerns are. My early career was actually spent as an engineer designing petrochemical plants, and one of the earliest lessons my first manager drilled into me was how important it was to understand what the other engineering disciplines did, so you could better support their work with your products and more intelligently ask questions of the other types of engineers you depended on.

Making developers specialize by technical concern can frequently lead teams into the Local Optimization Fallacy trap. If you’re not familiar with this, think of cases where the developers optimize some part of the system for their own convenience, but that decision makes the testers’ jobs much harder. On the flip side, it’s frequently advantageous to do some extra work inside the code for no other reason than to make automated testing much more efficient. In many cases, you’ll save more time in testing than the extra time you spend coding, and that’s a net win for the team.

Specializing personnel on teams can be bad for developer growth and skills acquisition. My current shop has historically kept the database team separate from developers, and frequently the outcome has been developers who don’t have a good background in the basics of database development and database developers who maybe don’t have a solid appreciation for other software engineering principles.


Everybody Knows Where the Bodies are Buried

I’m familiar with several teams who own and maintain codebases that were originally written by other people, and they frequently struggle to understand how things flow through the system or even the basic architecture. Somebody is going to say they needed more documentation, and I’ll say instead that companies pay a high price for not having some developer continuity when the tribal knowledge walks out the door. Either way, it’s absolutely vital that the team truly understands the codebase they’re working on.

Arguably, one of the most important jobs of a technical lead is making sure that the other team members understand how the system is designed, the basic technologies, how information flows through the system, basic domain concepts, and how to troubleshoot problems throughout the stack.

We’re admittedly struggling in many cases with gobs of legacy technology decisions that aren’t familiar to many of our developers (partially because we hired a prolific OSS author and his old tools are every which way). The long term solution is to replace those tools with more modern, commonly known tools. For now though, it helps somewhat to try to explain why those decisions were made and what the original team was thinking, as a way to better understand how to work in those codebases as they are today.

As an aside, a couple years ago I seriously suggested that we try to switch to Node.js for our server side web development based on the theory that our younger developers were far more interested in Javascript than .Net, but the stupid “left pad” incident happened that exact day and I never brought that up again.


How Microservices Might Fit In

Honestly, the big value proposition for microservices in my shop is mostly being able to move to a world where each codebase is small enough to be completely owned by no more than one normal sized team. We have 3-4 big systems that are worked on by multiple teams, and let me tell you, the folks on those teams have developed absurd Git skills just to be able to manage so many developers contributing simultaneously.

Forget the Golang or Node.js powered microservices with under 100 lines of code; I’ll settle for most of our teams getting to work on a codebase that is:

  • Small enough to be completely owned by a single team
  • Small and simple enough that mere mortals can reasonably understand the codebase
  • Has a relatively fast automated build because that’s a huge component of being productive


Sunsetting StructureMap

I haven’t really been hiding this, but I apparently need to be more clear that I do not intend to do any further development work on StructureMap other than trying to keep up with pull requests and bugs that have no workaround. My focus is on other projects these days and I’m trying to cut down on the time I spend on OSS work as well. If someone wants to take over maintenance, I’d be happy to help them take it on. With umpteen dozen .Net IoC containers on GitHub and Nuget and the new built in DI container in ASP.Net Core, I feel like there are plenty of tools for whatever folks need.

I am working on the BlueMilk project as an intended successor to StructureMap that should be able to serve as an offramp for many shops developing on StructureMap today. BlueMilk will be a much smaller library that retains what I feel to be the core capabilities of StructureMap, while ditching most of the legacy design decisions that hold StructureMap back and keep my inbox filled with user questions.

For some perspective, I started working on what became StructureMap the summer before my oldest son was born, and he’s starting High School this coming fall. The highlight of my conference speaking history was probably a QCon where I gave a talk about “lessons learned from a long lived codebase” all about working with StructureMap — in 2008!


Integrating Marten into Jasper Applications

Continuing a new blog series on Jasper:

  1. Jasper’s Configuration Story 
  2. Jasper’s Extension Model
  3. Integrating Marten into Jasper Applications  (this one)
  4. Durable Messaging in Jasper
  5. Integrating Jasper into ASP.Net Core Applications
  6. Jasper’s HTTP Transport
  7. Jasper’s “Outbox” Support within ASP.Net Core Applications


Using Marten from Jasper Applications

Time to combine two of my biggest passions (time sinks) and show you how easy it is to integrate Marten into Jasper applications. If you already have a Jasper application, start by adding a reference to the Jasper.Marten Nuget. Using Jasper’s extension model, the Jasper.Marten library will automatically add IoC registrations for Marten:

  1. IDocumentStore as a singleton
  2. IQuerySession as scoped
  3. IDocumentSession as scoped

At a bare minimum, you’ll at least need to tell Jasper & Marten what the connection string is to the underlying Postgresql database, something like this sample:

public class AppWithMarten : JasperRegistry
{
    public AppWithMarten()
    {
        // StoreOptions is a Marten object that fulfills the same
        // role as JasperRegistry
        Settings.Alter<StoreOptions>((config, marten) =>
        {
            // At the simplest, you would just need to tell Marten
            // the connection string to the application database
            marten.Connection(config.GetConnectionString("marten"));
        });
    }
}

In this case, we’re taking advantage of Jasper’s strongly typed configuration model to configure the Marten StoreOptions object that completely configures a Marten DocumentStore, which is registered in the underlying IoC container like this in the Jasper.Marten code:

// Just uses the ASP.Net Core DI registrations
registry.Services.AddSingleton<IDocumentStore>(x =>
{
    var storeOptions = x.GetService<StoreOptions>();
    var documentStore = new DocumentStore(storeOptions);
    return documentStore;
});

And for the basics, that’s all there is to it. For right now, the IDocumentSession service is resolved by calling IDocumentStore.OpenSession(), but it’s likely that users will want to be able to opt for lightweight sessions or to configure different transactional levels. I don’t know what that’s going to look like yet, but it’s definitely something we’ve thought about for the future.
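As a hedged guess at what the matching session registrations look like (inferred from the lifetimes listed earlier, not copied from the Jasper.Marten code):

// Scoped sessions built from the singleton document store
registry.Services.AddScoped<IQuerySession>(x =>
    x.GetService<IDocumentStore>().QuerySession());

registry.Services.AddScoped<IDocumentSession>(x =>
    x.GetService<IDocumentStore>().OpenSession());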


Jasper’s Extension Model

Continuing a new blog series on Jasper:

  1. Jasper’s Configuration Story 
  2. Jasper’s Extension Model (this one)
  3. Integrating Marten into Jasper Applications
  4. Durable Messaging in Jasper
  5. Integrating Jasper into ASP.Net Core Applications
  6. Jasper’s HTTP Transport
  7. Jasper’s “Outbox” Support within ASP.Net Core Applications


The starting point of any Jasper application is the JasperRegistry class that defines the configuration sources, various settings, and service registrations, similar in many respects to the IWebHostBuilder and Startup types you may be familiar with in ASP.Net Core applications (or the FubuRegistry for any old FubuMVC hands). A sample one is shown below:

public class SubscriberApp : JasperRegistry
{
    public SubscriberApp()
    {
        Subscribe.At("http://loadbalancer/messages");
        Subscribe.ToAllMessages();

        Transports.LightweightListenerAt(22222);
    }
}

If everything you can possibly change or configure is done by the internal DSL exposed by the JasperRegistry class, it’s only natural then that the extension model is just this:

public interface IJasperExtension
{
    void Configure(JasperRegistry registry);
}

As a sample, here’s an available extension from the Jasper.Marten library that lets you opt into using Marten-backed message persistence that will be featured in a blog post later in this series:

/// <summary>
/// Opts into using Marten as the backing message store
/// </summary>
public class MartenBackedPersistence : IJasperExtension
{
    public void Configure(JasperRegistry registry)
    {
        // Override an OOTB service in Jasper
        registry.Services.AddSingleton<IPersistence, MartenBackedMessagePersistence>();

        // Jasper works *with* ASP.Net Core, even without a web server,
        // so you can use their IHostedService model for long running tasks
        registry.Services.AddSingleton<IHostedService, SchedulingAgent>();

        // Customizes the Marten integration a little bit with
        // some custom schema objects this extension needs
        registry.Settings.ConfigureMarten(options =>
        {
            options.Storage.Add<PostgresqlEnvelopeStorage>();
        });
    }
}

To apply and use these extensions, you have two options. First, you can require that the exposed extension be explicitly added by the application developers in their JasperRegistry with syntax like this:

public class ItemSender : JasperRegistry
{
    public ItemSender()
    {
        Include<MartenBackedPersistence>();
        // and a bunch of other stuff that isn't germane here
    }
}

The other option is to make an extension be auto-discovered and applied whenever the containing assembly is part of the application. To do this, add an assembly level attribute to your extension library that references the extension type you want auto-loaded. Here’s an example from the Jasper.Consul extension library:

// This is a slight change from [FubuModule] in FubuMVC
// Telling Jasper what the extension type is just saves the
// type scanning discovery. Got that idea from a conversation
// w/ one of the ASP.Net team members
[assembly:JasperModule(typeof(ConsulExtension))]

namespace Jasper.Consul.Internal
{
    public class ConsulExtension : IJasperExtension
    {
        public void Configure(JasperRegistry registry)
        {
            registry.Services.For<IUriLookup>().Add<ConsulUriLookup>();
            registry.Services.For<IConsulGateway>().Add<ConsulGateway>();
        }
    }
}

When to use either model? I’m a big fan of the “it should just work” model and the auto-discovery in many places, but plenty of other folks will prefer the explicit Include<Extension>() call in their JasperRegistry to make the code more self-documenting.

In either case, there’s a little bit of trickery going on behind the scenes to order both the service registrations and any changes to “Settings” objects to create a precedence order like this:

  1. Application specific declarations in the JasperRegistry class — regardless of the ordering of where the Include() statements wind up
  2. Extension declarations
  3. Core framework defaults
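One way to picture that precedence with a toy example (mine, not Jasper code): against the ASP.Net Core DI abstractions, the last registration for a service type wins a GetService() call, so appending framework defaults first, extension declarations second, and application declarations last produces exactly the order above:

using System;
using Microsoft.Extensions.DependencyInjection;

public interface IClock { }
public class FrameworkDefaultClock : IClock { }
public class ExtensionClock : IClock { }
public class AppSpecificClock : IClock { }

public static class PrecedenceDemo
{
    public static void Main()
    {
        var services = new ServiceCollection();

        // Core framework defaults go in first...
        services.AddSingleton<IClock, FrameworkDefaultClock>();

        // ...then extension declarations...
        services.AddSingleton<IClock, ExtensionClock>();

        // ...and application specific declarations last, so they win
        services.AddSingleton<IClock, AppSpecificClock>();

        var provider = services.BuildServiceProvider();
        Console.WriteLine(provider.GetService<IClock>()); // AppSpecificClock
    }
}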


A Little History about the Ideas Here

When Chad Myers and I started talking through the ideas that later became FubuMVC, one of our goals was to maximize our ability to apply customer-specific extensions and customizations to the on premises model application we were building at the time. We knew that many a software shop had crashed and burned in that situation by resorting to customer specific forks of their core product. We envisioned a web framework that would pretty well let you add or change almost anything in customer extensions without forcing any kind of fork to the core application. The Open-Closed Principle taken to an extreme, if you will. Using FubuMVC extensions (we originally called them “Bottles”), you could add all new routes, change IoC service registrations, swap out views, and even inject content into existing views in the core application, with a model where you just drop the extension assembly into the bin path of the application and go.

I want to say that it’s one of the cleverest things I’ve ever successfully completed, and it definitely added some value (not so much in the app it was meant for, because they ditched the on premises model shortly after and succeeded without the crazy extensibility). All that said though, Jasper is meant for a different world where we might not be quite so eager to build large applications, and I’m much more gun shy about complexity in my OSS projects than the 10-years-younger me was, so extensibility will not be quite so big a part of Jasper’s core identity and philosophy as it was in the FubuMVC days.