Compiling Code At Runtime with Lamar – Part 2

In my previous post, I introduced Lamar's utilities for generating code in memory and compiling that code into usable, in-memory assemblies with the Roslyn compiler. In this post I'm going to try to explain Lamar's more complicated "frames" model, which provides the backbone for the related Jasper framework's middleware and runtime pipeline strategy.

The purpose of the "frames" model is to be able to generate dynamic methods by declaring a list of logical operations in generated code via Frame objects, then letting Lamar handle:

  • Finding any missing dependencies of those frames
  • Determining what the necessary variable inputs are
  • Ordering all the frames based on dependency order just prior to generating the code

Before diving into an example, here’s a little class diagram of the main types:

[class diagram: Frame, Variable, IVariableSource, IMethodVariables]
The various types represent:

  • Frame – Named after the StackFrame objects within stack traces in .Net. Models a logical action done in the generated code. Concrete examples in Lamar or Jasper would be calling a method on an object, calling a constructor function, or specific frame objects to create wrapped transaction boundaries or exception handling boundaries.
  • Variable – pretty much what it sounds like. This type models a variable within the generated method, but also includes information about which Frame created it to help order the dependencies.
  • IVariableSource – a mechanism to “find” or create variables. Examples in Lamar include resolving a service from an IoC container, passing along a method argument, or the example below that fetches the current time.
  • IMethodVariables – the interface that Frame classes use to find their necessary Variable dependencies.

Alrighty then, let’s make this concrete. Let’s say that we want to generate and use dynamic instances of this interface:

public interface ISaySomething
{
    void Speak();
}

Moreover, I want a version of ISaySomething that will call the following method and write the current time to the console:

public static class NowSpeaker
{
    public static void Speak(DateTime now)
    {
        Console.WriteLine(now);
    }
}

Our dynamic class for ISaySomething will need to pass the current time to the now parameter of that method. To help out here, there are some built-in helpers in Lamar specifically to write the right code to get the current time into a variable of DateTime or DateTimeOffset that is named “now.”

To skip ahead a little bit, let’s generate a new class and object with the following code:

// Configures the code generation rules
// and policies
var rules = new GenerationRules("GeneratedNamespace");

// Adds the "now : DateTime" variable rule to 
// our generated code
rules.Sources.Add(new NowTimeVariableSource());

// Start the definition for a new generated assembly
var assembly = new GeneratedAssembly(rules);

// Add a new generated type called "WhatTimeIsIt" that will
// implement the 
var type = assembly.AddType("WhatTimeIsIt", typeof(ISaySomething));

// Getting the definition for the method named "Speak"
var method = type.MethodFor(nameof(ISaySomething.Speak));

// Adding a frame that calls the NowSpeaker.Speak() method and
// adding it to the generated method
var @call = new MethodCall(typeof(NowSpeaker), nameof(NowSpeaker.Speak));
method.Frames.Add(@call);

// Compile the new code!
assembly.CompileAll();
After all that, if we interrogate the source code for the generated type above (type.SourceCode), we’d see this ugly generated code:

    public class WhatTimeIsIt : Lamar.Testing.Samples.ISaySomething
    {

        public void Speak()
        {
            var now = System.DateTime.UtcNow;
            Lamar.Testing.Samples.NowSpeaker.Speak(now);
        }

    }

Some notes about the generated code:

  • Lamar was able to stick in some additional code to pass the current time into a new variable, and call the Speak(DateTime now) method with that value.
  • Lamar is smart enough to put that code before the call to the method (that kind of matters here)
  • The generated code uses full type names in almost all cases to avoid type collisions rather than trying to get smart with using statements in the generated code

So now let’s look at how Lamar was able to add the code to pass along DateTime.UtcNow. First off, let’s look at the code that just writes out the date variable:

public class NowFetchFrame : SyncFrame
{
    public NowFetchFrame(Type variableType)
    {
        // There's some sleight of hand here that marks
        // this new Variable as created by this frame object
        // so that the dependency relationship is made
        Variable = new Variable(variableType, "now", this);
    }

    public Variable Variable { get; }

    public override void GenerateCode(
        GeneratedMethod method,
        ISourceWriter writer)
    {
        writer.WriteLine($"var {Variable.Usage} = {Variable.VariableType.FullName}.{nameof(DateTime.UtcNow)};");
        Next?.GenerateCode(method, writer);
    }
}

In the frame above, you’ll see that the GenerateCode() method writes its code into the source, then immediately turns around and tells the next Frame – if there is one – to generate its code. As the last step to write out the new source code, Lamar:

  1. Goes through an effort to find any missing frames and variables
  2. Sorts them with a topological sort (what frames depend on what other frames or variables, what variables are used or created by what frames)
  3. Organizes the frames into a single linked list
  4. Calls GenerateCode() on the first frame

In the generated method up above, the call to NowSpeaker.Speak(now) depends on the now variable which is in turn created by the NowFetchFrame, and that’s enough information for Lamar to order things and generate the final code.
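The ordering step above can be sketched with a simple depth-first topological sort. The Frame and Variable types below are simplified stand-ins for illustration, not Lamar's actual classes:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Simplified stand-ins for Lamar's Frame/Variable types
public class Variable
{
    public string Name;
    public Frame Creator; // the frame that produces this variable, if any
}

public class Frame
{
    public string Description;
    public List<Variable> Uses = new List<Variable>();
}

public static class FrameSorter
{
    // Orders frames so that any frame creating a variable comes
    // before every frame that uses that variable
    public static List<Frame> Sort(IEnumerable<Frame> frames)
    {
        var sorted = new List<Frame>();
        var visited = new HashSet<Frame>();

        void Visit(Frame frame)
        {
            if (!visited.Add(frame)) return;

            // Visit the creators of every used variable first
            foreach (var creator in frame.Uses
                .Where(v => v.Creator != null)
                .Select(v => v.Creator))
            {
                Visit(creator);
            }

            sorted.Add(frame);
        }

        foreach (var frame in frames) Visit(frame);
        return sorted;
    }
}
```

With a "fetch now" frame that creates the now variable and a "call Speak" frame that uses it, sorting the pair in either order always yields the fetch frame first, which is the same guarantee Lamar needs before it links the frames into a list and generates code.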

Lastly, we had to use a custom IVariableSource to teach Lamar how to resolve the now variable. That code looks like this:

public class NowTimeVariableSource : IVariableSource
{
    public bool Matches(Type type)
    {
        return type == typeof(DateTime) || type == typeof(DateTimeOffset);
    }

    public Variable Create(Type type)
    {
        if (type == typeof(DateTime))
        {
            return new NowFetchFrame(typeof(DateTime)).Variable;
        }

        if (type == typeof(DateTimeOffset))
        {
            return new NowFetchFrame(typeof(DateTimeOffset)).Variable;
        }

        throw new ArgumentOutOfRangeException(nameof(type), "Only DateTime and DateTimeOffset are supported");
    }
}

Out of the box, the Lamar + Jasper combination uses variable sources for:

  • Services from the internal IoC container of the application
  • Method arguments
  • Variables that can be derived from a method argument like HttpContext.Request
  • The “now” convention shown here


I don’t know how popular this thing is going to be, but it’s powering the dynamic code generation of Jasper’s runtime pipeline and it’s the key to Jasper’s efficiency compared to other .Net tools with similar functionality. The early feedback I got was that this model was very abstract and hard to follow. I’m open to suggestions, but the very nature of needing to do the recursive dependency detection and ordering kind of necessitates a model like this in my opinion.


Compiling Code At Runtime with Lamar – Part 1

This code was originally written and proven out in the related Marten library and described in a post titled Using Roslyn for Runtime Code Generation in Marten. The code was ripped out of Marten itself, but it’s happily running in Lamar a couple of years later.

As some of you know, I’ve been working on a new library called Lamar that I mean to be the next generation replacement for the venerable, but creaky StructureMap library. Besides the IoC Container support though, Lamar also provides some tooling and a model to generate and compile code at runtime, then ultimately load and use the newly generated types.

If all you want to do is take some C# code and compile that in memory to a new, in memory assembly, you can use
the Lamar.Compilation.AssemblyGenerator class.

Let’s say that you have a simple interface in your system like this:

    public interface IOperation
    {
        int Calculate(int one, int two);
    }

Next, let’s use AssemblyGenerator to compile code with a custom implementation of IOperation that we’ve generated
in code:

        public void generate_code_on_the_fly()
        {
            var generator = new AssemblyGenerator();

            // This is necessary for the compilation to succeed
            // It's exactly the equivalent of adding references
            // to your project
            generator.ReferenceAssembly(typeof(IOperation).Assembly);

            // Compile and generate a new .Net Assembly object
            // in memory
            var assembly = generator.Generate(@"
using Lamar.Testing.Samples;

namespace Generated
{
    public class AddOperator : IOperation
    {
        public int Calculate(int one, int two)
        {
            return one + two;
        }
    }
}
");

            // Find the new type we generated up above
            var type = assembly.GetExportedTypes().Single();

            // Use Activator.CreateInstance() to build an object
            // instance of our new class, and cast it to the
            // IOperation interface
            var operation = (IOperation)Activator.CreateInstance(type);

            // Use our new type
            var result = operation.Calculate(1, 2);

There’s only a couple things going on in the code above:

  1. I added an assembly reference for the .Net assembly that holds the IOperation interface
  2. I passed a string to the Generate() method, which successfully compiles my code and hands me back a .Net Assembly object
  3. Load the newly generated type from the new Assembly
  4. Use the new IOperation

If you’re not perfectly keen on doing brute force string manipulation to generate your code, you can
also use Lamar’s built in ISourceWriter to generate some of the code for you with
all its code generation utilities:

public void generate_code_on_the_fly_using_source_writer()
{
    var generator = new AssemblyGenerator();

    // This is necessary for the compilation to succeed
    // It's exactly the equivalent of adding references
    // to your project
    generator.ReferenceAssembly(typeof(IOperation).Assembly);

    var assembly = generator.Generate(x =>
    {
        x.Namespace("Generated");
        x.StartClass("AddOperator", typeof(IOperation));

        x.Write("BLOCK:public int Calculate(int one, int two)");
        x.Write("return one + two;");
        x.FinishBlock();  // Finish the method

        x.FinishBlock();  // Finish the class
        x.FinishBlock();  // Finish the namespace
    });

    var type = assembly.GetExportedTypes().Single();
    var operation = (IOperation)Activator.CreateInstance(type);

    var result = operation.Calculate(1, 2);
}


In Part 2, I’ll talk about the “frames” model that’s heavily used within Jasper (shown in this post).

Scheduled or Delayed Message Execution in Jasper

This is definitely not a replacement for something like Hangfire, but it’s very handy for what it does. 

Here’s a couple somewhat common scenarios in an event driven or messaging system:

  • Handling a message failed, but with some kind of problem that might be resolved if the message is retried later in a few seconds or maybe even minutes
  • A message is received that starts a long running process of some kind, and you may want to schedule a “timeout” process later that would send an email to users or do something to escalate the process if it has not finished by the scheduled time

For these kinds of use cases, Jasper supports the idea of scheduled execution, where messages can be sent with a logical “execute this message at this later time.”

Retry Later in Error Handling

The first usage of scheduled execution is in message handling error policies. Take this example below where I tell Jasper to retry handling an incoming message that fails with a TimeoutException again after a 5 second delay:

public class GlobalRetryApp : JasperRegistry
{
    public GlobalRetryApp()
    {
        // (the registration that retries TimeoutException failures
        // after a 5 second delay was elided from this excerpt)
    }
}

Behind the scenes, the usage of the RetryLater() method causes Jasper to schedule the incoming message for later execution if that error policy kicks in during processing.

Scheduling Messages Locally

To schedule a message to be processed by the current system at a later time, just use the IMessageContext.Schedule() method as shown below:

public async Task schedule_locally(IMessageContext context, Guid issueId)
{
    var timeout = new WarnIfIssueIsStale
    {
        IssueId = issueId
    };

    // Process the issue timeout logic 3 days from now
    // in *this* system
    await context.Schedule(timeout, 3.Days());
}

This method allows you to either express an exact time or to use a TimeSpan argument for delayed scheduling. Either way, Jasper ultimately stores the scheduled message against a UTC time. IMessageContext is the main service in Jasper for sending and executing messages or commands. It will be registered in your IoC container for any Jasper application.

Sending Scheduled Messages to Other Systems

To send a message to another system that should wait to execute the message on its end, use this syntax:

public async Task send_at_5_tomorrow_afternoon_yourself(IMessageContext context, Guid issueId)
{
    var timeout = new WarnIfIssueIsStale
    {
        IssueId = issueId
    };

    var time = DateTime.Today.AddDays(1).AddHours(17);

    // Process the issue timeout at 5PM tomorrow
    // Do note that Jasper quietly converts this
    // to universal time in storage
    await context.Send(timeout, e => e.ExecutionTime = time);
}

Do note that Jasper immediately sends the message. For right now, the thinking is that the receiving application will be responsible for handling the execution scheduling. We may choose later to make this be the responsibility of the sending application instead to be more usable when sending messages to other systems that aren’t using Jasper.

Persistent Job Scheduling

The default, in-memory message execution scheduling is probably good enough for the delayed message processing retries shown above. However, if you’re using scheduled execution as part of your business workflow, you probably want to be using durable message persistence. Today your two options are either Postgresql with Marten or the new Sql Server-backed message persistence.

With durable messaging, the scheduled messages are persisted to a backing database so they aren’t lost if any particular node is shut down or the whole system somehow crashes. Behind the scenes, Jasper just uses polling to check the database for any scheduled messages that are ready to execute, and pulls these expired messages into the local worker queues of a running node for normal execution.

The scheduled messages can be processed by any of the running nodes within your system, but we take some steps to guarantee that only one node will execute specific scheduled messages. Rather than using any kind of leader election, Jasper just takes advantage of advisory locks in Postgresql or application locks in Sql Server as a lightweight, global lock between the running nodes within your application. It’s a much simpler (read, less effort on my time) mechanism than trying to do some kind of leader election between running nodes. It also allows Jasper to better spread the work across all of the nodes.
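The advisory lock technique described above can be sketched directly against Postgres. The snippet below assumes the Npgsql driver and a hypothetical lock id; it only illustrates the pattern, and is not Jasper's actual code:

```csharp
using Npgsql;

public static class ScheduledJobLock
{
    // pg_try_advisory_lock() returns true for only the one session
    // that wins the session-scoped lock; every other node sees false
    // and simply skips this polling cycle rather than blocking.
    public static bool TryAcquire(NpgsqlConnection conn, long lockId)
    {
        using (var cmd = conn.CreateCommand())
        {
            cmd.CommandText = "SELECT pg_try_advisory_lock(@id);";
            cmd.Parameters.AddWithValue("id", lockId);
            return (bool)cmd.ExecuteScalar();
        }
    }

    // Releases the lock so another node can win the next poll
    public static void Release(NpgsqlConnection conn, long lockId)
    {
        using (var cmd = conn.CreateCommand())
        {
            cmd.CommandText = "SELECT pg_advisory_unlock(@id);";
            cmd.Parameters.AddWithValue("id", lockId);
            cmd.ExecuteScalar();
        }
    }
}
```

Because the lock lives in the database rather than in any node, there is no leader election or cluster coordination; whichever node happens to poll first does the work that cycle.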


An Alternative Style of Routing for ASP.Net Core

Jasper started as an idea to build a far better, next generation version of an older framework called FubuMVC. FubuMVC was primarily a web framework meant to be an alternative to ASP.Net MVC that later got a service bus feature bolted on the side. My previous employer mostly wanted and needed the asynchronous messaging support, so that’s what most of the focus has been within Jasper so far. However, it does have some preliminary support for building HTTP APIs as well and that’s the subject of this post.

From the Carter (“Carter” is to NancyFx as Jasper is to FubuMVC) website:

Carter is a library that allows Nancy-esque routing for use with ASP.Net Core.

Cool, so in the ASP.Net Core space you at least have the main MVC Core framework and its style of implementing HTTP routes and the inevitable Sinatra inspired alternative. In the same vein, Jasper provides yet another alternative to implement routing on ASP.Net Core with a much improved version of what we did years ago with FubuMVC.

A Quick Example of Jasper Routing

Before I get into too many details, here’s a concrete example of a Jasper route handler from its testing library that allows you to post a JSON message to an endpoint and get a response back as JSON:

    public class NumbersEndpoint
    {
        // This action would respond to the url "POST: /sum"
        public static SumValue post_sum(SomeNumbers input)
        {
            return new SumValue{Sum = input.X + input.Y};
        }
    }

So right off the bat, you can probably see that Jasper still uses (by default) convention over configuration to derive the Url pattern attached to this method. I think most folks would react to that code and approach in one of a couple ways:

  1. OMG, that’s too much magic, and magic in code is evil! All code must be “simple” and explicit, no matter how much repetitive cruft and code ceremony I have to type and then later wade through to understand the code. You probably won’t care for Jasper, and probably don’t mind the repetitive base class declarations, attribute usage, and IActionResult code noise that bugs me when I use or have to read MVC Core code. There’s room in the world for both of us;-)
  2. That looks simple, clean, and I bet it’s easy to test. Tell me more!

So if you fell into the second group, or are at least open minded enough to learn a little more, let’s move on to some of the mechanics of defining routes and implementing handlers.


Discovering Routes within an Application

While Sinatra-inspired web frameworks like Express.js tend to want you to explicitly register routes and their handlers, most .Net web frameworks I’m familiar with use some kind of type scanning to find candidate routes from the public types and methods exposed in your application’s main assembly. Jasper is no different in that it looks inside your application’s assembly (either the assembly containing your JasperRegistry or the main console application that bootstrapped Jasper) for concrete, public classes that are named with the suffix “Endpoint” or “Endpoints.” Within those types, Jasper looks for any public method whose name begins with a supported HTTP method name (GET, POST, PUT, DELETE, or HEAD for the moment).

Do note that the endpoint methods can be either instance or static methods, as long as they are public and match the naming criteria.
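The discovery rules just described could be sketched with plain reflection like the helper below. This is only an illustration of the convention, not Jasper's actual type scanning (which, as noted later, does pre-sorting and parallelization):

```csharp
using System;
using System.Linq;
using System.Reflection;

public static class EndpointDiscovery
{
    static readonly string[] HttpVerbs = { "get", "post", "put", "delete", "head" };

    // Finds candidate actions: public concrete classes named
    // *Endpoint or *Endpoints, and their public instance or static
    // methods whose names start with an HTTP verb
    public static MethodInfo[] FindActions(Assembly assembly)
    {
        return assembly.GetExportedTypes()
            .Where(t => t.IsClass && !t.IsAbstract)
            .Where(t => t.Name.EndsWith("Endpoint") || t.Name.EndsWith("Endpoints"))
            .SelectMany(t => t.GetMethods(
                BindingFlags.Public | BindingFlags.Instance |
                BindingFlags.Static | BindingFlags.DeclaredOnly))
            .Where(m => HttpVerbs.Any(v =>
                m.Name.ToLowerInvariant().StartsWith(v)))
            .ToArray();
    }
}
```

Note the DeclaredOnly flag, which keeps inherited members like GetType() or GetHashCode() from accidentally matching the "get" prefix.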

Here’s an example:

    public class SomeEndpoints
    {
        // Responds to GET: /something
        public string get_something()
        {
            return "Something";
        }

        // Responds to POST: /something
        public string post_something()
        {
            return "You posted something";
        }
    }

Jasper does the type discovery using Lamar’s type scanning, which is taken pretty much straight from StructureMap 4.*’s type scanning, which in its turn was specifically designed to improve the cold start time of the last version of FubuMVC and provides the same benefit for Jasper (it does some pre-sorting of types and some parallelization that helps quite a bit when you have multiple type scanning conventions happening in the same application). This isn’t my first rodeo.

This discovery is somewhat customizable, but this time around I’m asking users to just use the minimal default conventions instead of making Jasper crazily configurable like FubuMVC was, to its own and its users’ detriment.

Defining Route Patterns

I didn’t realize this was missing until writing this post, but FubuMVC also supported attributes for explicitly defining or overriding route patterns. That just hasn’t quite made it to Jasper yet, but it will have to for the inevitable exception cases.

First off, Jasper has a special naming convention for the root (“/”) url of your application. If an endpoint class is called HomeEndpoint or ServiceEndpoint (it’s up to your preference, but I’d advise you to only use one or the other), the route methods are just derived by the matching HTTP method names, like this:

    public class HomeEndpoint
    {
        // Responds to GET: /
        public string Index()
        {
            return "Hello, world";
        }

        // Responds to GET: /
        public string Get()
        {
            return "Hello, world";
        }

        // Responds to PUT: /
        public void Put()
        {
        }

        // Responds to DELETE: /
        public void Delete()
        {
        }
    }

The Index() method is a synonym for “GET” that was a convention in FubuMVC that I kept in Jasper.

For other endpoint action methods, the route is completely derived from the method name, where the method name would follow a pattern like this:

[http method name]_[segment1]_[segment2]_[segment3]

Roughly speaking, the underscore characters denote a “/” forward slash that separates segments in your route. The first segment is the HTTP method — and each action can only legally respond to a single HTTP method.

So a method with the signature post_api_v1_invoice() would respond to the route “POST: /api/v1/invoice.”

Cool, but now you’re probably asking, how do you pass in route arguments? That’s also a naming convention. Consider the method signature get_api_v1_invoice_id(Guid id). Jasper is looking for segments that match an argument to the method and it knows that those segments are really route parameters. By that logic, that method above responds to the route pattern “GET /api/v1/invoice/{id}.” At runtime when this route is matched, the last segment would be parsed to a Guid and the value passed to the method argument.
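The derivation just described can be sketched as a small helper. This is an illustration of the convention, not Jasper's actual routing code (in reality the parameter names would come from reflection over the method):

```csharp
using System;
using System.Linq;

public static class RouteDerivation
{
    // Splits an action method name on underscores: the first segment
    // becomes the HTTP method, and any segment matching a parameter
    // name becomes a route parameter like "{id}"
    public static string DeriveRoute(string methodName, string[] parameterNames)
    {
        var segments = methodName.Split('_');
        var httpMethod = segments[0].ToUpperInvariant();

        var path = segments.Skip(1)
            .Select(s => parameterNames.Contains(s) ? "{" + s + "}" : s);

        return $"{httpMethod} /{string.Join("/", path)}";
    }
}
```

For example, `DeriveRoute("get_api_v1_invoice_id", new[] { "id" })` yields `"GET /api/v1/invoice/{id}"`, matching the route pattern described above.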

As of now, route arguments can be most of the primitive types you’d expect:

  • string
  • Guid
  • int/long/double
  • bool
  • enumerations
  • DateTime/DateTimeOffset

Jasper’s routing forces some limitations on you compared to ASP.Net Core’s Routing module. There’s no kind of route constraint other than full string segments and HTTP methods. I’m making a conscious tradeoff here in favor of performance versus the greater flexibility of the ASP.Net Core routing, with the additional upside that Jasper’s routing selection logic is far, far simpler than what the ASP.Net team did for MVC.

So what do I think are the advantages of this approach over some of the other alternatives in .Net land?

  • It’s really easy to navigate to the route handler from a Url. It’s not terribly difficult to go from a Url in some kind of exception report to using a keyboard shortcut in VS.Net or Rider that lets you navigate quickly to the actual method that handles that url.
  • Even though there’s some repetitiveness in defining all the segments of a route, I like that you can completely derive the route for a method by just the method name. From experience, it was a bit of cognitive load having to remember how to combine a controller type name with attributes and the method name to come up with the route pattern.
  • While using an attribute to define the route is mechanically easy, that’s a little extra code noise in my opinion, and you lose some of the code navigability via the method name when you do that.
  • The code is clean. By this I mean there doesn’t have to be any extraneous code noise from attributes or special return types or base classes getting in your way. Some people hate the magic, but I appreciate the terseness

Moreover, this approach is proven in large FubuMVC applications over a matter of years. I’m happily standing by this approach, even knowing that it’s not going to be for everyone.

Asynchronous Actions

All of the method signatures shown above are very simple and synchronous, but it’s a complicated world and many if not most HTTP endpoints will involve some kind of asynchronous behavior, so let’s look at more advanced usages.

For asynchronous behavior, just return Task or Task<T> like this:

        public Task get_greetings(HttpResponse response)
        {
            response.ContentType = "text/plain";
            return response.WriteAsync("Greetings and salutations!");
        }

Injecting Services as Arguments

You’ll frequently need to get direct access to the HttpContext inside your action methods, and to do that, just take that in as a method argument like this:

        public Task post_values(HttpContext context)
        {
            // do stuff
        }

You can also take in arguments for any property of an HttpContext like HttpRequest or HttpResponse just to slim down your own code like this shown below:

        public Task post_values(HttpRequest request, HttpResponse response)
        {
            // do stuff
        }

Like MVC Core, Jasper supports “method injection” of registered service dependencies to the HTTP methods, but Jasper doesn’t require any kind of explicit attributes. Here’s an elided example from the load testing harness projects in Jasper:

        public static async Task post_one(IMessageContext context, IDocumentSession session)
        {
            // Do stuff with the context and session
        }

In the case above, the IMessageContext and IDocumentSession are known service registrations, so they will be passed into the method by Jasper by resolving with Lamar either through generating “poor man’s DI” code if it can, or service location if it has to. See Jasper’s special sauce for a little more description of what Jasper is doing differently here.

Reading and Writing Content

To write content, Jasper keys off the return type of your endpoint action. If your method returns:

  • int or Task<int> — the return value is written to the response status code
  • string or Task<string> — the return value is written to the response with the content type “text/plain”
  • Any other T or Task<T> — the return value is written to the response using Jasper’s support for content negotiation. Out of the box though, the only known content type for a given type is JSON serialization using Newtonsoft.Json, so if you do nothing to customize or override that, it’s JSON in and JSON out. I feel like that’s a good default and useful in many cases, so it stays.

To post information, Jasper lets you work directly with strongly typed objects while it handles the work of deserializing the HTTP body into your declared input type. Putting that all together, here’s the method used earlier:

    public class NumbersEndpoint
    {
        public static SumValue post_sum(SomeNumbers input)
        {
            return new SumValue{Sum = input.X + input.Y};
        }
    }

At runtime, Jasper will try to deserialize the posted body of the HTTP request into a SomeNumbers model object that will be passed into the post_sum method above. Likewise, the SumValue object coming out of the method will be serialized and written to the HTTP response as the content type “application/json.”

How Does Jasper Select and Execute the Route at Runtime?

I’m not going to get into too many details, but Jasper uses its own routing mechanism based on a trie. I was too intimidated to try something like this in the FubuMVC days, so we just used the old ASP.Net routing that’s effectively a table scan search. The trie-based search is far more efficient for the way that Jasper uses routing.

Once Jasper’s routing matches a route to the Url of the incoming HTTP request, it can immediately call the corresponding RouteHandler method with this signature:

    public abstract class RouteHandler
    {
        public abstract Task Handle(HttpContext httpContext);

        // other methods
    }

As I explained in my previous post on the Roslyn-powered code weaving, Jasper generates a class at runtime that mediates between the incoming HttpContext and your HTTP endpoint, along with any necessary middleware, content negotiation, route arguments, or service resolution.

Potential Advantages of the Jasper Routing Style

I’m biased here, but I think that the Jasper style of routing and the runtime pipeline has some potentially significant advantages over MVC Core:

  • Cleaner code, and for me, “clean” means the absence of extraneous attributes, marker interfaces, base types, and other custom framework types
  • Less mechanical overhead in the runtime pipeline
  • Better performance
  • Less memory usage through fewer object allocations
  • Cleaner stack traces
  • The generated code will hopefully make Jasper’s behavior much more self-revealing when users start heavily using middleware

Most of this is probably a rehash of my previous post, Roslyn Powered Code Weaving Middleware.



What’s Already in the Box

Keep in mind this work is in flight, and I honestly haven’t worked on it much in quite awhile. So far Jasper’s HTTP support includes:

  • The routing engine (kind of necessary to make anything else go;))
  • Reverse Url lookup
  • Action discovery
  • Middleware support, both Jasper’s idiomatic version or ASP.Net Core middleware (Jasper’s routing is itself ASP.Net Core middleware)
  • The ability to apply middleware conventionally for cases like “put a transactional middleware on any HTTP action that is a POST, PUT, or DELETE”
  • Some basic content conventions like “an endpoint action that returns a string will be rendered as text/plain at runtime”
  • Basic content negotiation (which shares a lot of code with the messaging side of things)

What’s Missing or Where does This Go Next?

The first question is whether this should go on at all. I haven’t poured a ton of effort into the HTTP handling in Jasper. If there’s not much interest in it, or my ambition level just isn’t there, it wouldn’t hurt that much to throw it all away and say that Jasper is just an asynchronous messaging framework that can be used with or without ASP.Net Core.

Assuming the HTTP side of Jasper goes on, Jasper gets retagged as a “framework for accelerating microservice development in .Net” and then I think this stuff is next up:

  • Documentation. A conventional framework has a weird tendency of being useless if nobody knows what the conventions are.
  • A modicum of middleware extensions that integrate common tools into Jasper HTTP actions. Offhand, I’ve thought about integrations for IdentityServer and Fluent Validation that add some idiomatic Jasper middleware to be just a little bit more efficient than you’d get from ASP.Net Core middleware.
  • Optimization of the routing. There’s some open issues and known performance “fat” in the routing I’ve just never gotten around to doing
  • Some kind of idiomatic approach in Jasper for branching route handling, i.e. return this status code if the entity exists or redirect to this resource if some condition or return a 403 if the user doesn’t have permission. All of that is perfectly possible today with middleware, but it needs to be something easier for one-off cases.
  • Some level of integration of MVC Core elements within Jasper. I’m thinking you’d definitely want the ability to use IActionResult return types within Jasper in some cases. Even though idiomatic Jasper middleware would be more efficient at runtime, you’d probably want to reuse existing ActionFilters too. It might be valuable to even allow you to use MVC Core Controller classes within Jasper, but still use Jasper’s routing, middleware, and the more efficient IoC integration to achieve what I think will be better performance than by using MVC Core by itself.
  • OpenAPI (Swagger) support. I don’t think this would be too big, and I think I have some ideas about how to allow for general customization of the Swagger documents without forcing users to spray attributes all over the endpoint classes until you can barely see the real code.

What about Razor?

I have zero interest in ever supporting any kind of custom integration of Razor (the view engine support sucked to work on in FubuMVC), and I have it in mind that Razor is pretty tightly coupled to MVC Core’s internals anyway. My thought is that either the IActionResult support gives Jasper Razor support for free, or we just say that you use MVC Core for those pages — and it’s perfectly possible to use both Jasper and MVC Core in the same application. My focus with Jasper is on headless services anyway.

Roslyn Powered Code Weaving Middleware

Jasper, with a big assist from Lamar, supports a unique middleware strategy that I believe will result in significantly higher performance, cleaner exception stack traces (and that matters), and better visibility into its runtime pipeline than similar frameworks in .Net. If you want to follow along with this code, you’ll need at least Jasper 0.8.3 that’s being indexed by Nuget as you read this and Jasper.SqlServer 0.8.2 because I managed to find some bugs while writing this post. Of course.

At this point, most .Net frameworks for messaging, local command running, or service bus message handling have some sort of support for nested middleware or what I used to call the Russian Doll Model. ASP.Net Core middleware is one example of this. Behaviors from NServiceBus is another example.
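To make the Russian Doll idea concrete, here’s a minimal, hypothetical sketch (none of these names come from any real framework) where each piece of middleware receives the next step in the chain and wraps behavior around it:

```csharp
using System;
using System.Threading.Tasks;

public static class RussianDollSketch
{
    // The "next step" in the chain is just an async delegate
    public delegate Task RequestDelegate();

    public static async Task Main()
    {
        // Innermost step: the actual request handler
        RequestDelegate pipeline = () =>
        {
            Console.WriteLine("Handling the request");
            return Task.CompletedTask;
        };

        // Wrap the handler with a "transaction" middleware that runs
        // code both before and after the inner step
        var inner = pipeline;
        pipeline = async () =>
        {
            Console.WriteLine("Begin transaction");
            await inner();
            Console.WriteLine("Commit transaction");
        };

        await pipeline();
        // Prints:
        //   Begin transaction
        //   Handling the request
        //   Commit transaction
    }
}
```

Each additional concern (validation, authentication, and so on) is just another layer of wrapping, which is exactly why exceptions thrown from the innermost handler travel through every layer on the way out.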

Used judiciously, this model is a great way to handle certain kinds of cross cutting concerns outside of your main HTTP route handling or message handling code. Used well, middleware will allow you to reuse a lot of code and simplify your application code by removing the need for repetitive infrastructure or workflow code.

In web development projects, I’ve used or seen middleware used for:

  • Transaction management or unit of work semantics
  • Input validation where the middleware can stop further processing
  • Authentication
  • Authorization

Taking just authentication and authorization as examples, I’ve seen many teams get away with completely ignoring those concerns up front while focusing on the core business functionality. At a later time, they could simply add middleware for authentication and authorization without having any impact on the existing business functionality. That’s a powerful exploitation of architectural reversibility to make development easier.

I’ve also seen this technique taken way, way too far, to the point where the code was very difficult to understand. My advice is something along the lines of “don’t be stupid”: pay attention to whether your middleware usage is doing more harm than good.

What Came Before and Why It Was Problematic

In FubuMVC, we supported a middleware strategy we called “behaviors” with this interface:

    public interface IActionBehavior
    {
        Task Invoke();
        Task InvokePartial();
    }

Calling the main HTTP action in FubuMVC’s equivalent to controller actions was a behavior. Reading the input body was a behavior. The common things like validation, authorization, authentication, and transactional management were potentially separate behavior objects. At runtime, we would use an IoC container to build out all the behaviors for the matched route, with each “outer” behavior having a reference to its “inner” behavior and each behavior having whatever services it needed to do its work injected into its constructor function.

When it worked well, it was awesome — at least in ~2010 terms when we .Net developers were just thrilled to break away from WebForms. Alas, this model has some issues:

  • It didn’t support an asynchronous model like you’d expect with more recent tooling
  • It results in an absurd number of objects being allocated for each HTTP request. Add in the mechanics around IoC scoped containers, and there was a lot of overhead just to assemble the things you needed to handle the request
  • When something went wrong, the stack traces were epic. There was so much FubuMVC-related framework noise code in the stack trace that many developers would just throw up their hands and run away (even though the real problem was clearly in their own code if they’d just looked at the top of the stack trace, but I digress….)
  • We had tools to visualize the full chain of behaviors for each route, but I don’t think that was ever fully effective for most developers who used FubuMVC
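To see where all those allocations came from, here’s a hypothetical, simplified sketch of the behavior model (using only Invoke() and made-up behavior names); every single request required the container to build the entire object graph:

```csharp
using System;
using System.Threading.Tasks;

public interface IActionBehavior
{
    Task Invoke();
}

// An "outer" behavior that wraps transaction semantics around its inner behavior
public class TransactionBehavior : IActionBehavior
{
    private readonly IActionBehavior _inner;

    public TransactionBehavior(IActionBehavior inner)
    {
        _inner = inner;
    }

    public async Task Invoke()
    {
        Console.WriteLine("Begin transaction");
        await _inner.Invoke();
        Console.WriteLine("Commit transaction");
    }
}

// The innermost behavior that finally calls the endpoint method
public class ActionBehavior : IActionBehavior
{
    public Task Invoke()
    {
        Console.WriteLine("Call the endpoint method");
        return Task.CompletedTask;
    }
}

public static class Program
{
    public static async Task Main()
    {
        // The container had to allocate this whole chain, plus every
        // constructor-injected service, for each and every request
        IActionBehavior chain = new TransactionBehavior(new ActionBehavior());
        await chain.Invoke();
    }
}
```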

Jasper’s Approach

Not that long after publicly giving up on FubuMVC, I happened to see some articles about how the new Roslyn “compiler as a service” would allow you to compile and load assemblies on the fly from generated C# code. I theorized that this new Roslyn behavior could be exploited to create a new runtime pipeline for HTTP or messaging frameworks where you still had something like FubuMVC’s old Behavior model for cross cutting concerns, but you used some kind of code generation to “weave” in that functionality around your application code.
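The core Roslyn trick is small enough to sketch. This is not Jasper’s actual bootstrapping code, just a bare-bones illustration of compiling a C# string into an in-memory assembly with the Microsoft.CodeAnalysis.CSharp package; on .Net Core you’ll typically need more metadata references than the single one shown here:

```csharp
using System;
using System.IO;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

public static class RoslynCompileSketch
{
    public static void Main()
    {
        var source = @"
public class Greeter
{
    public string Greet() { return ""Hello from generated code""; }
}";

        var syntaxTree = CSharpSyntaxTree.ParseText(source);

        // Real usage needs references for everything the generated code touches
        var references = new[]
        {
            MetadataReference.CreateFromFile(typeof(object).Assembly.Location)
        };

        var compilation = CSharpCompilation.Create(
            "GeneratedAssembly",
            new[] { syntaxTree },
            references,
            new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

        using (var stream = new MemoryStream())
        {
            var result = compilation.Emit(stream);
            if (!result.Success) throw new InvalidOperationException("Compilation failed");

            // Load the brand new assembly straight from memory and use it
            var assembly = Assembly.Load(stream.ToArray());
            var type = assembly.GetType("Greeter");
            var greeter = Activator.CreateInstance(type);
            Console.WriteLine(type.GetMethod("Greet").Invoke(greeter, null));
        }
    }
}
```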

To make this more concrete, consider this function from a load testing harness that:

  • Handles an HTTP POST request to the url “/one”
  • Creates a new message object
  • Writes a record to the database tracking that the message was sent for the sake of verifying behavior later
  • Sends a message using Jasper’s Sql Server-backed messaging persistence

This is the actual code for the function that handles the HTTP POST:

    [SqlTransaction]
    public static async Task post_one(IMessageContext context, SqlTransaction tx)
    {
        // Loads a pre-packaged message body from a JSON string
        var target1 = JsonConvert.DeserializeObject<Target>(_json1);
        target1.Id = Guid.NewGuid();

        await tx.StoreSent(target1.Id, "Target");

        // Send a message through Jasper
        await context.Send(target1);
    }

When Jasper bootstraps, it will generate a new class for each known route that inherits from this class partially shown below:

    public abstract class RouteHandler
    {
        public abstract Task Handle(HttpContext httpContext);

        // Other methods we don't care about here
    }

The RouteHandler classes are all compiled into a new assembly on the fly, then a single instance of each is instantiated and kept in the routing tree ready to handle any incoming requests.

The various instances of RouteHandler mediate between Jasper’s built in HTTP router and the interface it expects, the action methods that handle the actual request, and any Jasper middleware that might be mixed in. In the case of the post_one method shown above, the generated RouteHandler class is this (also on a Gist if the formatting is unreadable in your browser):

    public class SqlSender_HomeEndpoint_post_one : Jasper.Http.Model.RouteHandler
    {
        private readonly SqlServerSettings _sqlServerSettings;
        private readonly IMessagingRoot _messagingRoot;

        public SqlSender_HomeEndpoint_post_one(SqlServerSettings sqlServerSettings, IMessagingRoot messagingRoot)
        {
            _sqlServerSettings = sqlServerSettings;
            _messagingRoot = messagingRoot;
        }

        public override async Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
        {
            var messageContext = _messagingRoot.NewContext();
            using (System.Data.SqlClient.SqlConnection sqlConnection2 = new System.Data.SqlClient.SqlConnection(_sqlServerSettings.ConnectionString))
            {
                await sqlConnection2.OpenAsync();
                var sqlTransaction = sqlConnection2.BeginTransaction();
                await Jasper.SqlServer.SqlServerOutboxExtensions.EnlistInTransaction(messageContext, sqlTransaction);
                await SqlSender.HomeEndpoint.post_one(messageContext, sqlTransaction);
                sqlTransaction.Commit();
                await messageContext.SendAllQueuedOutgoingMessages();
            }
        }
    }

So let’s deconstruct this generated code a little bit, because there’s clearly more going on than just delegating to the post_one method. If you look back at the post_one method above, you’ll see that it’s decorated with an [SqlTransaction] attribute. That adds Jasper’s Sql Server transactional middleware into the mix. All told, the generated code:

  1. Creates a new IMessageContext object that the post_one method needs
  2. Creates and opens a new SqlConnection to the connection string specified in configuration (through the SqlServerSettings object)
  3. Starts a new transaction
  4. Enlists the IMessageContext in the current transaction using Jasper’s Sql Server-backed outbox support
  5. Calls post_one with its two arguments
  6. Commits the transaction
  7. Flushes out any queued up, outgoing messages into Jasper’s local sending queues
  8. Closes and disposes the open connection

What you don’t see in that generated code is maybe more important:

  • In this case, Jasper/Lamar didn’t have to resort to using a scoped IoC container of any kind when handling this HTTP request. That’s a lot of runtime overhead that just disappeared as compared to most other .Net frameworks that perform similar functions to Jasper
  • When something does go wrong, the exception stack traces are going to be much simpler because everything is happening in just a few methods now instead of having lots of wrapped objects implementing a middleware strategy
  • Very few object allocations compared to the way FubuMVC accomplished the exact same functionality, and that’s hugely advantageous for performance in high volume systems

I think a deeper dive blog post later is probably justified, but the implementation of the middleware is this class below:

    public class SqlTransactionFrame : AsyncFrame
    {
        private Variable _connection;
        private bool _isUsingPersistence;
        private Variable _context;

        public SqlTransactionFrame()
        {
            Transaction = new Variable(typeof(SqlTransaction), this);
        }

        public bool ShouldFlushOutgoingMessages { get; set; }

        public Variable Transaction { get; }

        public override void GenerateCode(GeneratedMethod method, ISourceWriter writer)
        {
            writer.Write($"await {_connection.Usage}.{nameof(SqlConnection.OpenAsync)}();");
            writer.Write($"var {Transaction.Usage} = {_connection.Usage}.{nameof(SqlConnection.BeginTransaction)}();");

            if (_context != null && _isUsingPersistence)
            {
                writer.Write($"await {typeof(SqlServerOutboxExtensions).FullName}.{nameof(SqlServerOutboxExtensions.EnlistInTransaction)}({_context.Usage}, {Transaction.Usage});");
            }

            Next?.GenerateCode(method, writer);

            // Commit after all the inner frames have executed
            writer.Write($"{Transaction.Usage}.{nameof(SqlTransaction.Commit)}();");

            if (ShouldFlushOutgoingMessages)
            {
                writer.Write($"await {_context.Usage}.{nameof(IMessageContext.SendAllQueuedOutgoingMessages)}();");
            }
        }

        // This is necessary to identify other things that need to
        // be written into the generated method as dependencies
        // to this Frame
        public override IEnumerable<Variable> FindVariables(IMethodVariables chain)
        {
            _isUsingPersistence = chain.IsUsingSqlServerPersistence();

            _connection = chain.FindVariable(typeof(SqlConnection));
            yield return _connection;

            if (ShouldFlushOutgoingMessages)
            {
                _context = chain.FindVariable(typeof(IMessageContext));
            }
            else
            {
                // Inside of messaging. Not sure how this is gonna work for HTTP yet
                _context = chain.TryFindVariable(typeof(IMessageContext), VariableSource.NotServices);
            }

            if (_context != null) yield return _context;
        }
    }

There’s a little bit of complicated goop around the code generation that’s necessary to allow Lamar to properly order the steps in the code generation, but the code generation itself is just writing C# code out — and the new C# string interpolation (finally) makes that pretty approachable in my opinion, especially compared to having to use .Net Expressions or emitting IL.
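If the interpolation idiom above is unfamiliar, here’s a tiny, self-contained illustration (with made-up names, not Lamar’s actual writer API) of why it works so nicely for emitting source code:

```csharp
using System;
using System.Text;
using System.Threading.Tasks;

// A made-up stand-in for the type the generated code will eventually call,
// here only so that nameof() has a real member to reference
public class FakeConnection
{
    public Task OpenAsync() { return Task.CompletedTask; }
}

public static class CodeWritingSketch
{
    public static void Main()
    {
        var code = new StringBuilder();

        // Pretend "sqlConnection1" is the variable name assigned during code generation
        var usage = "sqlConnection1";

        // nameof() keeps the emitted source refactoring-safe: rename OpenAsync
        // and this generated line changes with it at compile time
        code.AppendLine($"await {usage}.{nameof(FakeConnection.OpenAsync)}();");

        Console.Write(code);
        // Prints: await sqlConnection1.OpenAsync();
    }
}
```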


More Information

I wrote a blog post earlier this year called Jasper’s Roslyn-Powered “Special Sauce” that laid out some of the same arguments.

Using Roslyn for Runtime Code Generation in Marten presented an early form of the code generation and runtime compilation that ended up in Lamar. We ripped this out of Marten, but it still served as a proof of concept later for Jasper;)

Really, this amounts to what I think is an easier to use form of Aspect Oriented Programming.

Hello Calavista

I’m excited to announce that today I’m joining Calavista here in Austin. I’ll be providing oversight and technical direction for full stack, custom application development projects for our clients. I think I’ll mostly be working on the .Net platform, but I might get to be involved with other platforms here and there. After years of working remotely and mostly working on oddball, technical infrastructure tooling, I’m looking forward to simply getting back to project based work again — and having some measure of say over the entire technical stack;)

Several folks have asked me what’s going to happen with the OSS projects I work on and if I’d be able to use any of them in this new role. I can tell you that I don’t plan to walk away from any of my active OSS projects, but I have no idea what the future holds or whether or not it will be appropriate to use many of them in my new role. Offhand, I think I’ll be using Alba and maybe Storyteller on the first round of clients. After that, who knows?

I do know that I’ll be using my OSS work as a way to learn some technologies that are new to me but likely to show up in client project work. Definitely look for Jasper to suddenly get plenty of Azure friendly features and work much more tightly with MVC Core as part of that effort.


Retrospective on my OSS Career

Tomorrow is my last day at Willis Towers Watson (WTW) after 5+ years, and I felt like a bit of a retrospective was in order — partially to convince myself that I actually accomplished something in that time;) I did plenty of other things while I was there, but supporting and building OSS tools has been a major part of my official role.

It’s stupid long, but I think this is a decent timeline of the highlights and lowlights of my OSS career. I think I’m about to slow way down on OSS work after this year and in my new role, so I kind of want to reflect back a little bit on what I have done the past 15 years or so before turning the page.


I was in a miserable position as a non-coding architect at a Big IT shop, and I was scared to death that that meant my career was already over. I came up with a crazy scheme to build out an awesome new ORM for this new .Net thing as a way to prove to prospective employers that I really could write code. That work went nowhere, but I did actually manage to land a different job at ThoughtWorks.

From following the internal discussion lists there I learned about dependency injection and the very early PicoContainer IoC tool that had been built by some folks at TW. I realized that I could repurpose some of the code from the wreckage of my failed ORM project to be an IoC tool for .Net. Add some encouragement from TW colleagues, and StructureMap was born (and to this day, refuses to die).


The details aren’t important, but I went through some dark, then much happier but chaotic, personal times and work — especially OSS work — was an escape for me.

I made the big StructureMap 2.5 release, which Jimmy Bogard took to calling my Duke Nukem Forever release. It was supposed to be StructureMap’s “Python 3000” release that fixed all the original structural problems and put it permanently on a stable base for the future.

Narrator’s Voice: Everybody including Jeremy hated the new StructureMap 2.5 configuration DSL, he immediately added better alternatives that weren’t documented, people yelled at him for years about the missing documentation, and StructureMap was certainly not “done”

A couple of us at Dovetail started FubuMVC, an alternative framework for ASP.Net because we thought we were smarter than the real ASP.Net team working on MVC and had a different vision for how all of that should work.

We also rebooted my older Storyteller project, which I had worked on for years as improved tooling for the old FitNesse testing engine, and finally moved it toward being a full blown replacement for FitNesse.



I did a big pre-compiler workshop at CodeMash 2012 that I thought went great and I was encouraged.

I joined WTW at the end of 2012 after my one and only (and last) startup experience turned sour. The corner of WTW where I work was at the time the biggest user of FubuMVC — the very large, very ambitious OSS web framework that was my own personal white whale for a long time. A couple of us believed that this would be a perfect opportunity to keep FubuMVC going as an OSS project because we genuinely believed that it would be successful enough that we would be able to make a living doing consulting for shops wanting to use it.

At the time, we were still building new features into FubuMVC that were almost immediately being put into real projects and it was a fun time.

Narrator Voice: It did not turn out the way Jeremy planned


Some of my WTW colleagues and I did a big workshop at CodeMash 2013 on the big FubuMVC 1.0 release. It flopped hard, and I failed to reach the obvious conclusion that the jig was up and it was time to move on. Having stayed up all night the night before reading A Memory of Light didn’t help either, but c’mon, we’d waited 20+ years to get to the end of that thing.

I poured a lot more energy into FubuMVC that year on command line tooling and a full blown templating engine, which made for some really cool conference talks but didn’t really get used that much.

The big win for 2013 was building out an addon called FubuTransportation that used a lot of the basic infrastructure in FubuMVC to be a fairly robust service bus that still underpins quite a few systems at WTW today. That was my first real exposure to asynchronous messaging, and I take some significant pride in what we built.


I basically had to admit that FubuMVC had failed as an OSS project and I was admittedly kind of lost the rest of the year. I wrote a lot about why it failed and what I thought we had learned along the way. The biggest lesson to me was that I had never done a good job promoting, explaining, or documenting FubuMVC and its usage. Part of my theory here is that if we’d had more usage and visibility early, we could have more quickly identified and addressed usability issues. I swore that if I ever tried to do something like FubuMVC ever again that I’d do that part of the project much better.

It did give me a chance to swing back to StructureMap and finish the big 3.0 release that made some sweeping improvements to the internals, improved a serious performance problem that impacted our systems, killed off some old bugs, and fixed some pretty serious flaws. I genuinely believed that the 3.0 release would put it permanently on a stable base for the future.

Narrator’s Voice: Jeremy was wrong about this

I also got to play around with an eventually abandoned project called pg-events that was meant to be a Node.js based event store backed by Postgresql. It didn’t go anywhere by itself, but it absolutely helped form the basis for what became the Marten event sourcing functionality that’s actually been a success.

Later that year we started seeing some significantly encouraging information about “Project K” that eventually became .Net Core. All of that made me and my main contributors much more bullish about .Net again, and in a fit of pure stubbornness I started to envision how I would build a much better version of FubuMVC that took advantage of all the new stuff like Roslyn and fixed the technical flaws in FubuMVC. I started referring to that theoretical project as “Jasper” after my original hometown in Missouri.


My shop was in borderline insurrection over how our Storyteller integration testing was going. I gave a big talk outlining what I thought the challenges and problems were, including options to switch to SpecFlow or just use xUnit, but to my surprise, they chose a much improved Storyteller, which became the Storyteller 3.0 release.

I pretty well did a near ground-up rewrite of the testing engine, focusing on performance and ease of use. For the user interface, I used this work as a chance to learn React.js, which we had just adopted at work as a replacement for Angular.js. I had a blast doing UI work for the first time in years, and I’m genuinely pretty proud of how it all turned out. Storyteller 3 went into use later that year, and it’s mostly still going in our older projects that haven’t yet converted to .Net Core.

In late 2015, we knew that we needed to get our biggest system off of RavenDb before the next busy season. My boss at the time had a theory that we could exploit Postgresql’s new JSON support to act as a document database. I took on the work to go spike that out and see if we could build a library that could be swapped in for our existing RavenDb usage in that big application. The initial spiking went well, and off it went.

At one point my boss asked me what name I was using for the Postgresql-as-document-db library, because he was concerned that I’d choose a bad name like “Jasper.Data” — which of course was exactly the project name I was temporarily using. I mumbled “no,” quickly googled what the natural predators of ravens are, and settled on the name “Marten.”

Because of the bitter taste that FubuMVC left behind, I swore that I would do a much better job at the softer side of an OSS project and tried to blog up a storm about the early work. The Marten concept seemed to resonate with quite a few folks, and we had interest and early contributions almost right off the bat that did a lot to make the project go faster and be far more successful than I think it would have been otherwise.

I still wasn’t *completely* done with FubuMVC, and did a pretty large effort to consolidate all the remaining elements that we still used at work into a single library in the FubuMVC 3.0 release. I spent a lot of time streamlining the bootstrapping as well for quicker feedback cycles during integration testing. A lot of this work helped inform the internals of Jasper I’ll talk about later.

In late 2015 Kristian Hellang worked with me to make StructureMap work with the new ASP.Net Core DI abstractions and compliance specifications. While we were doing that, I also snuck in some work to overhaul StructureMap’s type scanning support based on years of user problems. With that work done, I pushed the StructureMap 4.0 release in the belief that I had now overhauled everything in StructureMap from the old 2.* architecture and that it was done for all time.

Narrator’s Voice: Jeremy was wrong. While the 4.* release was definitely an improvement, users still managed to find oddball problems and the performance definitely lagged newer IoC tools


We used Marten in production, just in time for our busy season. About the expected number of issues popped up in its shakedown cruise, but I still felt pretty good about how it went. Adoption in the outside world steadily crept up, and I got to do several podcasts that year about Marten.

Unfortunately, Marten caused quite a bit of conflict between myself and our centralized database team that ultimately contributed to me deciding to leave. I lost some enthusiasm for Marten because of this, and my activity within the Marten community declined because of it.


OSS wise, this year was going to be all about Jasper for me. FubuMVC was a web framework first that had messaging functionality bolted on later. Jasper, on the other hand, was going to be a much smaller tool to replace the older FubuMVC/FubuTransportation messaging in a way that would play nicely with the newer ASP.Net Core bits.

First though, I needed to clean my plate of all other outstanding work so I could concentrate on just Jasper:

  • Oakton was a bit of leftover fubu code we’d used for years for command line parsing that I converted to .Net Core and documented
  • Alba is also some leftover fubu code for declarative testing of HTTP handlers that I adapted for .Net Core, documented, and published to Nuget
  • Storyteller 4 moved Storyteller to ASP.Net Core. That turned out to be a huge chunk of work, but it was a great learning experience. I also added quite a few improvements for usage by non-developers that might not have paid off.
  • Storyteller 5 took a little better advantage of the new dotnet cli and made debugging specifications a lot easier
  • Marten 2.0 was a huge effort to reduce object allocations within Marten and improve runtime performance. It also cleaned up some internal trash and made it a lot easier going forward to add new types of database schema objects that have definitely paid off

Finally, I got some time to bear down and start working on Jasper, which has really been my main passion project for 3-4 years. I had a bunch of new colleagues on our architecture team at work who were interested in Jasper, and I thought we made a huge amount of progress fast. I didn’t do as much work to publicize it as I did with Marten because I just didn’t have a good feeling about my company’s continued support for Jasper after what happened with Marten.


I decided I was absolutely fed up with supporting StructureMap, and that it wasn’t viable to make the large scale changes it would take to fix the performance and other remaining issues. As an offramp for StructureMap users and as part of Jasper, I started the year by yanking some code out of Jasper into a new library I first called “BlueMilk” and now call Lamar, which is meant to be a much more performant, smaller replacement for StructureMap.

I’m also working full speed right now on Jasper with an expected 0.8 and 0.9 update coming in the next couple weeks. I don’t have a lot of interest yet, but I’m feeling very, very good about the technical direction, the usability, and the performance right now. I’m not 100% sure what WTW will do with it, but I’m committed to continuing with both Jasper and Lamar. I’m aiming for a 1.0 release of both by this fall.

And Beyond…

Several folks have asked me what will happen with Jasper or Marten when I start my new position next week. I honestly don’t know, but I fully intend to continue supporting Jasper, Marten, Storyteller, and Lamar. I don’t expect that I’ll have the bandwidth to write nearly as much OSS code as I have the past 5 years or so at WTW, but I wanted to slow down anyway.


What I’ve Learned

Heh, maybe not enough. I learned a lot of specific technical lessons and I’m a far better technical developer because of my OSS work. For more specifics though, I’d say:

  • Only do OSS work if you’re passionate about it and generally enjoy what you’re doing
  • My OSS work has absolutely had a positive impact on my career, just indirectly. My interview for my new position included a presentation on Marten. My previous position came about directly because of my OSS projects
  • It’s also an opportunity cost if you could have been learning something valuable in the time you spent on OSS work
  • I’ve met a lot of cool people through my OSS work and I have relationships I wouldn’t have had otherwise
  • Make your documentation easy to update and easy to contribute to by other folks
  • Talk about what you’re doing early and often