Tag Archives: .Net

Introducing Jasper as an In Process Command Bus for .Net

A couple weeks ago I wrote a blog post called If you want your OSS project to be successful… about trying to succeed with open source development efforts. One of the things I said was “don’t go dark” when you’re working on an OSS project. Not only did I go “dark” on Jasper for quite a while, I finally rolled out its 1.0 release during the worst global pandemic in a century. So all told, Jasper is by no means an exemplary project model for anyone to follow who’s trying to succeed with an OSS project.

This sample application is also explained and demonstrated in the documentation page Jasper as a Mediator.

Jasper is a new open source tool that can be used as an in process “command bus” inside of .Net Core 3 applications. Used locally, Jasper provides a superset of the “mediator” functionality popularized by MediatR that many folks like using within ASP.Net MVC Core applications to simplify controller code by offloading most of the processing to separate command handlers. Jasper certainly supports that functionality, but also adds rich options for asynchronously processing commands with built in resiliency mechanisms.

Part of the reason why Jasper went cold was waiting for .Net Core 3.0 to be released. With the advent of .Net Core 3.0, Jasper was somewhat re-wired to support the new generic HostBuilder for bootstrapping and configuration. With this model of bootstrapping, Jasper can easily be integrated into any kind of .Net Core application (MVC Core application, web api, windows service, console app, “worker” app) that uses the HostBuilder.

Let’s jump into seeing how Jasper could be integrated into a .Net Core Web API system. All the sample code I’m showing here is on GitHub in the “InMemoryMediator” project. InMemoryMediator uses EF Core with Sql Server as its backing persistence. Additionally, this sample shows off Jasper’s support for the “Outbox” pattern for reliable messaging without having to resort to distributed transactions.

To get started, I generated a project with the dotnet new webapi template. From there, I added some extra Nuget dependencies:

  1. Microsoft.EntityFrameworkCore.SqlServer — because we’re going to use EF Core with Sql Server as the backing persistence for this service
  2. Jasper — this is the core library, and all that you would need to use Jasper as an in process command bus
  3. Jasper.Persistence.EntityFrameworkCore — extension library to add Jasper’s “Outbox” and transactional support to EF Core
  4. Jasper.Persistence.SqlServer — extension library to add persistence for the “Outbox” support
  5. Swashbuckle.AspNetCore — just to add Swagger support

Your First Jasper Handler

Before we get into bootstrapping, let’s just start with how to build a Jasper command handler and how that would integrate with an MVC Core Controller. Keeping to a very simple problem domain, let’s say that we’re capturing, creating, and tracking new Item entities like this:

public class Item
{
    public string Name { get; set; }
    public Guid Id { get; set; }
}

So let’s build a simple Jasper command handler that would process a CreateItemCommand message, persist a new Item entity, and then raise an ItemCreated event message that would also be handled by Jasper, but asynchronously somewhere off to the side in a different thread. Lastly, we want things to be reliable, so we’re going to introduce Jasper’s integration with Entity Framework Core for “Outbox” support for the event messages being raised at the same time we create new Item entities.
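
The CreateItemCommand and ItemCreated types themselves aren’t spelled out in this post. Judging from how the handler below uses them, they’re just simple message DTOs along these lines:

public class CreateItemCommand
{
    public string Name { get; set; }
}

public class ItemCreated
{
    public Guid Id { get; set; }
}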

First though, to put things in context, we’re trying to get to the point where our controller classes mostly just delegate to Jasper through its ICommandBus interface and look like this:

public class UseJasperAsMediatorController : ControllerBase
{
    private readonly ICommandBus _bus;

    public UseJasperAsMediatorController(ICommandBus bus)
    {
        _bus = bus;
    }

    [HttpPost("/items/create")]
    public Task Create([FromBody] CreateItemCommand command)
    {
        // Using Jasper as a Mediator
        return _bus.Invoke(command);
    }
}

You can find a lot more information about what Jasper can do as a local command bus in the project documentation.

When using Jasper as a mediator, the controller methods become strictly about the mechanics of reading and writing data to and from the HTTP protocol. The real functionality is now in the Jasper command handler for the CreateItemCommand message, as coded with this Jasper Handler class:

public class ItemHandler
{
    // This attribute applies Jasper's transactional
    // middleware
    [Transactional]
    public static ItemCreated Handle(
        // This would be the message
        CreateItemCommand command,

        // Any other arguments are assumed
        // to be service dependencies
        ItemsDbContext db)
    {
        // Create a new Item entity
        var item = new Item
        {
            Name = command.Name
        };

        // Add the item to the current
        // DbContext unit of work
        db.Items.Add(item);

        // This event being returned
        // by the handler will be automatically sent
        // out as a "cascading" message
        return new ItemCreated
        {
            Id = item.Id
        };
    }
}

You’ll probably notice that there’s no interface or mandatory base class usage in the code up above. Similar to MVC Core, Jasper will auto-discover the handler classes and message handling methods from your code through type scanning. Unlike MVC Core and every other service bus kind of tool in .Net that I’m aware of, Jasper depends only on naming conventions rather than base classes or interfaces.

The only bit of framework “stuff” at all in the code above is the [Transactional] attribute that decorates the handler class. That adds Jasper’s own middleware for transaction and outbox support around the handling of just that message. At runtime, when Jasper handles the CreateItemCommand in that handler code up above, it:

  • Sets up an “outbox” transaction with the EF Core ItemsDbContext service being passed into the Handle() method as a parameter
  • Takes the ItemCreated message that “cascades” from the handler method and persists that message with ItemsDbContext so that both the outgoing message and the new Item entity are persisted in the same Sql Server transaction
  • Commits the EF Core unit of work by calling ItemsDbContext.SaveChangesAsync()
  • Assuming that the transaction succeeds, Jasper kicks the new ItemCreated message into its internal sending loop to speed it on its way. That outgoing event message could be handled locally in in-memory queues or sent out via external transports like Rabbit MQ or Azure Service Bus

If you’re interested in what the code above would look like without any of Jasper’s middleware or cascading message conventions, see the section near the bottom of this post called “Do it All Explicitly Controller”.

So that’s the MVC Controller and Jasper command handler, now let’s move on to integrating Jasper into the application.

Bootstrapping and Configuration

This is just an ASP.Net Core application, so you’ll probably be familiar with the generated Program.Main() entry point. To completely utilize Jasper’s extended command line support (really Oakton.AspNetCore), I’ll make some small edits to the out of the box generated file:

public class Program
{
    // Change the return type to Task to communicate
    // success/failure codes
    public static Task Main(string[] args)
    {
        return CreateHostBuilder(args)

            // This replaces Build().Start() from the default
            // dotnet new templates
            .RunJasper(args);
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)

            // You can do the Jasper configuration inline with a 
            // Lambda, but here I've centralized the Jasper
            // configuration into a separate class
            .UseJasper<JasperConfig>()

            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}

This isn’t mandatory, but there’s just enough Jasper configuration for this project with the outbox support that I opted to put the Jasper configuration in a new file called JasperConfig that inherits from JasperOptions:

public class JasperConfig : JasperOptions
{
    public override void Configure(IHostEnvironment hosting, IConfiguration config)
    {
        if (hosting.IsDevelopment())
        {
            // In development mode, we're just going to have the message persistence
            // schema objects dropped and rebuilt on app startup so you're
            // always starting from a clean slate
            Advanced.StorageProvisioning = StorageProvisioning.Rebuild;
        }

        // Just the normal work to get the connection string out of
        // application configuration
        var connectionString = config.GetConnectionString("sqlserver");

        // Setting up Sql Server-backed message persistence
        // This requires a reference to Jasper.Persistence.SqlServer
        Extensions.PersistMessagesWithSqlServer(connectionString);

        // Set up Entity Framework Core as the support
        // for Jasper's transactional middleware
        Extensions.UseEntityFrameworkCorePersistence();

        // Register the EF Core DbContext
        // You can register IoC services in this file in addition
        // to any kind of Startup.ConfigureServices() method,
        // but you probably only want to do it in one place or the 
        // other and not both.
        Services.AddDbContext<ItemsDbContext>(
            x => x.UseSqlServer(connectionString),

            // This is important! Using Singleton scoping
            // of the options allows Jasper + Lamar to significantly
            // optimize the runtime pipeline of the handlers that
            // use this DbContext type
            optionsLifetime:ServiceLifetime.Singleton);
    }
}

Returning a Response to the HTTP Request

In the UseJasperAsMediatorController controller, we just passed the command into Jasper and let MVC return an HTTP status code 200 with no other context. If instead, we wanted to send down the ItemCreated message as a response to the HTTP caller, we could change the controller code to this:

public class WithResponseController : ControllerBase
{
    private readonly ICommandBus _bus;

    public WithResponseController(ICommandBus bus)
    {
        _bus = bus;
    }

    [HttpPost("/items/create2")]
    public Task<ItemCreated> Create([FromBody] CreateItemCommand command)
    {
        // Using Jasper as a Mediator, and receive the
        // expected response from Jasper
        return _bus.Invoke<ItemCreated>(command);
    }
}

“Do it All Explicitly Controller”

Just for a comparison, here’s the CreateItemCommand workflow implemented inline in a controller action with explicit code to handle the Jasper “Outbox” support:

// This controller does all the transactional work and business
// logic all by itself
public class DoItAllMyselfItemController : ControllerBase
{
    private readonly IMessageContext _messaging;
    private readonly ItemsDbContext _db;

    public DoItAllMyselfItemController(IMessageContext messaging, ItemsDbContext db)
    {
        _messaging = messaging;
        _db = db;
    }

    [HttpPost("/items/create3")]
    public async Task Create([FromBody] CreateItemCommand command)
    {
        // Start the "Outbox" transaction
        await _messaging.EnlistInTransaction(_db);

        // Create a new Item entity
        var item = new Item
        {
            Name = command.Name
        };

        // Add the item to the current
        // DbContext unit of work
        _db.Items.Add(item);

        // Publish an event to anyone
        // who cares that a new Item has
        // been created
        var @event = new ItemCreated
        {
            Id = item.Id
        };

        // Because the message context is enlisted in an
        // "outbox" transaction, these outgoing messages are
        // held until the ongoing transaction completes
        await _messaging.Send(@event);

        // Commit the unit of work. This will persist
        // both the Item entity we created above, and
        // also a Jasper Envelope for the outgoing
        // ItemCreated message
        await _db.SaveChangesAsync();

        // After the DbContext transaction succeeds, kick out
        // the persisted messages in the context "outbox"
        await _messaging.SendAllQueuedOutgoingMessages();
    }
}

As a huge lesson learned from Jasper’s predecessor project, it’s always possible to easily bypass any kind of Jasper conventional “magic” and write explicit code as necessary.

There’s a lot more to say about Jasper and you can find a *lot* more information on its documentation website. I’ll be back sometime soon with more examples of Jasper, with probably some focus on functionality that goes beyond other mediator tools.

In the next post, I’ll talk about Jasper’s runtime execution pipeline and how it’s very different than other .Net tools with similar functionality (hint, it involves a boatload less generics magic than anything else).

 

Environment Checks and Better Command Line Abilities for your .Net Core Application

Oakton.AspNetCore is a new package built on top of the Oakton 2.0+ command line parser that adds extra functionality to the command line execution of ASP.Net Core and .Net Core 3.0 codebases. At the bottom of this blog post is a small section showing you how to set up Oakton.AspNetCore to run commands in your .Net Core application.

First though, you need to understand that when you use the dotnet run command to build and execute your ASP.Net Core application, you can pass arguments and flags both to dotnet run itself and to your application through the string[] args argument of Program.Main(). These two types of arguments or flags are separated by a double dash, like this example: dotnet run --framework netcoreapp2.0 -- ?. In this case, “--framework netcoreapp2.0” is used by dotnet run itself, and the values to the right of the “--” are passed into your application as the args array.

With that out of the way, let’s see what Oakton.AspNetCore brings to the table.

Extended “Run” Options

In the default ASP.Net Core templates, your application can be started with all its defaults by using dotnet run.  Oakton.AspNetCore retains that usage, but adds some new abilities with its “Run” command. To check the syntax options, type dotnet run -- ? run:

 Usages for 'run' (Runs the configured AspNetCore application)
  run [-c, --check] [-e, --environment <environment>] [-v, --verbose] [-l, --log-level <loglevel>] [--config:<prop> <value>]

  ---------------------------------------------------------------------------------------------------------------------------------------
    Flags
  ---------------------------------------------------------------------------------------------------------------------------------------
                        [-c, --check] -> Run the environment checks before starting the host
    [-e, --environment <environment>] -> Use to override the ASP.Net Environment name
                      [-v, --verbose] -> Write out much more information at startup and enables console logging
          [-l, --log-level <loglevel>] -> Override the log level
            [--config:<prop> <value>] -> Overwrite individual configuration items
  ---------------------------------------------------------------------------------------------------------------------------------------

To run your application under a different hosting environment name value, use a flag like so:

dotnet run -- --environment Testing

or

dotnet run -- -e Testing

To overwrite configuration key/value pairs, you’ve also got this option:

dotnet run -- --config:key1 value1 --config:key2 value2

which will overwrite the configuration keys for “key1” and “key2” to “value1” and “value2” respectively.
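
For example, if a controller reads those keys straight out of IConfiguration like the hypothetical snippet below, the overridden values are what it would see at runtime:

public class ConfigDemoController : ControllerBase
{
    private readonly IConfiguration _config;

    public ConfigDemoController(IConfiguration config)
    {
        _config = config;
    }

    [HttpGet("/config-demo")]
    public string Get()
    {
        // With "dotnet run -- --config:key1 value1", this returns "value1"
        return _config["key1"];
    }
}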

Lastly, you can have any configured environment checks for your application run immediately before the application starts by using this flag:

dotnet run -- --check

More on this function in the next section.

 

Environment Checks

I’m a huge fan of building environment tests directly into your application. Environment tests allow your application to self-diagnose issues with deployment, configuration, or environmental dependencies upfront that would impact its ability to run.

As a very real world example, let’s say your ASP.Net Core application needs to access another web service that’s managed independently by other teams and maybe, just maybe your testers have occasionally tried to test your application when:

  • Your application configuration has the wrong Url for the other web service
  • The other web service isn’t running at all
  • There’s some kind of authentication issue between your application and the other web service

In the real world project that spawned the example above, we added a formal environment check that would try to touch the health check endpoint of the external web service and throw an exception if we couldn’t connect to the external system. The next step was to execute our application as it was configured and deployed with this environment check as part of our Continuous Deployment pipeline. If the environment check failed, the deployment itself failed and triggered off the normal set of failure alerts letting us know to go fix the environment rather than letting our testers waste time on a bad deployment.

With all that said, let’s look at what Oakton.AspNetCore does here to help you add environment checks. Let’s say your application uses a single Sql Server database, and the connection string should be configured in the “connectionString” key of your application’s configuration. You would probably want an environment check just to verify at a minimum that you can successfully connect to your database as it’s configured.

In your ASP.Net Core Startup class, you could add a new service registration for an environment check like this example:

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    // Other registrations we don't care about...
    
    // This extension method is in Oakton.AspNetCore
    services.CheckEnvironment<IConfiguration>("Can connect to the application database", config =>
    {
        var connectionString = config["connectionString"];
        using (var conn = new SqlConnection(connectionString))
        {
            // Just attempt to open the connection. If there's anything
            // wrong here, it's going to throw an exception
            conn.Open();
        }
    });
}
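
Following the same pattern, the web service availability check described earlier might look something like the sketch below. The “otherServiceUrl” configuration key and the “/health” endpoint are hypothetical stand-ins for whatever the other team’s service actually exposes:

// Inside Startup.ConfigureServices(), alongside the registration above
services.CheckEnvironment<IConfiguration>("Can reach the downstream web service", config =>
{
    // Hypothetical configuration key for the other service's base Url
    var baseUrl = config["otherServiceUrl"];

    using (var client = new HttpClient { BaseAddress = new Uri(baseUrl) })
    {
        // Any connection failure or non-success status code throws,
        // which fails this environment check
        var response = client.GetAsync("/health").GetAwaiter().GetResult();
        response.EnsureSuccessStatusCode();
    }
});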

Now, during deployments or even just pulling down the code to run locally, we can run the environment checks on our application like so:

dotnet run -- check-env

Which in the case of our application above, blows up with output like this because I didn’t add configuration for the database in the first place:

Running Environment Checks
   1.) Failed: Can connect to the application database
System.InvalidOperationException: The ConnectionString property has not been initialized.
   at System.Data.SqlClient.SqlConnection.PermissionDemand()
   at System.Data.SqlClient.SqlConnectionFactory.PermissionDemand(DbConnection outerConnection)
   at System.Data.ProviderBase.DbConnectionInternal.TryOpenConnectionInternal(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
   at System.Data.ProviderBase.DbConnectionClosed.TryOpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
   at System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry)
   at System.Data.SqlClient.SqlConnection.Open()
   at MvcApp.Startup.<>c.<ConfigureServices>b__4_0(IConfiguration config) in /Users/jeremydmiller/code/oakton/src/MvcApp/Startup.cs:line 41
   at Oakton.AspNetCore.Environment.EnvironmentCheckExtensions.<>c__DisplayClass2_0`1.<CheckEnvironment>b__0(IServiceProvider s, CancellationToken c) in /Users/jeremydmiller/code/oakton/src/Oakton.AspNetCore/Environment/EnvironmentCheckExtensions.cs:line 53
   at Oakton.AspNetCore.Environment.LambdaCheck.Assert(IServiceProvider services, CancellationToken cancellation) in /Users/jeremydmiller/code/oakton/src/Oakton.AspNetCore/Environment/LambdaCheck.cs:line 19
   at Oakton.AspNetCore.Environment.EnvironmentChecker.ExecuteAllEnvironmentChecks(IServiceProvider services, CancellationToken token) in /Users/jeremydmiller/code/oakton/src/Oakton.AspNetCore/Environment/EnvironmentChecker.cs:line 31

If you ran this command during continuous deployment scripts, the command should cause your build to fail when it detects environment problems.

In some of Calavista’s current projects, we’ve been adding environment tests to our applications for items like:

  • Can our application read certain configured directories?
  • Can our application as it’s configured connect to databases?
  • Can our application reach other web services?
  • Are required configuration items specified? That’s been an issue as we’ve had to build out Continuous Deployment pipelines to many, many different server environments

I don’t see the idea of “Environment Tests” mentioned very often, and it might have other names I’m not aware of. I learned about the idea back in the Extreme Programming days from a blog post from Nat Pryce that I can’t find any longer, but there’s this paper from those days too.

 

Add Other Commands

I’ve frequently worked in projects where we’ve built parallel console applications that reproduce a lot of the same IoC and configuration setup to perform administrative tasks or add other diagnostics. It could be things like adding users, rebuilding an event store projection, executing database migrations, or loading some kind of data into the application’s database. What if instead, you could just add these directly to your .Net Core application as additional dotnet run -- [command] options? Fortunately, Oakton.AspNetCore lets you do exactly that, and even allows you to package up reusable commands in other assemblies that could be distributed by Nuget.

If you use Lamar as your IoC container in an ASP.Net Core application (or .Net Core 3.0 console app using the new unified HostBuilder), we now have an add on Nuget called Lamar.Diagnostics that will add new Oakton commands to your application that give you access to Lamar’s diagnostic tools from the command line. As an example, this library adds a command to write out the “WhatDoIHave()” report for the underlying Lamar IoC container of your application to the command line or a file like this:

dotnet run -- lamar-services

Now, using the command above as an example, to build or add your own commands, start by decorating the assembly containing the command classes with this attribute:

[assembly:OaktonCommandAssembly]

Having this attribute on the assembly tells Oakton.AspNetCore to search it for additional Oakton commands. There is no other setup necessary.

If your command needs to use the application’s services or configuration, have the Oakton input type inherit from the NetCoreInput type in Oakton.AspNetCore like so:

public class LamarServicesInput : NetCoreInput
{
    // Lots of other flags
}

Next, the new command for “lamar-services” is just this:

[Description("List all the registered Lamar services", Name = "lamar-services")]
public class LamarServicesCommand : OaktonCommand<LamarServicesInput>
{
    public override bool Execute(LamarServicesInput input)
    {
        // BuildHost() will return an IHost for your application
        // if you're using .Net Core 3.0, or IWebHost for
        // ASP.Net Core 2.*
        using (var host = input.BuildHost())
        {
            // The actual execution using host.Services
            // to get at the underlying Lamar Container
        }

        return true;
    }


}

Getting Started

In both cases I’m assuming that you’ve bootstrapped your application with one of the standard project templates like dotnet new webapi or dotnet new mvc. In both cases, you’ll first add a reference to the Oakton.AspNetCore Nuget. Next, break into the Program.Main() entry point method in your project and modify it like the following samples.

If you’re absolutely cutting edge and using ASP.Net Core 3.0:

public class Program
{
    public static Task<int> Main(string[] args)
    {
        return CreateHostBuilder(args)
            
            // This extension method replaces the calls to
            // IWebHost.Build() and Start()
            .RunOaktonCommands(args);
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(x => x.UseStartup<Startup>());
    
}

For what I would guess is most folks, here’s the ASP.Net Core 2.* setup (this also works for ASP.Net Core 3.0):

public class Program
{
    public static Task<int> Main(string[] args)
    {
        return CreateWebHostBuilder(args)
            
            // This extension method replaces the calls to
            // IWebHost.Build() and Start()
            .RunOaktonCommands(args);
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>();
    
}

The two changes from the template defaults are to:

  1. Change the return value to Task<int>
  2. Replace the calls to IHost.Build() and IHost.Start() to use the RunOaktonCommands(args) extension method that hangs off IWebHostBuilder and the new unified IHostBuilder if you’re targeting netcoreapp3.0.

And that’s it, you’re off to the races.

 

Alba 3.1 supercharges your ASP.Net Core HTTP Contract Testing

I was just able to push a Nuget for Alba 3.1 that adds support for ASP.Net Core 3.0 and updated the documentation website to reflect the mild additions. Big thanks are in order to Lauri Kotilainen and Jonathan Mezach for making this release happen.

If you’re not familiar with Alba, in its current incarnation it’s a wrapper around the ASP.Net Core TestServer that adds a whole lot of useful helpers to make your testing of ASP.Net Core HTTP services much more declarative and easier than it is with TestServer alone.
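
If you haven’t seen it before, an Alba scenario test reads roughly like this. The method names below are from memory of Alba’s scenario API, so treat this as a sketch and check the documentation for the exact signatures:

public class HomeEndpointTests
{
    [Fact]
    public async Task home_page_returns_200()
    {
        // Bootstraps the real application through its Startup class
        using (var system = SystemUnderTest.ForStartup<Startup>())
        {
            await system.Scenario(_ =>
            {
                _.Get.Url("/");
                _.StatusCodeShouldBeOk();
            });
        }
    }
}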

Alba is descended from some built in support for HTTP contract testing in FubuMVC that I salvaged and adapted for usage with ASP.Net Core a few years back. Finally, Alba has used TestServer rather than its own homegrown HTTP runner since the big 3.0 release this January.

Lamar and Oakton join the .Net Core 3.0 Party

Like many other .Net OSS authors, I’ve been putting in some time this week making sure that various .Net tools support the brand spanking new .Net Core and ASP.Net Core 3.0 bits that were just released. First up are Oakton and Lamar, with the rest of the SW Missouri projects to follow soon.

Oakton

Oakton is yet another command line parser tool for .Net. The main Oakton 2.0.1 library adds support for the Netstandard2.1 target, but does not change in any other way. The Oakton.AspNetCore library got a full 2.0.0 release. If you’re using ASP.Net Core v2.0, your usage is unchanged. However, if you are targeting netcoreapp3.0, the extension methods now depend on the newly unified IHostBuilder rather than the older IWebHostBuilder and the bootstrapping looks like this now:

public class Program
{
    public static Task<int> Main(string[] args)
    {
        return CreateHostBuilder(args)
            
            // This extension method replaces the calls to
            // IWebHost.Build() and Start()
            .RunOaktonCommands(args);
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(x => x.UseStartup<Startup>());
    
}

 

Oakton.AspNetCore provides an improved and extensible command line experience for ASP.Net Core. I’ve been meaning to write a bigger blog post about it, but that’s gonna wait for another day.

Lamar

The 3.0 support in Lamar was unpleasant, because the targeting covers the spread from .Net 4.6.1 to netstandard2.0 to netstandard2.1, and the test library covered several .Net runtimes. After this, I’d really like to not have to type “#if/#else/#endif” ever, ever again (and I do religiously use ReSharper/Rider’s ALT-CTRL-J “surround with” feature, which helps, but you get my point).

The new bits and releases are:

  • Lamar v3.2.0 adds a netstandard2.1 target
  • LamarCompiler v2.1.0 adds a netstandard2.1 target (probably only used by Jasper, but who knows who’s picked it up, so I updated it)
  • LamarCodeGeneration v1.1.0 adds a netstandard2.1 target but is otherwise unchanged
  • Lamar.Microsoft.DependencyInjection v4.0.0 — this is the adapter library to replace the built in DI container in ASP.Net Core with Lamar. This is a full point release because some of the method signatures changed. I deleted the special Lamar handling for IOption<T> and ILogger<T> because they no longer added any value. If your application targets netcoreapp3.0, the UseLamar() method and its overloads hang off of IHostBuilder instead of IWebHostBuilder. If you are remaining on ASP.Net Core v2.*, UseLamar() still hangs off of IWebHostBuilder
  • Lamar.Diagnostics v1.1.0 — assuming that you use the Oakton.AspNetCore command line adapter for your ASP.Net Core application, adding a reference to this library adds new commands to expose the Lamar diagnostic capabilities from the command line of your application. This version targets both netstandard2.0 for ASP.Net Core v2.* and netstandard2.1 for ASP.Net Core 3.*.
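
To make the netcoreapp3.0 usage mentioned above concrete, hanging UseLamar() off of the unified IHostBuilder looks roughly like this:

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)

        // Replaces the built in DI container with Lamar
        .UseLamar()

        .ConfigureWebHostDefaults(x => x.UseStartup<Startup>());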

 

The challenges

The biggest problem was that in both of these projects, I wanted to retain support for .Net 4.6+ and netstandard2.0 or netcoreapp2.* runtime targets. That’s unfortunately meant a helluva lot of conditional compilation and conditional Nuget references per target framework. In some cases, the move from the old ASP.Net Core <=2.* IWebHostBuilder to the newer, unified IHostBuilder took some doing in both the real code and in the unit tests. Adding to the fun is that there are real differences sometimes between the old, full .Net framework, netcoreapp2.0, netcoreapp2.1/2, and certainly the brand new netcoreapp3.0.

Another little hiccup was that the dotnet pack pathing support was reverted to how it worked originally in the project.json early days, but that broke all our build scripts and had to be adjusted (the artifacts path is now relative to the current directory rather than to the binary target path like it was).

At least on AppVeyor, we had to force the existing image to install the latest .Net SDK as part of the build because the image we were using didn’t yet support the brand new .Net SDK. I’d assume that is very temporary and I can’t speak to other hosted CI providers. If you’re wondering how to do that, see this example from Lamar that I got from Jimmy Byrd.

 

Other Projects I’m Involved With

  • StructureMap — If someone feels like doing it, I’d take a pull request to StructureMap.Microsoft.DependencyInjection to put out a .Net Core 3.0 compatible version (really just means supporting the new IHostBuilder instead of or in addition to the old IWebHostBuilder).
  • Alba — Some other folks are working on that one, and I’m anticipating an Alba on ASP.Net Core 3.0 release in the next couple days. I’ll write a follow up blog post when that’s ready
  • Marten — I’m not anticipating much effort here, but we should at least have our testing libraries target netcoreapp3.0 and see what happens. We did have some issues with netcoreapp2.1, so we’ll see I guess
  • Jasper — I’ve admittedly had Jasper mostly on ice until .Net Core 3.0 was released. I’ll start work on Jasper next week, but that is going to be a true conversion up to netcoreapp3.0 and some additional structural changes to finally get to a 1.0 release of that thing.
  • Storyteller — I’m not sure yet, but I’ve started doing a big overhaul of Storyteller that’s gotten derailed by the .Net Core 3.0 stuff

 

Storyteller 5.0 – Streamlined CLI, Netstandard 2.0, and easier debugging

I published the Storyteller 5.0 release last night. I punted on doing any kind of big user interface overhaul for now, and just released the back end improvements on their own with some vague idea that there’d be an improved or at least restyled user interface later this year.

The key improvements are:

  • Netstandard 2.0 support
  • An easier getting started story
  • Streamlined command line usage
  • Easier “F5 debugging” for specifications in your IDE
  • No changes whatsoever to your Fixture code from 4.0

Getting Started with Storyteller 5

Previous versions of Storyteller have been problematic for new users getting started and setting up projects with the right Nuget dependencies. I felt like things got a little better with the dotnet cli, but the enduring problem with that is how few .Net developers seem to be using it or familiar with it. When you use Storyteller 5, you need two dependencies in your Storyteller specification project:

  1. A reference to the Storyteller 5.0 assembly via Nuget
  2. The dotnet-storyteller command line tool referenced as a dotnet cli tool in your project, and that’s where most of the trouble comes in.

To start up a new Storyteller 5.0 specification project, first make the directory where you want the project to live. Next, use the dotnet new console command to create a new project with a csproj file and a Program.cs file.

In your csproj file, replace the contents with this, or just add the package reference for Storyteller and the cli tool reference for dotnet-storyteller as shown below:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <OutputType>Exe</OutputType>
  </PropertyGroup>

  <ItemGroup>
    <!-- The version numbers below are placeholders; use the latest 5.* releases -->
    <PackageReference Include="Storyteller" Version="5.0.0" />
  </ItemGroup>

  <ItemGroup>
    <DotNetCliToolReference Include="dotnet-storyteller" Version="5.0.0" />
  </ItemGroup>

</Project>
Next, we need to get into the entry point to this new console application and change the Program.Main() method to activate the Storyteller engine within this new project:

    public class Program
    {
        public static int Main(string[] args)
        {
            return StorytellerAgent.Run(args);
        }
    }

Internally, the StorytellerAgent is using Oakton to parse the arguments and carry out one of these commands:

  ------------------------------------------------------------------------------
    Available commands:
  ------------------------------------------------------------------------------
       agent -> Used by dotnet storyteller to remote control the Storyteller specification engine
         run -> Executes Specifications and Writes Results
        test -> Try to start and warmup the system under test for diagnostics
    validate -> Use to validate specifications for syntax errors or missing grammars or fixtures
  ------------------------------------------------------------------------------

If you execute the console application with no arguments like this:

|> dotnet run

It will execute all the specifications and write the results to a file named “stresults.htm.”

You can customize the running behavior by passing in optional flags with the pattern dotnet run -- run --flag flagvalue like this example that just writes the results file to a different location:

|> dotnet run -- run Arithmetic -r ./artifacts/results.htm

If you’re not already familiar with the dotnet cli, what’s going on here is that anything to the right of the “--” double dash is considered to be the command line arguments passed into your application’s Main() method. The “run” argument tells the StorytellerAgent that you actually want to run specifications and it’s unfortunately not redundant and very much mandatory if you want to customize how Storyteller runs specifications.

See the Storyteller 5.0 quickstart project for a working example.

Running the Storyteller Specification Editor

Assuming that you’ve got the cli tools reference to dotnet-storyteller and you’ve executed `dotnet restore` at least once (compiling through VS.Net or Rider does this for you), the only thing you need to do to launch the specification editor tool is this from the command line:

|> dotnet storyteller

F5 Debugging

Debugging complicated Storyteller specifications has been its Achilles’ heel from the very beginning. You can always attach a debugger to a running Storyteller process, but that’s clumsy (quicker in Rider than VS.Net, but still). As a cheap but effective improvement in v5, you can run a single specification from the command line with this signature:

|> dotnet run -- run "Suite1 / ChildSuite1 / Specification Name"

This is admittedly pretty ugly, but remember that you can tell either Rider or VS.Net to pass arguments to your console application when your press F5 to run an application in debug mode. I utilize this quite a bit in Jasper development to troubleshoot individual specifications. Here’s what the configuration looks like for this in Rider:

 

[Screenshot: the “RunSingleSpec” run configuration in Rider]

See the “Program arguments” specifically. Once the path to the specification is configured, I can just hit F5 and jump right into a debugging session running just that specification.

We looked pretty hard at supporting the dotnet test tooling so you could run Storyteller specifications from either Visual Studio.Net’s or Rider/ReSharper’s test runners, but all I could think about after trying to reverse engineer xUnit’s tooling around that was a certain Monty Python scene.

Introducing Oakton — Command line parsing minus the usual cruft

As the cool OSS kids would say, “I made another thing.” Oakton is a library I maintain and use for command line parsing in the console applications I build. For those who’ve followed me for a long time, Oakton is an improved version of the command line parsing in FubuCore that now targets Netstandard 1.3 as well as .Net 4.5.1 and 4.6 on the full framework.

What sets Oakton apart from the couple dozen other tools like this in the .Net ecosystem is how it allows you to cleanly separate the command line parsing from your actual command execution so that you can write cleaner code and more easily test your command execution in automated tests.

Here’s the quick start example from the documentation that’ll have prettier code output. Let’s say you just want a command that will print out a name with an optional title and the option to override the color of the text.

A command in Oakton comes in two parts, a concrete input class that just establishes the required arguments and optional flags through public fields or settable properties:

    public class NameInput
    {
        [Description("The name to be printed to the console output")]
        public string Name { get; set; }
        
        [Description("The color of the text. Default is black")]
        public ConsoleColor Color { get; set; } = ConsoleColor.Black;
        
        [Description("Optional title preceding the name")]
        public string TitleFlag { get; set; }
    }

The [Description] attributes are optional and embed usage messages for the integrated help output.

Now then, the actual command would look like this:

    [Description("Print somebody's name")]
    public class NameCommand : OaktonCommand<NameInput>
    {
        public NameCommand()
        {
            // The usage pattern definition here is completely
            // optional
            Usage("Default Color").Arguments(x => x.Name);
            Usage("Print name with specified color").Arguments(x => x.Name, x => x.Color);
        }

        public override bool Execute(NameInput input)
        {
            var text = input.Name;
            if (!string.IsNullOrEmpty(input.TitleFlag))
            {
                text = input.TitleFlag + " " + text;
            }
            
            // This is a little helper in Oakton for getting
            // cute with colors in the console output
            ConsoleWriter.Write(input.Color, text);


            // Just telling the OS that the command
            // finished up okay
            return true;
        }
    }

Again, the [Description] attributes and the Usage() calls in the constructor are all optional, but add more information to the user help display. You’ll note that your command is completely decoupled from any and all text parsing and does nothing but do work against the single input argument. That’s done very intentionally and we believe that this sets Oakton apart from most other command line parsing tools in .Net that too freely commingle parsing with the actual functionality.

Finally, you need to execute the command in the application’s main function:

    class Program
    {
        static int Main(string[] args)
        {
            // As long as this doesn't blow up, we're good to go
            return CommandExecutor.ExecuteCommand<NameCommand>(args);
        }
    }

Oakton is fairly full-featured, so you have the options to:

  1. Expose help information in your tool
  2. Support all the commonly used primitive types like strings, numbers, dates, and booleans
  3. Use idiomatic Unix style naming and usage conventions for optional flags
  4. Support multiple commands in a single tool with different arguments and flags (because the original tooling was too inspired by the git command line)

 

So a couple questions:

  • Does the .Net world really need a new library for command line parsing? Nope, there are dozens out there and a semi-official one somewhere inside of ASP.Net Core. It’s no big deal on my part though because other than the docs I finally wrote up this week, this code is years old and “done.”
  • Where’s the code? The GitHub repo is here.
  • Is it documented, because you used to be terrible at that? Yep, the docs are at http://jasperfx.github.io/oakton.
  • If I really want to use this, where can I ask questions? You can always use GitHub issues, or try the Gitter room.
  • Are there any real world examples of this actually being used? Yep, try Marten.CommandLine, the dotnet-stdocs tool, and Jasper.CommandLine.
  • What’s the license? Apache v2.

Where does the name “Oakton” come from?

A complete lack of creativity on my part. Oakton is a bustling non-incorporated area not far from my grandparents’ farm on the back way to Lamar, MO that consists of a Methodist church, a cemetery, the crumbling ruins of the general store, and maybe 3-4 farmhouses. Fun fact, when I was really small, I tagged along with my grandfather when he’d take tractor parts to get fixed by the blacksmith that used to be there.

An Experience Report of Moving a Complicated Codebase to the CoreCLR

TL;DR – I like the CoreCLR, project.json project system, and the new dotnet CLI so far, but there are a lot of differences in API that could easily burn you when you try to port existing .Net code to the CoreCLR.

As I wrote about a couple weeks ago, I’ve been working to port my Storyteller project to the CoreCLR en route to it being cross platform and generally modernized. As of earlier this week I think I can declare that it’s (mostly) running correctly in the new world order. Moreover, I’ve been able to dogfood Storyteller’s documentation generation feature on Mac OSX today without much trouble so far.

As semi-promised, here’s my experience report of moving an existing codebase over to targeting the CoreCLR, the usage of the project.json project system, and the new dotnet CLI.

 

Random Differences

  • AppDomain is gone, and you’ll have to use a combination of AppContext and DependencyContext to replace some of the information you’ve gotten from AppDomain.CurrentDomain about the running process. This is probably an opportunity for polyfills
  • The Thread class is very different and a couple methods (Yield(), Abort()) were removed. This is causing me to eventually go down to pinvoke in Storyteller
  • A lot of convenience methods that were probably just syntactic sugar anyway have been removed. I’ve found differences with Xml support and Streams. Again, I’ve gotten around this by adding extension methods to the Netstandard/CoreCLR code to add back in some of these things (an example of that pattern is sketched just below)
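
As a purely hypothetical example of that polyfill pattern (this is not code from Storyteller itself), an extension method can add back a removed convenience member by delegating to what is still available:

using System.IO;

#if !NET45
public static class StreamPolyfills
{
    // Hypothetical polyfill: restore a Close()-style convenience method
    // by delegating to Dispose(), which the CoreCLR target still exposes
    public static void Close(this Stream stream)
    {
        stream.Dispose();
    }
}
#endif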

 

Project.json May be Just a Flash in the Pan, but it’s a Good One

Having one tiny file that controls how a library is compiled, Nuget dependencies, and how that library is packed up into a Nuget later has been a huge win. Even better yet, I really appreciate how the project.json system handles transitive dependencies so you don’t have to do so much bookkeeping within your downstream projects. I think at this point I even prefer the project.json + dotnet restore combination over the Nuget workflow we have been using with Paket (which I still think was much better than out of the box Nuget was).

I’m really enjoying having the wildcard file inclusions in project.json so you’re not constantly having all the aggravation from merging the Xml-based csproj files. It’s a little bit embarrassing that it’s taken Microsoft so long to address that problem, but I’ll take it now.

I really hope that the new, trimmed down csproj file format is as usable as project.json. Honestly, assuming that their promised automatic conversion works as advertised, I’d recommend going to project.json as an interim solution rather than waiting.

 

I Love the Dotnet CLI

I think the new dotnet CLI is going to be a huge win for .Net development and it’s maybe my favorite part of .Net’s new world order. I love being able to so quickly restore packages, build, run tests, and even pack up Nuget files without having to invest time in writing out build scripts to piece it altogether.

I’ve long been a fan of using Rake for automating build scripts and I’ve resisted the calls to move to an arbitrarily different Make clone. With some of my simpler OSS projects, I’m completely forgoing build scripts in favor of just depending on the dotnet cli commands. For example, I have a small project called Oakton using the dotnet CLI, and its entire CI build script is just:

rmdir artifacts
dotnet restore src
dotnet test src/Oakton.Testing
dotnet pack src/Oakton --output artifacts --version-suffix %build.number%

In Storyteller itself, I removed the Rake script altogether in favor of just a small shell script that delegates to both NPM and dotnet to do everything that needs to be done.

I’m also a fan of the “dotnet test” command too, especially when you want to quickly run the tests for just one .Net framework version. I don’t know if this is the test adapter or the CoreCLR itself being highly optimized, but I’ve been seeing a dramatic improvement in test execution time since switching over to the dotnet cli. In Marten I think it cut the test execution time of the main testing suite down by 60-70% somehow.

The best source I’ve found on the dotnet CLI has been Scott Hanselman’s blog posts on DotNetCore.

 

AppDomain No Longer Exists (For Now)

AppDomain’s getting ripped out of the CoreCLR (yes, I know they’re supposed to come back in Netstandard 2.0, but who knows when that’ll be) was the single biggest problem I had moving Storyteller over to the CoreCLR. I outlined the biggest problem in a previous post on how testing tools generally use AppDomain’s to isolate the system under test from the test harness itself.

I ended up having Storyteller spawn a separate process to run the system under test in a way so that users can rebuild that system without having to first shut down the Storyteller specification editor tool. The first step was to replace the little bit of Remoting I had been using between AppDomain’s with a different communication scheme that just shot JSON back and forth over sockets. Fortunately, I had already been doing something very similar through a remoting proxy, so it wasn’t that bad of a change.

The next step was to change Storyteller testing projects from class libraries to executables that could be invoked to start up the system under test and start listening on a supplied port for the JSON messages described in the previous paragraph. This was a lot of work, but it might end up being more usable. Instead of depending on something else to have pre-compiled the system under test, Storyteller can start up the system under test with a spawned call to “dotnet run” that does any necessary compilation for you. It also makes it pretty easy to direct Storyteller to run the system under test under different .Net versions.

Of course, the old System.Diagnostics.Process class behaves differently in the CoreCLR (and across platforms too), and that’s still causing me some grief.

 

Reflection Got Scrambled in the CoreCLR

So, that sucked…

Infrastructure or testing tools like Storyteller will frequently need to use a lot of reflection. Unfortunately, the System.Reflection namespace in the CoreCLR has a very different API than classic .Net and that has consistently been a cause of friction as I’ve ported code to the CoreCLR. The challenge is even worse if you’re trying to target both classic .Net and the CoreCLR.

Here’s an example, in classic .Net I can check whether a Type is an enumeration type with “type.IsEnum.” In the CoreCLR, it’s “type.GetTypeInfo().IsEnum.” Not that big a change, but basically anything you need to do against a Type is now on the paired TypeInfo and you now have to bounce through Type.GetTypeInfo().

One way or another, if you want to multi-target both CoreCLR and .Net classic, you’ll be picking up the idea of “polyfills” you see all over the Javascript world. The “GetTypeInfo()” method doesn’t exist in .Net 4.6, so you might do:

public static Type GetTypeInfo(this Type type)
{
    return type;
}

to make your .Net 4.6 code look like the CoreCLR equivalent. Or in some cases, I’ve just built polyfill extension methods in the CoreCLR to make it look like the older .Net 4.6 API:

#if !NET45
        public static IEnumerable<Type> GetInterfaces(this Type type)
        {
            return type.GetTypeInfo().GetInterfaces();
        }

#endif

Finally, you’re going to have to dip into conditional compilation fairly often like this sample from StructureMap’s codebase:

    public static class AssemblyLoader
    {
        public static Assembly ByName(string assemblyName)
        {
#if NET45
            // This method doesn't exist in the CoreCLR
            return Assembly.Load(assemblyName);
#else
            return Assembly.Load(new AssemblyName(assemblyName));
#endif
        }
    }

There are other random changes as well that I’ve bumped into, especially around generics and Assembly loading. Again, not something that the average codebase is going to get into, but if you try to do any kind of type scanning over the file system to auto-discover assemblies at runtime, expect some churn when you go to the CoreCLR.

If you’re thinking about porting some existing .Net code to the CoreCLR and it uses a lot of reflection, be aware that that’s potentially going to be a lot of work to convert over.

Definitely see Porting a .Net Framework Library to .Net Core from Michael Whelan.

Moving Storyteller to the CoreCLR and going Cross Platform

This is half me thinking out loud and half experience report with the new .Net world order. If you want to know more about what Storyteller is, there’s an online webinar here or my blog post about the Storyteller 3 reboot and vision.

Storyteller 3 is an OSS acceptance test automation tool that we use at work for executable specifications and end to end regression testing. Storyteller doesn’t have a huge number of users, but the early feedback has been mostly positive from the community and it gets plenty of pull requests that have helped quite a bit with usability. Now that my Marten work is settling down, I’ve been able to start concentrating on Storyteller again.

My current focus for the moment is making Storyteller work on the CoreCLR as a precursor to being truly cross platform. Going a little farther than that, I’m proposing some changes to its architecture that I think will make it relatively painless to use the existing user interface and test runners with completely different, underlying test engines (I’m definitely thinking about a Node.js based runner and doing a port of the testing engine to Scala or maybe even Swift or Kotlin way down the road as a learning exercise).

My first step has been to chip away at Storyteller’s codebase by slowly replacing dependencies that aren’t supported on the CoreCLR (this work is in the project.json branch on Github):

Current State

  • Targets .Net 4.6
  • Self-hosted w/ Nowin
  • FubuMVC for the web application
  • Fleck for web sockets support
  • Tests execute in a separate AppDomain with all communication done via sending Json messages through .Net Remoting
  • FubuCore for the command line parsing
  • Uses Fixie for unit testing
  • RhinoMocks for mocking
  • Csproj/MSBuild for compiling, Paket for Nuget management
  • A single Nuget for the testing engine library and the test running/documentation generation executable
  • No Visual Studio or VS Code integration

Proposed End State

  • Targets .Net 4.6 and the CoreCLR
  • Self-hosted with Kestrel
  • Raw ASP.Net Core middleware
  • Kestrel/ASP.Net Core for Websockets
  • Tests will execute in a separate process, and the communication between processes will all be done with sockets
  • Using Oakton for the command line parsing
  • Uses xUnit for unit testing
  • NSubstitute for mocking
  • The dotnet CLI for CI builds and all Nuget management, project.json for all the projects
  • A nuget for the .Net testing engine library, a second one for the command line tooling for specification running and editing, a third nuget for the documentation generation
  • A fourth nuget for integrating Storyteller with dotnet test
  • A VS Code plugin?

Some thoughts on the work so far:

  • Kestrel and the bits of ASP.Net Core I’m using have been pretty easy to get up and going. The Websockets support doesn’t feel terribly discoverable, but it was easy to find good enough examples and get it going. I was a little irritated with the ASP.Net team for effectively ditching the community-driven OWIN specification (yes, I know that ASP.Net Core supports OWIN, but it’s an emulation) for their own middleware signature. However, I think that what they did do is probably going to be much more discoverable and usable for the average user. I will miss referring to OWIN as the “mystery meat” API.
  • I actually like the new dotnet CLI and I’m looking forward to it stabilizing a bit. I think that it does a lot to improve the .Net development experience. It’s an upside down world when an alt.net founder like me is defending a new Microsoft tool that isn’t universally popular with more mainstream .Net folks.
  • I still like Fixie and I hope that project continues to move forward, but xUnit is the only game in town for the dotnet CLI and CoreCLR.
  • Converting the projects to the new project.json format was relatively harmless compared to the nightmare I had doing the same with StructureMap, but I’m not yet targeting the CoreCLR.
  • I’ve always been gun shy about attempting any kind of Visual Studio.Net integration, but from a cursory look around xUnit’s dotnet runner code, I’m thinking that a “dotnet test” adapter for Storyteller is very feasible.
  • The new Storyteller specification editing user interface is a React.js-based SPA. I’m thinking that that architecture should make it fairly simple to build a VS Code extension for Storyteller.

 

The Killer Problem: AppDomain’s are Gone (for now)

Storyteller, like most tools for automating tests against .Net, relies on AppDomain’s to isolate the application under test from the test harness so that you can happily rebuild your application and rerun without having to completely drop and restart the testing tool. Great, and other than .Net Remoting not being the most developer-friendly thing in the world, that’s worked out fairly well in Storyteller 3 (it had been a mess in Storyteller 1 & 2).

There’s just one little problem, AppDomain’s and Remoting are no longer in the CoreCLR until at least next year (and I’m not wanting to count on them coming back). It would be perfect if you were able to unload the new AssemblyLoadContext, but as far as I know that’s not happening any time soon.

At this point, I’m thinking that Storyteller will work by running tests in a completely separate process to be launched and shut down by the Storyteller test running executable. To make that work, users will have to make their Storyteller specification project be an executable that bootstraps their system under test and then pass an ISystem object and the raw command line parameters into some kind of Storyteller runner.

I’ve been experimenting with using raw sockets for the cross process communication and so far, so good. I’m just shooting json strings back and forth. I thought about using HTTP between the processes, but I came down to just feeling like that would be too heavy. I also considered using our LightningQueues project in its “ZeroMQ” mode, but again, I opted for lighter weight. The other advantage for me is that the “dotnet test” adapter communication is done by json over sockets as well.
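
To make that concrete, here’s a minimal sketch of the kind of json-over-sockets plumbing I have in mind. None of this is Storyteller’s actual code; the type and method names are placeholders, and a real implementation would need message framing, error handling, and reconnection logic:

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

public static class JsonOverSockets
{
    // The test-runner side: accept a connection from the process under
    // test and read newline-delimited json strings off the wire
    public static async Task ListenAsync(int port)
    {
        var listener = new TcpListener(IPAddress.Loopback, port);
        listener.Start();

        using (var client = await listener.AcceptTcpClientAsync())
        using (var reader = new StreamReader(client.GetStream()))
        {
            string json;
            while ((json = await reader.ReadLineAsync()) != null)
            {
                Console.WriteLine("received: " + json);
            }
        }
    }

    // The system-under-test side: connect back to the runner and shoot
    // a json string across, one message per line
    public static async Task SendAsync(int port, string json)
    {
        using (var client = new TcpClient())
        {
            await client.ConnectAsync(IPAddress.Loopback, port);
            using (var writer = new StreamWriter(client.GetStream()))
            {
                await writer.WriteLineAsync(json);
                await writer.FlushAsync();
            }
        }
    }
}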

I think this strategy of running separate processes would make Storyteller a little more complicated to set up compared to the existing “just make a class library and Storyteller will find your custom ISystem if one exists” strategy. My big hope is that the combination of depending on a separate command line launched process and shooting json across sockets will make it much easier to bring on alternative test running engines that would be usable with the existing Storyteller user interface tooling.


Offline Event Processing in Marten with the new “Async Daemon”

The feature I’m talking about here was very difficult to write, brand new, and definitely in need of some serious user testing from anyone interested in kicking the tires on it. We’re getting a lot of interest in the Marten Gitter room about doing the kinds of use cases that the async daemon described below is meant to address. This was also the very last feature on Marten’s “must have for 1.0” list, so there’s a new 1.0-alpha nuget for Marten. 1.0 is still at least a couple months away, but it’s getting closer.

A couple weeks ago I pulled the trigger on a new, but long planned, feature in Marten we’ve been calling the “async daemon” that allows users to build and update projected views against the event store data in a background process hosted in your application or an external service.

To put this in context, let’s say that you are building an application to track the status of GitHub repositories with event sourcing for the persistence. In this application, you would record events for things like:

  • Project started
  • A commit pushed into the main branch
  • Issue opened
  • Issue closed
  • Issue re-opened

There’s a lot of value to be had by recording the raw event data, but you still need to frequently see a rolled up view of each project that can tell you the total number of open issues, closed issues, how many lines of code are in the project, and how many unique contributors are involved.
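
For reference, the events in that list can be modeled as plain classes. The sketch below isn’t necessarily the exact code from the Marten tests; the property names are just inferred from the Apply() methods in the ActiveProject view coming up next:

public class ProjectStarted
{
    public string Name { get; set; }
    public string Organization { get; set; }
}

public class IssueCreated { }
public class IssueReopened { }
public class IssueClosed { }

public class Commit
{
    public string UserName { get; set; }
    public int Additions { get; set; }
    public int Deletions { get; set; }
}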

To do that rollup, you can build a new document type called ActiveProject just to present that information. Optionally, you can use Marten’s built-in support for making aggregated projections across a stream by adding Apply([Event Type]) methods to consume events. In my end-to-end tests for the async daemon, I used this version of ActiveProject (the raw code is on GitHub if the formatting is cut off for you):

    public class ActiveProject
    {
        public ActiveProject()
        {
        }

        public ActiveProject(string organizationName, string projectName)
        {
            ProjectName = projectName;
            OrganizationName = organizationName;
        }

        public Guid Id { get; set; }
        public string ProjectName { get; set; }

        public string OrganizationName { get; set; }

        public int LinesOfCode { get; set; }

        public int OpenIssueCount { get; set; }

        private readonly IList<string> _contributors = new List<string>();

        public string[] Contributors
        {
            get { return _contributors.OrderBy(x => x).ToArray(); }
            set
            {
                _contributors.Clear();
                _contributors.AddRange(value);
            }
        }

        public void Apply(ProjectStarted started)
        {
            ProjectName = started.Name;
            OrganizationName = started.Organization;
        }

        public void Apply(IssueCreated created)
        {
            OpenIssueCount++;
        }

        public void Apply(IssueReopened reopened)
        {
            OpenIssueCount++;
        }

        public void Apply(IssueClosed closed)
        {
            OpenIssueCount--;
        }

        public void Apply(Commit commit)
        {
            // Fill() is a list extension method (not part of the BCL) that
            // only adds the user name if it isn't already in the list
            _contributors.Fill(commit.UserName);
            LinesOfCode += (commit.Additions - commit.Deletions);
        }
    }

Now, you can update projected views in Marten at the time of event capture with what we call “inline projections.” You could also build the aggregated view on demand from the underlying event data. Both of those solutions can be appropriate in some cases, but if our GitHub projects are very active with a fair amount of concurrent writes to any given project stream, we’d probably be much better off moving the aggregation updates to a background process.
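
For comparison, registering the same aggregation as an “inline” projection is just a different call on the event store options. This is a sketch from memory, so the exact registration API may vary a bit by Marten version:

StoreOptions(_ =>
{
    // Update the ActiveProject documents in the same transaction
    // that captures the incoming events
    _.Events.InlineProjections.AggregateStreamsWith<ActiveProject>();
});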

That’s where the async daemon comes into play. If you have a Marten document store, you can start up a new instance of the async daemon like so (the underlying code shown below is in GitHub):

[Fact] 
public async Task build_continuously_as_events_flow_in()
{
    // In the test here, I'm just adding an aggregation for ActiveProject
    StoreOptions(_ =>
    {
        _.Events.AsyncProjections.AggregateStreamsWith<ActiveProject>();
    });

    using (var daemon = theStore.BuildProjectionDaemon(logger: _logger, settings: new DaemonSettings
    {
        LeadingEdgeBuffer = 1.Seconds()
    }))
    {
        // Start all of the configured async projections
        daemon.StartAll();

        // This just publishes event data
        await _fixture.PublishAllProjectEventsAsync(theStore);


        // Runs all projections until there are no more events coming in
        await daemon.WaitForNonStaleResults().ConfigureAwait(false);

        await daemon.StopAll().ConfigureAwait(false);
    }

    // Compare the actual data in the ActiveProject documents with 
    // the expectation
    _fixture.CompareActiveProjects(theStore);
}

In the code sample above I’m starting an async daemon to run the ActiveProject projection updating, and running a series of events through the event store. The async daemon is continuously detecting newly available events and applying those to the correct ActiveProject document. This is the only place in Marten where we utilize the idea of eventual consistency to allow for faster writes, but it’s clearly warranted in some cases.

Rebuilding a Projection From Existing Data

If you’re going to use event sourcing with read side projections (the “Q” in your CQRS architecture), you’re probably going to need a way to rebuild projected views from the existing data to fix bugs or add new data. You’ll also likely introduce new projected views after the initial rollout to production. You’ll absolutely need to rebuild projected view data in development as you’re iterating your system.

To that end, you can also use the async daemon to completely tear down and rebuild the population of a projected document view from the existing event store data.

// This is just some test setup to establish the DocumentStore
StoreOptions(_ => { _.Events.AsyncProjections.AggregateStreamsWith<ActiveProject>(); });

// Publishing some pre-canned event data
_fixture.PublishAllProjectEvents(theStore);


using (var daemon = theStore.BuildProjectionDaemon(logger: _logger, settings: new DaemonSettings
{
    LeadingEdgeBuffer = 0.Seconds()
}))
{
    await daemon.Rebuild<ActiveProject>().ConfigureAwait(false);
}

Taken from the tests for the async daemon on Github.

Other Functionality Possibilities

The async daemon can be described as just a mechanism to accurately and reliably execute the events in order through the IProjection interface shown below:

    public interface IProjection
    {
        Type[] Consumes { get; }
        Type Produces { get; }

        AsyncOptions AsyncOptions { get; }
        void Apply(IDocumentSession session, EventStream[] streams);
        Task ApplyAsync(IDocumentSession session, EventStream[] streams, CancellationToken token);
    }

Today, the only built-in projections in Marten are a one-for-one transformation of a certain event type to a view document and the aggregation-by-stream use case shown above in the ActiveProject example. However, there’s nothing preventing you from creating your own custom IProjection classes (see the rough sketch after this list) to:

  • Aggregate views across streams grouped by some kind of classification like region, country, person, etc.
  • Project event data into flat relational tables for more efficient reporting
  • Do complex event processing
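
As a very rough sketch of that first idea, a custom cross-stream rollup might look something like this. The OrganizationTotals document is made up purely for illustration, I’m omitting how the projection would get registered with the daemon, and I’m assuming EventStream exposes its events with a Data property on each one, so treat this as a shape rather than working code:

using System;
using System.Threading;
using System.Threading.Tasks;
// plus the Marten namespaces for IProjection, IDocumentSession,
// EventStream, and AsyncOptions

// A hypothetical rollup document keyed by organization rather than by stream
public class OrganizationTotals
{
    public string Id { get; set; }          // the organization name
    public int ProjectCount { get; set; }
}

public class OrganizationTotalsProjection : IProjection
{
    public Type[] Consumes => new[] { typeof(ProjectStarted) };
    public Type Produces => typeof(OrganizationTotals);

    public AsyncOptions AsyncOptions { get; } = new AsyncOptions();

    public void Apply(IDocumentSession session, EventStream[] streams)
    {
        foreach (var stream in streams)
        {
            foreach (var @event in stream.Events)
            {
                var started = @event.Data as ProjectStarted;
                if (started == null) continue;

                // Group across streams by organization instead of by stream id
                var totals = session.Load<OrganizationTotals>(started.Organization)
                             ?? new OrganizationTotals { Id = started.Organization };

                totals.ProjectCount++;
                session.Store(totals);
            }
        }
    }

    public Task ApplyAsync(IDocumentSession session, EventStream[] streams, CancellationToken token)
    {
        Apply(session, streams);
        return Task.CompletedTask;
    }
}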


What’s Next for the Async Daemon

The async daemon is the only major thing missing from the Marten documentation, and I need to fill that in soon. This blog post is just a down payment on the async daemon docs.

I cut a lot of content out on how the async daemon works. Since this was one of the hardest things I’ve ever coded, I’d like to write a post next week just about designing and building the async daemon and see if I can trick some folks into effectively doing a code review on it ;)

This was my first usage of the TPL Dataflow library and I was very pleasantly surprised by how much I liked using it. If I’m ambitious enough, I’ll write a post later on building producer/consumer queues and using back pressure with the dataflow classes.
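
In the meantime, here’s the gist of what I mean by a producer/consumer queue with back pressure in TPL Dataflow. This is just a standalone sketch, not code lifted from the async daemon itself:

using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

public static class DataflowSample
{
    public static async Task Run()
    {
        // The bounded buffer is the queue. When it fills up, SendAsync()
        // below awaits until the consumer catches up -- that's the back pressure
        var queue = new BufferBlock<string>(new DataflowBlockOptions
        {
            BoundedCapacity = 100
        });

        // The consumer side works off messages one at a time
        var consumer = new ActionBlock<string>(async message =>
        {
            await Task.Delay(10); // simulate doing some real work
            Console.WriteLine(message);
        }, new ExecutionDataflowBlockOptions
        {
            BoundedCapacity = 10,
            MaxDegreeOfParallelism = 1
        });

        queue.LinkTo(consumer, new DataflowLinkOptions { PropagateCompletion = true });

        // The producer side
        for (var i = 0; i < 1000; i++)
        {
            await queue.SendAsync("message " + i);
        }

        queue.Complete();
        await consumer.Completion;
    }
}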

Testing HTTP Handlers with No Web Server in Sight

FubuMVC 2.0 and 3.0 introduced some tooling I called “Scenarios” that allow users to write mostly declarative integration tests against the entire HTTP pipeline in memory without having to host the application in a web server. I promised a coworker that I would write a blog post about using Scenarios for an internal team that wants to start using it much more in their work. A week of procrastination later and here you go:

NOTE: All samples are using FubuMVC 3.0

Why Integration Tests?

From the very beginning, we tried very hard to make unit testing FubuMVC action methods in isolation as easy as possible. I think we largely succeeded in that goal. However, within the context of handling an HTTP request, FubuMVC, like most web frameworks, will potentially wrap those action methods with various middleware strategies for cross-cutting technical concerns like authentication, authorization, logging, transaction management, and content negotiation. At some point, to truly exercise an HTTP endpoint you really do need to write an integration test that exercises the entire chain of HTTP handlers for an HTTP request exactly the way it will be configured inside the running application.

Toward that end, I built a class called EndpointDriver in early versions of FubuMVC that you could use to write integration tests against a FubuMVC application hosted with an embedded Katana server. This early tooling just wrapped WebClient with a FubuMVC-specific fluent interface for resolving URLs, setting common options like the content-type and accept headers, and verifying parts of the HTTP response. Below is a sample from our content negotiation support integration tests in FubuMVC 1.3 (“endpoints” is a reference to the EndpointDriver object for the running application):

[Test]
public void force_to_json_with_querystring()
{
    endpoints.Get("conneg/override/Foo?Format=Json", acceptType: "text/html")
        .ContentTypeShouldBe(MimeType.Json)
        .ReadAsJson<OverriddenResponse>()
        .Name.ShouldEqual("Foo");
}

EndpointDriver was fine at first, but our test library started getting slower as we added more and more tests and the fluent interface just never kept up with everything we needed for HTTP testing (plus I think that WebClient is awkward to use).

Using OWIN for HTTP “Scenarios”

As part of my FubuMVC 2.0 effort last year, I knew that I wanted a much better mechanism than the older EndpointDriver for doing integration testing of HTTP endpoints. Specifically, I wanted:

  • To be able to run HTTP requests and verify the response without having to take the performance hit of a web server
  • To run a FubuMVC application as it would be configured in production
  • To completely configure any part of an HTTP request
  • To be able to declaratively express multiple assertions against the expected response
  • To utilize FubuMVC’s support for “reverse URL resolution” for more traceable tests
  • Access to the raw HTTP request and response for anything unusual you would need to do that didn’t have a specific helper

The end result was a mechanism I called “Scenarios” that exploited FubuMVC’s OWIN support to run HTTP requests in memory using this signature off of the new FubuRuntime object I explained in an earlier blog post:

OwinHttpResponse Scenario(Action<Scenario> configuration)

The Scenario object models the HTTP request and provides a way to specify expectations about the HTTP response for commonly used things like HTTP status codes, header values, and checking for the presence of string values in the HTTP response body. If need be, you also have access to FubuMVC’s abstractions for the entire HTTP request and response (more on this later).

To make this concrete, let’s say that you’re working through a “Hello, World” exercise with FubuMVC with this class and action method that just returns the text “Hello, World” when you issue a GET to the root “/” url of an application:

public class HomeEndpoint
{
    public string Index()
    {
        return "Hello, World";
    }
}

A scenario test for the action above would look like this code below:

using (var runtime = FubuRuntime.Basic())
{
    // Execute the home route and verify
    // the response
    runtime.Scenario(_ =>
    {
        _.Get.Url("/");

        _.StatusCodeShouldBeOk();
        _.ContentShouldBe("Hello, World");
        _.ContentTypeShouldBe("text/plain");
    });
}

In the scenario above, I’m issuing a GET request to the “/” url of the application and specifying that the resulting status code should be HTTP 200, “content-type” response header should be “text/plain”, and the exact contents of the response body should be “Hello, World.” When a Scenario is executed, it will run every single assertion instead of quitting on the first failure and report on every failed expectation in the specification output. This behavior is valuable when you have to author specifications with slower running scenario setup.

Specifying URLs

FubuMVC has a model for reverse URL lookup from any endpoint method or the input model that we exploited in Scenarios for traceable tests:

host.Scenario(_ =>
{
    // Specify a GET request to the Url that runs an endpoint method:
    _.Get.Action<InMemoryEndpoint>(e => e.get_memory_hello());

    // Or specify a POST to the Url that would handle an input message:
    _.Post

        // This call serializes the input object to Json using the 
        // application's configured JSON serializer and setting
        // the contents on the Request body
        .Json(new HeaderInput {Key = "Foo", Value1 = "Bar"});

    // Or specify a GET by an input object to get the route parameters
    _.Get.Input(new InMemoryInput { Color = "Red" });
});

I like the reverse URL lookup instead of specifying URLs directly in the scenarios because:

  1. It makes your scenario tests traceable to the actual handling code
  2. It insulates your scenarios from changes to the URL structures later

Checking the Response Body

For the 3.0 work I did a couple months ago, I fleshed out the Scenario support with more mechanisms to analyze the HTTP response body:

host.Scenario(_ =>
{
    // set up a request here

    // Read the response body as text
    var bodyText = _.Response.Body.ReadAsText();

    // Read the response body by deserializing Json
    // into a .net type with the application's
    // configured Json serializer
    var output = _.Response.Body.ReadAsJson<MyResponse>();

    // If you absolutely have to work with Xml...
    var xml = _.Response.Body.ReadAsXml();
});

Some Other Things…

I’ll happily explain the details of this list on request, but here are some other attributes of Scenarios that FubuMVC supports right now:

  • You can specify expected values for HTTP response headers
  • You can assert on status codes and descriptions
  • There are helpers to send Json or Xml serialized data based on an input object message
  • There is a mechanism that allows you to disable all security middleware in the application for a single Scenario, which has frequently been helpful in testing
  • You have access to the underlying IoC container for the running application from the Scenario if you need to resolve and use application services
  • FubuMVC is now StructureMap 4.0-only for its IoC usage, so we’re able to rely on StructureMap’s child container feature to resolve services during a Scenario execution from a unique child container per run. This allows you to replace services in your application with fakes, mocks, and stubs in a way that prevents your fake services from impacting more than one test.

Scenarios in Jasper

If you didn’t see my blog post earlier this year, FubuMVC is getting a complete reboot into a new project called Jasper late this year/early next year. I absolutely plan on bringing the Scenario support forward into Jasper very early, but this time around we’re completely dropping all of FubuMVC’s HTTP abstractions in favor of directly using the OWIN environment dictionary as the single model of HTTP requests and responses. My thought right now is that we’ll invest heavily in extension methods hanging off of IDictionary<string, object> for commonly used operations against that OWIN dictionary.
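
To give you an idea of what I mean, the helpers might end up looking something like this. The “owin.RequestPath” and “owin.RequestMethod” keys come from the OWIN specification, but the extension method names here are purely hypothetical:

using System.Collections.Generic;

public static class OwinEnvironmentExtensions
{
    // Pull the request path out of the raw OWIN environment dictionary
    public static string RequestPath(this IDictionary<string, object> env)
    {
        object path;
        return env.TryGetValue("owin.RequestPath", out path) ? (string) path : null;
    }

    // Pull the HTTP method (GET, POST, ...) out of the OWIN dictionary
    public static string HttpMethod(this IDictionary<string, object> env)
    {
        object method;
        return env.TryGetValue("owin.RequestMethod", out method) ? (string) method : null;
    }
}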

To some extent, we’re hoping as well that there will be a good ecosystem of OWIN helpers from other people and projects that will be usable from within Jasper.

Other Reading