Avoid the Serialization Burn with Marten’s Patching API

This is a logical follow-up to my last post on Document Transformations in Marten with Javascript, and yet another signpost on the way to Marten 1.0. Instead of making you write your own Javascript, Marten supplies the “Patch API” described here for very common scenarios.

Before I started working on Marten, I read an article comparing the performance of writing and querying JSON data between MongoDB and Postgresql (I couldn’t find the link when I was writing this post). Long story short, Postgresql very clearly comes out on top in terms of throughput, but the author still wanted to stick with MongoDB because of its ability to do document patching where you’re able to change elements within the persisted document without having to first load it into your application, change it, and persist the whole thing back. It’s a fair point and a realistic scenario that I used with RavenDb’s Patch Commands in the past.

Fortunately, that argument is now a moot point because we have a working “Patch API” model in Marten for doing document patches. This feature does require PLV8 to be enabled in your Postgresql database if you want to play with it in our latest nugets.

For an example, let’s say that you want to change the user name of a User document without first loading it. To update a single property or field in a document by its Id, it’s just this:

public void change_user_name(IDocumentSession session, Guid userId, string newName)
{
    session.Patch<User>(userId).Set(x => x.UserName, newName);
    session.SaveChanges();
}

When IDocumentSession.SaveChanges() (or its async equivalent) is called, it will send all the patching requests queued up with all of the pending document changes in a single database call.

I should also point out that the Set() mechanism can be used with nested properties or fields and non-primitive types.
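As a sketch of that capability, where Address and City are hypothetical members of the User document used purely for illustration:

```csharp
public void change_user_city(IDocumentSession session, Guid userId, string city)
{
    // Patches a value nested inside the persisted JSON document
    // without ever loading the User into memory first
    session.Patch<User>(userId).Set(x => x.Address.City, city);
    session.SaveChanges();
}
```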

Looking at another example, what if you just want to add a new role to an existing User? For that, Marten exposes the Append() method:

public void append_role(IDocumentSession session, Guid userId, string role)
{
    session.Patch<User>(userId).Append(x => x.Roles, role);
    session.SaveChanges();
}

In the case above, the new role will be appended to the “Roles” collection in the persisted JSON document in the Postgresql database. Again, this method can be used for nested or deep properties or fields and with non-primitive elements.

As a third example, let’s say that you only want to increment some kind of counter in the JSON document:

public void increment_login_count(IDocumentSession session, Guid userId)
{
    session.Patch<User>(userId).Increment(x => x.LoginCount);
    session.SaveChanges();
}

When the above command is issued, Marten will find the current numeric value in the JSON document, add 1 to it (a different increment amount can be passed as an optional argument not shown here), and persist the new JSON data without ever fetching it into the client. The Increment() method can be used with ints, longs, doubles, and floats.
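For example, a patch that bumps the counter by more than one might look like this (a sketch; double-check the optional argument against the current Marten API):

```csharp
public void add_login_count(IDocumentSession session, Guid userId)
{
    // Increment LoginCount by 5 instead of the default of 1
    session.Patch<User>(userId).Increment(x => x.LoginCount, 5);
    session.SaveChanges();
}
```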

Lastly, if you want to make a patch update to many documents by some kind of criteria, you can do that too:

public void append_role_to_internal_users(IDocumentSession session, string role)
{
    // Adds the role to all internal users
    session.Patch<User>(x => x.Internal)
        .Append(x => x.Roles, role);

    session.SaveChanges();
}

Other “Patch” mechanisms include the ability to rename a property or field within the JSON document and the ability to insert an item into a child collection at a given index. More patch operations are planned for later releases as well.
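As a rough sketch of those two operations (the Rename() and Insert() signatures shown here are my best understanding and may differ from the shipped API):

```csharp
public void other_patch_operations(IDocumentSession session, Guid userId, string role)
{
    // Rename a persisted JSON property from a hypothetical old
    // name "role_list" to match the current Roles member
    session.Patch<User>(userId).Rename("role_list", x => x.Roles);

    // Insert a new role at the front of the Roles collection
    session.Patch<User>(userId).Insert(x => x.Roles, role, 0);

    session.SaveChanges();
}
```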

Document Transformations in Marten with Javascript

In all likelihood, Marten would garner much more rapid adoption if we were able to build on top of Sql Server 2016 instead of Postgresql. Hopefully, .Net folks will be willing to try switching databases after they see how many helpful capabilities Postgresql has that Sql Server can’t match yet. This blog post, yet one more stop along the way to Marten 1.0, demonstrates how we’re taking advantage of Postgresql’s built-in Javascript engine (PLV8).

A Common .Net Approach to “Readside” Views

Let’s say that you’re building HTTP services, and some of your HTTP endpoints will need to return some sort of “readside” representation of your persisted domain model. For the purpose of making Marten shine, let’s say that you’re going to need to work with hierarchical data. In a common .Net technology stack, you’d:

  1. Load the top level model object through Entity Framework or some other kind of ORM. EF would issue a convoluted SQL query with lots of OUTER JOINs so that it can make a single call to the database to fetch the entire hierarchy of data you need from various tables. EF would then proceed to iterate through the sparsely populated recordset coming back and turn it into the actual domain model object represented by the data, using lots of internally generated code.
  2. You’d then use something like AutoMapper to transform the domain model object into a “read side” Data Transfer Object (view models, etc.) that’s more suitable to going over the wire to clients outside of your service.
  3. Serialize your DTO to a JSON string and write that out to the HTTP response.

Depending on how deep your hierarchy is, #1 can make the database query expensive. The serialization in #3 is also somewhat CPU intensive.

As a contrast, here’s an example of how you might approach that exact same use case with Marten:

    var json = session.Query<User>()
        .Where(x => x.Id == user.Id)
        .TransformToJson("get_fullname").Single();

In the usage above, I’m retrieving the data for a single User document from Marten and having Postgresql transform the persisted JSON data to the format I need for the client with a pre-loaded Javascript transformation. In the case of Marten, the workflow is to:

  1. Find the entire hierarchical document JSON in a single database row by its primary key
  2. Apply a Javascript function to transform the persisted JSON to the format that the client needs and return a JSON representation as a String
  3. Stream the JSON from the Linq query directly to the HTTP response without any additional serialization work
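Put together in an HTTP endpoint, the handler reduces to returning the transformed JSON string directly. The method shape below is just a sketch, since the web framework plumbing is outside of Marten:

```csharp
public string get_user_fullname(IDocumentSession session, Guid userId)
{
    // The string returned here is already the JSON payload, so it
    // can be written straight to the HTTP response body with no
    // DTO mapping or serialization step in the application
    return session.Query<User>()
        .Where(x => x.Id == userId)
        .TransformToJson("get_fullname")
        .Single();
}
```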

Not to belabor the point too much, but the Marten mechanics are simpler and probably much more efficient at runtime because:

  • The underlying database query is much simpler if all the data is in one field in one row
  • The Javascript transformation probably isn’t that much faster or slower than the equivalent AutoMapper mechanics, so let’s call that a wash
  • You don’t have the in memory allocations to load a rich model object just to immediately transform that into a completely different model object
  • You avoid the now unnecessary cost of serializing the DTO view models to a JSON string

A couple additional points:

  • Jimmy Bogard reviewed this and pointed out that in some cases you could bypass the Domain Model to DTO transformation by selecting straight to the DTO, but that wouldn’t cover all cases by any means. The same limitations apply to Marten and its Select() transformation features.
  • To get even more efficient in your Marten usage, the Javascript transformations can be used inside of Marten’s Compiled Query feature to avoid the CPU cost of repetitively parsing Linq statements. You can also do Javascript transformations inside of batched queries – which can of course, also be combined with the aforementioned compiled queries;)


Now, let’s see how it all works…


Building the Javascript Function

The way this works in Marten is that you write your Javascript function into a single file and export the main function with the “module.exports = ” CommonJS syntax. Marten is expecting the main function to have the signature “function(doc)” and return the transformed document.

Here’s a sample Javascript function I used to test this feature that works against a User document type:

module.exports = function(doc) {
    return {fullname: doc.FirstName + ' ' + doc.LastName};
}

Given the persisted JSON for a User document, this transformation would return a different object that would then be streamed back to the client as a JSON string.

There is some thought and even infrastructure for doing Javascript transformations with multiple, related documents, but that feature won’t make it into Marten 1.0.

To load the function into a Javascript-enabled Postgresql schema, Marten exposes this method:

    var store = DocumentStore.For(_ =>
    {
        _.Connection(ConnectionSource.ConnectionString);

        // Let Marten derive the transform name
        // from the file name
        _.Transforms.LoadFile("get_fullname.js");

        // or override the transform name
        _.Transforms.LoadFile("get_fullname.js", "fullname");
    });

Internally, Marten will wrap a PLV8 function wrapper around your Javascript function like this:

CREATE OR REPLACE FUNCTION public.mt_transform_get_fullname(doc jsonb)
  RETURNS jsonb AS
$BODY$

  var module = {exports: {}};

  module.exports = function (doc) {
      return {fullname: doc.FirstName + ' ' + doc.LastName};
  }

  var func = module.exports;

  return func(doc);

$BODY$
  LANGUAGE plv8 IMMUTABLE STRICT;


My intention with the approach shown above was to allow users to write simple Javascript functions and be able to test their transformations in simple test harnesses like Mocha. By having Marten wrap the raw Javascript in a generated PLV8 function, users won’t have to be down in the weeds worrying about Postgresql mechanics.

Depending on the configuration, Marten is able to build, or rebuild, the function to match the current version of the Javascript code on the first usage of that transformation. The Javascript transforms are also part of our schema management support for database migrations.


Transformations

The persisted JSON documents in Marten are a reflection of your .Net classes. Great, that makes it absurdly easy to keep the database schema in sync with your application code at development time — especially compared to the typical development process against a relational database. However, what happens when you really do need to make breaking changes or additions to a document type, but you already have loads of persisted documents in your Marten database with the old structure?

To that end, Marten allows you to use Javascript functions to alter the existing documents in the database. As an example, let’s go back to the User document type and assume for some crazy reason that we didn’t immediately issue a user name to some subset of users. As a default, we might just assign their user names by combining their first and last names like so:

module.exports = function (doc) {
    doc.UserName = (doc.FirstName + '.' + doc.LastName).toLowerCase();

    return doc;
}

To apply this transformation to existing rows in the database, Marten exposes this syntax:

    var store = DocumentStore.For(_ =>
    {
        _.Connection(ConnectionSource.ConnectionString);

        _.Transforms.LoadFile("default_username.js");
    });

    store.Transform
        .Where<User>("default_username", x => x.UserName == null);

When you run the code above, Marten will issue a single SQL statement that issues an UPDATE to the rows matching the given criteria by applying the Javascript function above to alter the existing document. No data is ever fetched or processed in the actual application tier.
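If you want to apply a transform to every document of a type rather than by criteria, there is a companion mechanism for that as well. The call below is how I understand the API, so verify the method name against the Marten documentation:

```csharp
public void default_all_usernames(IDocumentStore store)
{
    // Applies the "default_username" Javascript function to every
    // persisted User document in a single UPDATE statement
    store.Transform.All<User>("default_username");
}
```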

Supercharging Marten with the Jil Serializer

Some blog posts you write for attention or self promotion, some you write just because you’re excited about the topic, and some posts you write just to try to stick some content for later into Google searches. This one’s all about users googling for this information down the road.

Out of the box, Marten uses Newtonsoft.Json as its primary JSON serialization mechanism. While Newtonsoft has outstanding customizability and the most flexible feature set, you can opt to forgo some of that flexibility in favor of higher performance by switching instead to the Jil serializer.

In the last couple of months I finally made a big effort to be able to run Marten’s test suite using the Jil serializer. I had to make one small adjustment to our JilSerializer (turning on includeInherited) and a distressingly intrusive structural change to make the internal handling of Enum values (!) in Linq queries dependent upon the serializer’s behavior for enum storage.

At this point, we’re not supplying a separate Marten.Jil adapter package, but the code to swap in Jil is just this class:

public class JilSerializer : ISerializer
{
    private readonly Options _options 
        = new Options(dateFormat: DateTimeFormat.ISO8601, includeInherited:true);

    public string ToJson(object document)
    {
        return JSON.Serialize(document, _options);
    }

    public T FromJson<T>(string json)
    {
        return JSON.Deserialize<T>(json, _options);
    }

    public T FromJson<T>(Stream stream)
    {
        return JSON.Deserialize<T>(new StreamReader(stream), _options);
    }

    public object FromJson(Type type, string json)
    {
        return JSON.Deserialize(json, type, _options);
    }

    public string ToCleanJson(object document)
    {
        return ToJson(document);
    }

    public EnumStorage EnumStorage => EnumStorage.AsString;
}

And this one line of code in your document store set up:

var store = DocumentStore.For(_ =>
{
    _.Connection("the connection string");

    // Replace the ISerializer w/ the JilSerializer
    _.Serializer<JilSerializer>();
});

A couple things to note about using Jil in place of Newtonsoft:

  • The enumeration persistence behavior is different from Newtonsoft as it stores enum values by their string representation. Most Marten users seem to prefer this anyway, but watch the value of the “EnumStorage” property in your custom serializer.
  • We’ve tried very hard with Marten to ensure that the Json stored in the database doesn’t require .Net type metadata, but the one thing we can’t address is having polymorphic child collections. For that particular use case, you’ll have to stick with Newtonsoft.Json and turn on its type metadata handling.
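For reference, opting into Newtonsoft.Json’s type metadata for that polymorphic case is plain Newtonsoft configuration rather than anything Marten-specific. Something along these lines:

```csharp
var settings = new Newtonsoft.Json.JsonSerializerSettings
{
    // Emits $type metadata only when the declared type differs
    // from the actual type, e.g. polymorphic child collections
    TypeNameHandling = Newtonsoft.Json.TypeNameHandling.Auto
};
```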

Optimistic Concurrency with Marten

The Marten community has been making substantial progress toward a potential 1.0-alpha release early next week. As part of that effort, I’m going to be blogging about the new changes and features.

Recent versions of Marten (>0.9.5) have a new feature that allows you to enforce offline optimistic concurrency checks against documents that you are attempting to persist. You would use this feature if you’re concerned about a document in your current session having been modified by another session since you originally loaded the document.

I first learned about this concept from Martin Fowler’s PEAA book. From his definition, offline optimistic concurrency:

Prevents conflicts between concurrent business transactions by detecting a conflict and rolling back the transaction.

In Marten’s case, you have to explicitly opt into optimistic versioning for each document type. You can do that with either an attribute on your document type like so:

    [UseOptimisticConcurrency]
    public class CoffeeShop : Shop
    {
        // Guess where I'm at as I code this?
        public string Name { get; set; } = "Starbucks";
    }

Or by using Marten’s configuration API to do it programmatically:

    var store = DocumentStore.For(_ =>
    {
        _.Connection(ConnectionSource.ConnectionString);

        // Configure optimistic concurrency checks
        _.Schema.For<CoffeeShop>().UseOptimisticConcurrency(true);
    });

Once optimistic concurrency is turned on for the CoffeeShop document type, a session will now only be able to update a document if the document has been unchanged in the database since it was initially loaded.

To demonstrate the failure case, consider the following acceptance test from Marten’s codebase:

[Fact]
public void update_with_stale_version_standard()
{
    var doc1 = new CoffeeShop();
    using (var session = theStore.OpenSession())
    {
        session.Store(doc1);
        session.SaveChanges();
    }

    var session1 = theStore.DirtyTrackedSession();
    var session2 = theStore.DirtyTrackedSession();

    var session1Copy = session1.Load<CoffeeShop>(doc1.Id);
    var session2Copy = session2.Load<CoffeeShop>(doc1.Id);

    try
    {
        session1Copy.Name = "Mozart's";
        session2Copy.Name = "Dominican Joe's";

        // Should go through just fine
        session2.SaveChanges();

        // When session1 tries to save its changes, Marten will detect
        // that the doc1 document has been modified and Marten will
        // throw an AggregateException
        var ex = Exception<AggregateException>.ShouldBeThrownBy(() =>
        {
            session1.SaveChanges();
        });

        // Marten will throw a ConcurrencyException for each document
        // that failed its concurrency check
        var concurrency = ex.InnerExceptions.OfType<ConcurrencyException>().Single();
        concurrency.Id.ShouldBe(doc1.Id);
        concurrency.Message.ShouldBe($"Optimistic concurrency check failed for {typeof(CoffeeShop).FullName} #{doc1.Id}");
    }
    finally
    {
        session1.Dispose();
        session2.Dispose();
    }

    // Just proving that the document was not overwritten
    using (var query = theStore.QuerySession())
    {
        query.Load<CoffeeShop>(doc1.Id).Name.ShouldBe("Dominican Joe's");
    }

}

Marten is throwing an AggregateException for the entire batch of changes being persisted from SaveChanges()/SaveChangesAsync() after rolling back the current database transaction. The individual ConcurrencyException’s inside of the aggregated exception expose information about the actual document type and identity that failed.
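In application code, a typical pattern is to catch that failure and decide whether to retry or surface the conflict to the user. A minimal sketch (assumes a using directive for System.Linq):

```csharp
public void rename_shop(IDocumentStore store, Guid shopId, string name)
{
    using (var session = store.DirtyTrackedSession())
    {
        var shop = session.Load<CoffeeShop>(shopId);
        shop.Name = name;

        try
        {
            session.SaveChanges();
        }
        catch (AggregateException ex)
        {
            // Report each document that failed its concurrency
            // check, then let the caller decide how to recover
            foreach (var e in ex.InnerExceptions.OfType<ConcurrencyException>())
            {
                Console.WriteLine($"Stale write against document {e.Id}");
            }

            throw;
        }
    }
}
```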

Schema Management with Marten (Why document databases rock)

This blog post describes the preliminary support and thinking behind how we’ll manage schema changes to production using Marten. I’m writing this to further the conversation one of our teams is having about how best to accomplish this. Expect the details of how this works to change after we face real world usage for a while ;)

Some of our projects at work are transitioning from RavenDb to an OSS project I help lead called Marten that uses Postgresql as a fully fledged document database. Among the very large advantages of document databases over relational databases is how much simpler it is to evolve a system over time because it takes so much less mechanical work to keep your document database synchronized to the application code.

Exhibit #1 in the case against relational databases is the need for laboriously tracking database migrations (assuming you give a damn about halfway decent software engineering in regards to your database).

Let’s compare the steps in adding a new property to one of your persisted objects in your system. Using a relational database with any kind of ORM (even if it describes itself as “micro” or “simple”), your steps in some order would be to:

  1. Add the new property
  2. Add a migration script that adds a new column to your database schema
  3. Change your ORM mapping or SQL statements to reflect the new property

Using a document database approach like Marten’s, you’d:

  1. Add the new property and continue on with your day

Notice which list is clearly shorter and simpler — not to mention less error prone for that matter.

Marten does still need to create matching schema objects in your Postgresql database, and it’s unlikely that any self-respecting DBA is going to allow your application to have rights to execute schema changes programmatically, so we’re stuck needing some kind of migration strategy as we add document types, Javascript transformations, and retrofit indexes. Fortunately, we’ve got a decent start on doing just that, as demonstrated below:


Just Get Stuff Done in Development!

As long as you have rights to alter your Postgresql database, you can happily set up Marten in one of the “AutoCreate” modes and not worry about schema changes at all as you happily code new features and change existing document types:

var store = DocumentStore.For(_ =>
{
    // Marten will create any new objects that are missing,
    // attempt to update tables if it can, but drop and replace
    // tables that it cannot patch. 
    _.AutoCreateSchemaObjects = AutoCreate.All;


    // Marten will create any new objects that are missing or
    // attempt to update tables if it can. Will *never* drop
    // any existing objects, so no data loss
    _.AutoCreateSchemaObjects = AutoCreate.CreateOrUpdate;


    // Marten will create missing objects on demand, but
    // will not change any existing schema objects
    _.AutoCreateSchemaObjects = AutoCreate.CreateOnly;
});

As long as you’re using a permissive auto creation mode, you should be able to code in your application model and let Marten change your development database as needed behind the scenes.

Patching Production Databases

In the next section, I demonstrate how to dump the entire data definition language (DDL) that matches your Marten configuration as if you were starting from an empty database, but first, I want to focus on how to make incremental changes between production or staging releases.

In the real world, you’re generally not going to allow your application to willy-nilly make changes to the running schema, and you’ll be forced into this setting:

var store = DocumentStore.For(_ =>
{
    // Marten will not create or update any schema objects
    // and throws an exception when a schema object does
    // not reflect the Marten configuration
    _.AutoCreateSchemaObjects = AutoCreate.None;
});

This leaves us with the problem of how to get our production database matching however we’ve configured Marten in our application code. At this point, our theory is that we’ll use the “WritePatch” feature to generate delta DDL files:

IDocumentStore.Schema.WritePatch(string file);

When this is executed against a configured Marten document store, it will loop through all of the known document types, Javascript transforms, and event store usage, and check the configured storage against the actual database. Marten writes two files: one to move your schema “up” to match the configured document store, and a second “drop” file that would roll back your database schema to reverse the changes in the “up” file.
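In usage, that might look like the following, where the file name is just my convention and the exact name Marten picks for the companion rollback file may differ:

```csharp
// Writes the "up" DDL that brings the database in line with the
// configured document store, plus a matching "drop" rollback file
store.Schema.WritePatch("1.initial.sql");
```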

The patching today is able to:

  1. Add all new tables, indexes, and functions
  2. Detect when a generated function has changed and rebuild it after dropping the old version
  3. Determine which indexes are new or modified and generate the necessary DDL to match
  4. Add the event store schema objects if they’re active and missing
  5. Add the database objects Marten needs for its “Hilo” identity strategy

This is very preliminary, but my concept of how we’ll use this in real life (admittedly with some gaps) is to:

  • Use “AutoCreateSchemaObjects = AutoCreate.All” in development and CI and basically not worry at all about incremental schema changes.
  • For each deployment to staging or production, we’ll use the WritePatch() method shown above to generate a patch SQL file that will then be committed to Git.
  • I’m assuming that the patch SQL files generated by Marten could feed into a real database migration tool like RoundhousE, and we would incorporate RoundhousE into our automated deployments to execute the “up” scripts to the most current database version.


Dump all the Sql

If you just want a database script that will build all the necessary schema objects for your Marten configuration, you can export either a single file:

// Export the SQL to a file
store.Schema.WriteDDL("my_database.sql");

Or write a SQL file for each document type and functional area of Marten to a directory like this:

// Or instead, write a separate sql script
// to the named directory
// for each type of document
store.Schema.WriteDDLByType("some folder");

In the second usage, Marten also writes a file called “all.sql” that executes the constituent sql files in the correct order just in case you’re using Marten’s support for foreign keys between document types.

The SQL dumps from the two methods shown above will write out every possible database schema object necessary to support your Marten configuration (document types, the event store, and a few other things) including tables, the generated functions, indexes, and even a stray sequence or two.

Relational Databases are the Buggy Whips of Software Development

I think that there’s going to be a day when you tell your children stories about how we built systems against relational databases with ORM’s or stored procedures or hand written SQL and they’re going to be appalled at how bad we had it, much like I did when my grandfather told me stories about ploughing with a horse during the Great Depression.

My Mini NDC Oslo 2016 Wrapup

I had a great time at NDC Oslo last week and I’ve got to send a pretty big thank you to the folks behind the NDC conferences for what an outstanding job they did bringing it all together.

I got to see several old friends, meet some new folks, and generally have a blast interacting with other developers doing interesting work. I got to give a talk on using React and Redux to build a large application that I thought went pretty well (my audience was great and that always makes talks go much better for me). I’ll post the link to the video when that’s up.

As always, I was reminded that I like many folks far better in real life than I do their online personas — and I’ve received the same feedback about myself over the past decade too. Betting there’s some kind of deeper meaning to that and how we need to be more careful communicating on Twitter and trying to stop being so easily offended, but that’s too deep for a Monday morning for me;)

And as an “achievement unlocked,” I gave my entire conference talk without ever once opening Visual Studio.Net. I think being part of the de facto React.js track has to give me a touch of hipster cred.

So here’s what I saw and what stood out for me in terms of development trends:

Elixir

There was a lot of Elixir content and buzz. I’m not doing any code in it, but most of what I saw looks very positive. Erlang’s syntax has always thrown me off, but I could happily live with Ruby inspired syntax. I think that community is going to struggle just a bit by having to build everything (web frameworks, service bus, etc.) from scratch, and they maybe aren’t learning lessons from other communities as well as they could in their efforts.

I’ve got to say that Bryan Hunter‘s “Elixir for Node.js Developers” was one of my favorite sessions of the week.

Electron

I’m excited to try out Electron on a couple projects this year. I took in David Neal‘s talk on Electron and got to speak with some other folks that are using it. I’d love to turn the Storyteller 3 client and maybe a forthcoming admin tool for Marten into Electron apps. We have one big WPF application that is justifiably a desktop application, but we prefer to write user interfaces as web apps, and Electron could be a good long term solution for us.

On the state of .Net OSS, yet again

I’ve written too many belly gazing types of posts about the state of OSS in .Net lately (and a follow up), but that topic came up a couple times in Oslo. On the positive side, I had several folks come up and ask about Marten, a couple nice comments on StructureMap, and even a positive remark on FubuMVC.

On the negative side, I had to field the inevitable question about Marten in regards to its potential adoption in the greater .Net community: “is it really worth the effort?” It is from the perspective that my employer will benefit and I’m generally enjoying the work. I’m not suffering from any delusions about Marten taking off into the .Net mainstream — at least until it’s technically feasible to build Marten’s functionality on top of Sql Server.

One of the things I’ve said about the CoreCLR and ASP.Net Core efforts from Microsoft is that I think they’ll suck all the oxygen out of the room for .Net OSS offerings for quite a while, and NDC was a perfect example of that. The talks on anything related to CoreCLR were jam packed, and I really don’t remember there being much other .Net content, to be honest. If you’re a .Net OSS enthusiast, I think I’d either restrict my efforts to add-ons to MS’s work or just sit it out until the CoreCLR hype settles down.

Talking React & Redux at NDC Oslo Next Week

I’ll be at NDC Oslo next week to give a talk entitled An Experience Report from Building a Large React.js/Redux Application. While I’ll definitely pull some examples from some of our ongoing React.js projects at work, most of the talk will be about the development of the new React.js-based web application within the open source Storyteller 3 application.

Honestly, just to help myself start getting more serious about preparation, I’m thinking that I’ll try to cover:

  1. How the uni-directional flow idea works in a React.js application
  2. The newer things in ES2015 that I think make React components easier to write and read. That might be old news to many people, but it’s still worth talking about
  3. Using React.js for dynamically determined layouts and programmatic form generation
  4. The transition from an adhoc Flux-like architecture built around Postal.js and hand-rolled data stores to a more systematic approach with Redux.
  5. Utilizing the combination of pure function components in React.js with the react-redux bridge package and what that means for testability.
  6. The very positive impact of Redux in your ability to automate behavioral testing of your screens, especially when the application has to constantly respond to frequent messages from the server
  7. Integrating websocket communication to the Redux store, including a discussion of batching, debouncing, and occasional snapshots to keep the server and client state synchronized without crushing the browser from too many screen updates.
  8. Designing the shape of your Redux state to avoid unnecessary screen updates.
  9. Composing the reducer functions within your Redux store — as in, how will Jeremy get out of the humongous switch statement of doom?
  10. Using Immutable.js within your Redux store and why you would or wouldn’t want to do that

By no means is this an exhaustive look at the React/Redux ecosystem (I’m not talking about Redux middleware, the Redux specific development tools, or GraphQL among other things), but it’s the places where I think I have something useful to say based on my experiences with Storyteller in the past 18 months.

The last time I was at NDC (2009) I gave a total of 8 talks in 4 days. While I still hope to get to do a Marten talk while I’m there, I’m looking forward to not being a burnt out zombie this time around with the much lighter speaking load;)


My surprisingly positive take on .Net Core’s current direction

I’m not nearly as upset about the recent changes and churn in the direction of .Net Core as many of the folks I follow online. Mostly it’s because I was refusing to invest very much into it during the early stages and therefore, didn’t really lose much when the direction shifted. Honestly, my main thought after the changes in direction is how much less rework it’s going to take to move some of the tools I support to .Net Core and less work for me is always a win.

My thoughts, such as they are:

  • True cross platform .Net? As soon as JetBrains Rider has a test runner and supports .Net Core, my plan is to switch almost all of my day to day development work to the Mac side of things and keep my Windows VM off. Yes I know about Mono, but it never worked out very well for any project where I tried to use it.
  • Strong naming is going to have much less negative impact on day to day development (maybe none) with the changes to how .Net Core will resolve assemblies. As some of you may know, I feel like strong naming is a huge source of friction and holds back the .Net OSS ecosystem by adding extra cost to development through binding conflicts and all the extra work OSS developers have to do to (hopefully) shield their downstream users from potential binding conflict issues. In other words, the author of Newtonsoft.Json will no longer be the most hated person in the entire .Net world once binding conflicts go away. The OSS signing option and the VS.Net or Nuget kinda, sorta being able to write binding redirects for you were not sufficient solutions for the strong naming pain.
  • Finally getting working wildcard includes in whatever the CsProj file replacement is. I just finished resolving a merge conflict with a *.csproj file after rebasing an older branch and it’s such a pain in the neck. Another common source of friction in .Net development gone.
  • On the subject of AppDomain’s getting put back in for .Net Core, I have mixed feelings. Taking out AppDomain’s and Remoting was going to almost completely denude .Net of working automated testing tools. I know there’s some talk and work toward a lightweight AssemblyLoadContext that might be a replacement, but I’ve found very, very little information about it and most of that has been contradictory. I don’t really like messing with .Net Remoting and separate AppDomain’s, but I wasn’t looking forward to making some kind of .Net Core alternative from scratch.
  • I’ve seen other folks making the point that .Net is now going to avoid the nasty Python 2/3 style bifurcation of its entire ecosystem. I don’t think all the common OSS tools are going to be quickly moved to .Net Core because people are waiting for it to stabilize, but now the mechanics of doing so are going to be much less work.
  • On the demise of project.json and the new, hopefully cutdown csproj file, I suspect that there’s some pretty seriously harmful coupling in MSBuild itself. It should have been possible to use project.json as just a new configuration mechanism for the underlying MSBuild engine. Hopefully for all our sakes, they get those structural issues resolved in the new project file system. I definitely approve of their plans to decouple much more of the project system from Visual Studio.Net.
  • I would hope that the fallback to csproj files means that Paket development continues. I personally think that Paket does a better job from the command line than OOTB Nuget.
  • I like some of the new mechanics around the new “dotnet” CLI support. I think they did a nice job of taking some of the things I like about the Node.js/NPM ecosystem. I’ve never thought that the .Net teams at Redmond were all that great at innovating minus huge hits like Linq or Roslyn, but they are pretty good at adapting ideas from other communities.
  • On the communication and mismanaged expectation front? Yeah, I think they blew that one pretty badly, but it’s not the end of the world for most of us. I suspect the problem was due to the organization structure in Microsoft and the lack of collaboration between some of the groups — but that seems to be better now.
  • The static linker sounds cool, and having far easier mechanisms for supporting multiple versions of the .Net runtime is going to be great for the OSS projects that try to support everything. I’m not all that wild about microservices, but I think that the .Net Core/static linker/Kestrel combination would make .Net a lot more attractive for developing microservice architectures.

For my part, StructureMap already supports .Net Core. Marten is going to get .Net Core support soon, except we’re going to punt on running unit tests in .Net Core for a while because NSubstitute doesn’t yet support .Net Core. Storyteller is a lot more complicated, and I want things to settle down before I even think of doing that one. Since .Net Core is no longer all that different from existing .Net 4.5/4.6, our current thinking is to just restart FubuMVC work and slowly morph that into a new, much more efficient and far smaller framework.


Automated Testing of Message Based Systems

Technology wise, everything in this blog post is related to OSS projects you’re very unlikely to ever use (FubuMVC and Storyteller 3), but I’d hope that the techniques and ideas are still useful in whatever technical stack you happen to be using. At a minimum, I’m hoping this post is useful to some of our internal teams who have to wrestle with the tooling and testing scenarios described here. And who knows, we still might try to make a FubuMVC comeback (renamed as “Jasper”) if the dust ever does settle on .Net Core and ASP.Net Core.

We heavily utilize message-based architectures using service bus tools at work, and we’ve also invested in automating tests against those message-based integrations. Automating tests against distributed systems with plenty of asynchronous behavior has been challenging. We’ve come up with a couple of key aspects of our architectures and technical tooling that I feel have made our automated testing efforts around message-based architectures easier mechanically, more reliable, and have even created some transparency into how the system actually behaves.

My recipe for automating tests against distributed, messaging systems is:

  1. Try to choose tools that allow you to “inject” configuration programmatically at runtime. As I’ll try to make clear in this post, a tight coupling to the .Net configuration file can almost force you into having to swallow the extra complexity of separate AppDomain’s.
  2. Favor tools that require minimal steps to deploy. Ideally, I love using tools that allow for full-blown xcopy deployment, at least in functional testing scenarios.
  3. Try very hard to be able to run all the various elements of a distributed application in a single process because that makes it so much simpler to coordinate the test harness code with all the messages going through the greater system. It also makes debugging failures a lot simpler.
  4. At worst, have some very good automated scripts that can install and teardown everything you need to be running in automated tests
  5. Make sure that you can cleanly shut your distributed system down between tests to avoid having friction from locked resources on your testing box
  6. Have some mechanism to completely rollback any intermediate state between tests to avoid polluting system state between tests
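The recipe above amounts to a repeatable bootstrap/reset/teardown lifecycle for the test harness. Here’s a minimal sketch of that idea using hypothetical `ISystemNode` and `InProcessHarness` types of my own invention (this is not Serenity’s actual API):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical types for illustration only -- not Serenity's actual API.
// The idea: bootstrap every node in-process, reset persisted state
// between tests, and dispose cleanly so ports and file locks are
// released for the next test run.
public interface ISystemNode : IDisposable
{
    void Start();
    void ClearPersistedState(); // e.g. wipe leftover queue storage
}

public class InProcessHarness : IDisposable
{
    private readonly List<ISystemNode> _nodes;

    public InProcessHarness(params ISystemNode[] nodes)
    {
        _nodes = new List<ISystemNode>(nodes);
        _nodes.ForEach(n => n.Start()); // item 3: everything in one process
    }

    // Item 6: rollback any intermediate state before each test
    public void Reset() => _nodes.ForEach(n => n.ClearPersistedState());

    // Item 5: a clean shutdown releases IP ports and file locks
    public void Dispose() => _nodes.ForEach(n => n.Dispose());
}
```

The point is that the whole lifecycle lives in the harness itself, so iterating on a failing test never requires external deployment scripts.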

Repeatable Application Bootstrapping and Teardown

The first step in automating a message-based system is simply being able to completely and reliably start the system with all the related publisher and subscriber nodes you need to participate in the automated test. You could demand a black box testing approach where you would deploy the system under test and all of its various services by scripting out the installation and startup of Windows services or *nix daemon processes or programmatically launching other processes to host various distributed elements from the command line.

Personally, I’m a big fan of using whitebox testing techniques to make test automation efforts much more efficient. My strong preference is to adopt architectures that make it possible for the test automation harness itself to bootstrap and start the entire distributed system, rather than depending on external scripts to deploy and start the application services. In effect, this has meant running all of the normally distributed elements collapsed down into the single test harness process.

Our common technical stack at the moment is:

  1. FubuMVC 3 as both our web application framework and as our service bus
  2. Storyteller 3 as our test automation harness for integrated acceptance testing
  3. LightningQueues as a “store and forward” persistent queue transport. Hopefully you’ll hear a lot more about this tool when we’re able to completely move off of Esent and onto LightningDB as the underlying storage engine.
  4. Serenity as an add-on library as a standard recipe to host FubuMVC applications in Storyteller with diagnostics integration

In all cases, these tools can be programmatically bootstrapped in the test harness itself. LightningQueues being “x-copy deployable” as our messaging transport has been a huge advantage over queueing tools that require external servers or installations. Additionally, we have programmatic mechanisms in LightningQueues to delete any previous state in the persistent queues to prevent leftover state bleeding between automated tests.

After a lot of work last year to streamline the prior mess, a FubuMVC web and/or service bus node application is completely described with a FubuRegistry class (here’s a link to an example), conceptually similar to the “Startup” concept in the new ASP.Net Core.

Having the application startup and teardown expressed in a single place makes it easy for us to launch the application from within a test harness. In our Serenity/Storyteller projects, we have a base “SerenitySystem” class we use to launch one or more applications before any tests are executed. To build a new SerenitySystem, you just supply the FubuRegistry class of your application as the generic argument like so:

public class TestSystem : SerenitySystem<WebsiteRegistry>

That declaration above is actually enough to tell Storyteller how to bootstrap the application defined by “WebsiteRegistry.”

It’s not just about bootstrapping applications, it’s also about being able to cleanly shut down your distributed application. By cleanly, I mean that you release all system resources like IP ports or file locks (looking at you Esent) that could prevent you from being able to quickly restart the application. This becomes vital anytime it takes more than one iteration of code changes to fix a failing test. My consistent experience over the years is that the ability to quickly iterate between code changes and executing a failing test is vital for productivity in test automation efforts.

We’ve beaten this problem through standardization of test harnesses. The Serenity library “knows” to dispose the running FubuMVC application when it shuts down, which will do all the work of releasing shared resources. As long as our teams use the standard Serenity test harness, they’re mostly set for clean activation and teardown (it’s an imperfect world and there are *always* complications).

Bootstrapping the Entire System in One Process

Ideally, I’d prefer to get away with running all of the service bus nodes in a single AppDomain. Here’s a pretty typical case for us: say you have a web application defined by a “WebRegistry” class that communicates bi-directionally with a headless service bus application defined by a “BusRegistry” class that normally runs in a Windows service:

    // FubuTransportRegistry<T> is a special kind of FubuRegistry
    // specifically to help configure service bus applications.
    // BusSettings would hold configuration data for BusRegistry
    public class BusRegistry : FubuTransportRegistry<BusSettings>
    {
    }

    public class WebRegistry : FubuTransportRegistry<BusSettings>
    {
    }

To bootstrap both the website application and the bus registry in a single AppDomain, I could use code like this:

        public static void Bootstrap_Two_Nodes()
        {
            var busApp = FubuRuntime.For<BusRegistry>();
            var webApp = FubuRuntime.For<WebRegistry>(_ =>
            {
                // WebRegistry would normally run in full IIS,
                // but for the test harness we tend to use 
                // Katana to run all in process
                _.HostWith<Katana>();
            });

            // Carry out tests that depend on both
            // busApp and webApp
        }

This is the ideal, and I hope to see us use this pattern more often, but it’s often defeated by tools that have hard dependencies on .Net’s System.Configuration that may cause configuration conflicts between running nodes. Occasionally we’ll also have conflicts between assembly versions across the nodes or hit cases where we cannot have a particular assembly deployed in one of the nodes.

In these cases we resort to using a separate AppDomain for each service bus application. We’ve built this pattern into Serenity itself to standardize the approach with what I named “Remote Systems.” For an example from the FubuMVC acceptance tests, we execute tests against an application called “WebsiteRegistry” running as the main application in the test harness, and open a second AppDomain for a testing application called “ServiceNode” that exchanges messages with “WebsiteRegistry.” In the Serenity test harness, I can just declare the second AppDomain for “ServiceNode” like so:

    public class TestSystem : SerenitySystem<WebsiteRegistry>
    {
        public TestSystem()
        {
            // Running the "ServiceNode" app in a second AppDomain
            AddRemoteSubSystem("ServiceNode", x =>
            {
                x.UseParallelServiceDirectory("ServiceNode");
                x.Setup.ShadowCopyFiles = false.ToString();
            });
        }
    }

The code above directs Serenity to start a second application found at a directory named “ServiceNode” that is parallel to the main application. So if the main application is hosted at “src/FubuMVC.IntegrationTesting”, then “ServiceNode” is at “src/ServiceNode.” The Serenity harness is smart enough to figure out how to launch the second AppDomain pointed at this other directory. Serenity can also bootstrap that application by scanning the assemblies in that AppDomain to find a class that inherits from FubuRegistry — very similar to how ASP.Net Core is going to use the Startup class convention to bootstrap applications.
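To make the directory convention concrete, the sibling-directory lookup could be resolved with a few lines of path manipulation. This is a rough sketch of my own for illustration, not Serenity’s actual implementation:

```csharp
using System.IO;

// Resolve a "parallel service directory": a sibling folder of the main
// application's directory, found by name. Illustration only -- not
// Serenity's actual code.
public static class ParallelDirectory
{
    public static string Resolve(string mainAppDirectory, string siblingName)
    {
        // e.g. "src/FubuMVC.IntegrationTesting" + "ServiceNode"
        // resolves to "src/ServiceNode"
        var parent = Directory.GetParent(mainAppDirectory).FullName;
        return Path.Combine(parent, siblingName);
    }
}
```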

The biggest problem now is generally in dealing with the asynchronous behavior in the different AppDomain’s and “knowing” when it’s safe for the test harness to start checking for the outcome of the messages that are processed within the “arrange” and “act” portions of a test. In a following post, I’ll talk about the tooling and technique we use to coordinate activities between the different AppDomain’s.


This All Changes in .Net Core

AppDomain’s and .Net Remoting go away in .Net Core. While no one is really all that disappointed to see those features go, I think their removal is going to set back the state of test automation tools in .Net for a little while because almost every testing tool uses AppDomain’s in some fashion. For our part, my thought is that we’d move to launching a new process for each additional service bus node in testing.
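A rough sketch of the process-per-node approach using plain `System.Diagnostics.Process` might look like the following. The `dotnet run` invocation and the directory layout are my assumptions, not anything we’ve actually built yet:

```csharp
using System.Diagnostics;

// Launch an additional service bus node as a child process instead of
// a second AppDomain. Assumes the node can be started with "dotnet run"
// from its own directory (an assumption for illustration).
public static class NodeLauncher
{
    public static Process StartNode(string workingDirectory)
    {
        var info = new ProcessStartInfo
        {
            FileName = "dotnet",
            Arguments = "run",
            WorkingDirectory = workingDirectory,
            UseShellExecute = false,
            RedirectStandardOutput = true
        };

        return Process.Start(info);
    }

    // Clean shutdown between tests: kill the node and wait so that any
    // ports or file locks it held are released before the next run
    public static void StopNode(Process node)
    {
        if (!node.HasExited)
        {
            node.Kill();
            node.WaitForExit();
        }
    }
}
```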

I know there are also plans for an “AssemblyLoadContext” that sounds to me like a better, lightweight way of doing the kind of assembly loading sandboxes for testing that are only possible with separate AppDomain’s in .Net today. Other than rumors and the occasional cryptic hint from members of the ASP.Net team, there’s basically no information about what AssemblyLoadContext will be able to do.

The new “Startup” mechanism in .Net Core might serve in the exact same fashion that our FubuRegistry does today in FubuMVC. That should make it much easier to carry out this strategy of collapsing distributed or microservice applications down into a single process.

I’m also hopeful about the improved configuration mechanisms in ASP.Net Core (is this in .Net Core itself? I really don’t know). The tight coupling of certain libraries we use to the existence of that single app.config/web.config file today is a headache and the main reason we’re sometimes forced into the more complicated, multiple AppDomain approach.

What’s Next

This blog post went a lot longer than I anticipated, so I cut it in half. Next time up I’ll talk about how we coordinate the test harness timing with the various messaging nodes and how we expose application diagnostics in automated test outputs to help understand what’s actually happening in your application when things fail or just run too slowly.


New Marten Release and What’s Next?

I uploaded a new Nuget for Marten v0.9.2 yesterday with a couple new features and about two weeks worth of bug fixes and some refinements. You can find the full list of issues and pull requests in this release from the v0.9.1 and v0.9.2 milestones in GitHub.

The highlight of this release in terms of raw usability is probably some overdue improvements to Marten’s underlying schema management:

  1. Marten can detect when the configured upsert functions are missing or do not match the configuration and rebuild them.
  2. Marten can detect missing or changed indexes and make the appropriate updates.

Some other things that are new:

  • There’s now a synchronous batch querying option
  • You can now use the AsJson() Linq operator in combination with Select() transforms (this is going to get its own blog post soon-ish).
  • The default transaction isolation level is ReadCommitted
  • It won’t provide much value until there’s more there, but I’ve added some rolling buffer queueing support for being able to do asynchronous projections in the event store. There’ll be a blog post about that one soon just to see if I can trick some of you into being technical reviewers or contributors on that one;)

The two big features are discussed below:

Paging Support

We flat out copied part of RavenDb’s approach to get more efficient paging support. Take the example of showing a large data set in a user interface one page at a time. You need to know how many total documents match the query criteria to be able to present an accurate paging bar. Fortunately, you can now get that total number without making a second round trip to the database with this syntax:

// We're going to use stats as an output
// parameter to the call below, so we
// have to declare the "stats" object
// first
QueryStatistics stats = null;

var list = theSession
    .Query<Target>()
    .Stats(out stats)
    .Where(x => x.Number > 10).Take(5)
    .ToList();

list.Any().ShouldBeTrue();

// Now the total number of matching
// documents is available ("count" here
// stands for the expected total in
// the test data)
stats.TotalResults.ShouldBe(count);

In combination with the existing support for the Take() and Skip() Linq operators, you should have everything you need for efficient paging with Marten.
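With `stats.TotalResults` in hand, the paging arithmetic for the UI is trivial. A quick illustration of my own (this helper is not part of Marten’s API):

```csharp
using System;

public static class Paging
{
    // Total page count from the matching document count and page size
    public static int TotalPages(long totalResults, int pageSize)
        => (int)Math.Ceiling(totalResults / (double)pageSize);
}

// e.g. 103 matching documents at 10 per page means 11 pages, and
// page n would be fetched with Skip((n - 1) * pageSize).Take(pageSize)
```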

Include() inside of Compiled Queries

The Include() feature is now usable from within the compiled query feature, so finally, two of our best features for optimizing data access can work together. Below is a sample:

public class IssueByTitleIncludingUsers : ICompiledQuery<Issue>
{
    public string Title { get; set; }
    public User IncludedAssignee { get; private set; } = new User();
    public User IncludedReported { get; private set; } = new User();
    public JoinType JoinType { get; set; } = JoinType.Inner;

    public Expression<Func<IQueryable<Issue>, Issue>> QueryIs()
    {
        return query => query
            .Include<Issue, IssueByTitleIncludingUsers>(x => x.AssigneeId, x => x.IncludedAssignee, JoinType)
            .Include<Issue, IssueByTitleIncludingUsers>(x => x.ReporterId, x => x.IncludedReported, JoinType)
            .Single(x => x.Title == Title);
    }
}


What’s Next?

Besides whatever bug fixes come up, I think the next things I’m working on for the document database support are soft deletes, bulk insert improvements, and finally getting a versioned document story going. On the event store side of things, it’s all about projections. We’ll have a working asynchronous projection feature in the next release, maybe support for arbitrary categorization inside of aggregated projections, and some preliminary support for Javascript projections.

Got other requests, needs, or problems with Marten? Tell us all about it anytime in the Gitter room.