Marten V4 Preview: Command Line Administration

TL;DR — It’s going to be much simpler in V4 to incorporate Marten’s command line administration tools into your .Net Core application.

In my last post I started to lay out some of the improvements in the forthcoming Marten V4 release with our first alpha Nuget release. In this post, I’m going to show the improvements to Marten’s command line package that can be used for some important database administration and schema migrations.

Unlike ORM tools like Entity Framework (it’s a huge pet peeve of mine when people describe Marten as an ORM), Marten by and large tries to allow you to be as productive as possible by keeping your focus on your application code instead of having to spend much energy and time on the details of your database schema. At development time you can just have Marten use its AutoCreate.All mode and it’ll quietly do anything it needs to do with your Postgresql database to make the document storage work at runtime.

For real production though, it’s likely that you’ll want to explicitly control when database schema changes happen. It’s also likely that you won’t want your application to have permissions to change the underlying database schema on the fly. To that end, Marten has quite a bit of functionality to export database schema updates for formal database migrations.

We’ve long supported an add-on package called Marten.CommandLine that lets you build your own command line tool to help manage these schema updates, but to date it’s required you to build a separate console application parallel to your main application and has probably not been that useful to most folks.

In V4 though, we’re exploiting the Oakton.AspNetCore library that allows you to embed command line utilities directly into your .Net Core application. Let’s make that concrete with a small sample application in Marten’s GitHub repository.

Before I dive into that code, note that Marten v3.12 added a built-in integration with the .Net Core generic HostBuilder that we’re going to depend on here. Using the HostBuilder to configure and bootstrap Marten in your application allows you to use the exact same Marten configuration and application configuration in the Marten command line utilities without any additional work.

This sample application was built with the standard dotnet new webapi template. On top of that, I added a reference to the Marten.CommandLine library.

.Net Core applications tend to be configured and bootstrapped by a combination of a Program.Main() method and a StartUp class. First, here’s the Program.Main() method from the sample application:

public class Program
{
    // It's actually important to return Task<int>
    // so that the application commands can communicate
    // success or failure
    public static Task<int> Main(string[] args)
    {
        return CreateHostBuilder(args)

            // This line replaces Build().Start()
            // in most dotnet new templates
            .RunOaktonCommands(args);
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}

Note the signature of the Main() method and how it uses the RunOaktonCommands() method to intercept the command line arguments and execute named commands (with the default being to just run the application like normal).

Now, the Startup.ConfigureServices() method with Marten added in is this:

public void ConfigureServices(IServiceCollection services)
{
    // This is the absolute, simplest way to integrate Marten into your
    // .Net Core application with Marten's default configuration
    services.AddMarten(Configuration.GetConnectionString("Marten"));
}

Now, to the actual command line. As long as the Marten.CommandLine assembly is referenced by your application, you should see the additional Marten commands. From your project’s root directory, run dotnet run -- help and you’ll see some additional Marten-related options:

(Screenshot: Oakton command line options with Marten.CommandLine in play)

And that’s it. Now you can use dotnet run -- dump to export out all the SQL to recreate the Marten database schema, or maybe dotnet run -- patch upgrade_staging.sql --e Staging to create a SQL patch file that would make any necessary changes to upgrade your staging database to reflect the current Marten configuration (assuming that you’ve got an appsettings.Staging.json file with the right connection string pointing to your staging Postgresql server).

Check out the Marten.CommandLine documentation for more information on what it can do, but expect some V4 improvements to that as well.

Marten V4 Preview: Linq and Performance

Marten is an open source library for .Net that allows developers to treat the robust Postgresql database as a full featured and transactional document database (NoSQL) as well as supporting the event sourcing pattern of application persistence.

After a false start last summer, development on the long awaited and delayed Marten V4.0 release is heavily in flight and we’re making a lot of progress. The major focus of the remaining work is improving the event store functionality (which I’ll try to blog about later in the week if I can). We posted the first Marten V4 alpha on Friday for early adopters — or folks that need Linq provider fixes ASAP! — to pull down and start trying out. So far, the limited feedback suggests it’s a nearly seamless upgrade.

You can track the work and direction of things through the GitHub issues that are already done and the ones that are still planned.

For today though, I’d like to focus on what’s been done so far in V4 in terms of making Marten simply better and faster at its existing feature set.

Being Faster by Doing Less

One of the challenging things about Marten’s feature set is the unique permutations of what exactly happens when you store, delete, or load documents to and from the database. For example, any given document type may or may not be soft-deleted, versioned, or multi-tenanted.

On top of that, Marten supports a couple different flavors of document sessions:

  • Query-only sessions that are strictly read only querying
  • The normal session that supports an internal identity map functionality that caches previously loaded documents
  • Automatic dirty checking sessions that are the heaviest Marten sessions
  • “Lightweight” sessions that don’t use any kind of identity map caching or automatic dirty checking for faster performance and better memory usage — at the cost of a little more developer written code.

The point here is that there’s a lot of variability in what exactly happens when you save, load, or delete a document with Marten. In the current version, Marten handles that with a combination of runtime if/then logic, some “Nullo” classes, and a little bit of “Expression to Lambda” runtime compilation.

For V4, I completely re-wired the internals to use C# code generated and compiled at runtime using Roslyn’s runtime compilation capabilities. Marten is using the LamarCompiler and LamarCodeGeneration libraries as helpers. You can see these two libraries and this technique in action in a talk I gave at NDC London in 2019.

The end result of all this work is that we can generate the tightest possible C# handling code and the tightest possible SQL for the exact permutation of document storage characteristics and session type. Along the way, we’ve striven to reduce the number of dictionary lookups, runtime branching logic, empty Nullo objects, and generally the number of computer instructions that would have to be executed by the underlying processor just to save, load, or delete a document.
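
If you haven’t seen the technique before, here’s a minimal, self-contained sketch of compiling and executing C# at runtime with Roslyn (it needs the Microsoft.CodeAnalysis.CSharp package). To be clear, this is not Marten’s or Lamar’s actual code, just an illustration of the underlying mechanism, and the class and method names are made up:

using System;
using System.IO;
using System.Linq;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

public static class RuntimeCompilationDemo
{
    public static string CompileAndRun()
    {
        // The C# source we want to compile on the fly
        var source = @"
public static class Greeter
{
    public static string Greet(string name) => ""Hello, "" + name;
}";

        var syntaxTree = CSharpSyntaxTree.ParseText(source);

        // Reference every already-loaded, non-dynamic assembly so the
        // generated code can see the basic runtime types
        var references = AppDomain.CurrentDomain.GetAssemblies()
            .Where(a => !a.IsDynamic && !string.IsNullOrEmpty(a.Location))
            .Select(a => MetadataReference.CreateFromFile(a.Location));

        var compilation = CSharpCompilation.Create(
            "GeneratedAssembly",
            new[] { syntaxTree },
            references,
            new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

        // Emit the assembly to memory and load it into the current process
        using var stream = new MemoryStream();
        var result = compilation.Emit(stream);
        if (!result.Success)
        {
            throw new InvalidOperationException("Runtime compilation failed");
        }

        var assembly = Assembly.Load(stream.ToArray());
        var greet = assembly.GetType("Greeter").GetMethod("Greet");
        return (string) greet.Invoke(null, new object[] { "Marten" });
    }
}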

So far, so good. It’s hard to say exactly how much this is going to impact any given Marten-using application, but the existing test suite clearly runs faster now and I’m not seeing any noticeable issue with the “cold start” of the initial, one time code generation and compilation (that was a big issue in early Roslyn to the point where we ripped that out of pre 1.0 Marten, but seems to be solved now).

If anyone is curious, I’d be happy to write a blog post diving into the guts of how that works. And why the new .Net source generator feature wouldn’t work in this case if anyone wants to know about that too.

Linq Provider Almost-Rewrite

To be honest, I think Marten’s existing Linq provider (pre-V4) is pretty well stuck at the original proof of concept stage thrown together 4-5 years ago. Open issues piled up as folks hit limitations in the Linq support, especially with anything involving child collections on document types.

For V4, we’ve heavily restructured the Linq parsing and SQL generation code to address the previous shortcomings. There’s a little bit of improvement in the performance of Linq parsing and also a little bit of optimization of the SQL generated by avoiding unnecessary CASTs. Most of the improvement has been toward addressing previously unsupported scenarios. A potential improvement that we haven’t yet exploited much is to make the SQL generation and Linq parsing more able to support custom value types and F#-isms like discriminated unions through a new extensibility mechanism that teaches Marten about how these types are represented in the serialized JSON storage.

Querying Descendant Collections

Marten pre-V4 didn’t handle querying through child collections very well and that’s been a common source of user issues. With V4, we’re heavily using the Common Table Expression query support in Postgresql behind the scenes to make Linq queries like this one shown below possible:

var results = theSession.Query<Top>()
    .Where(x => x.Middles.Any(b => b.Bottoms.Any()))
    .ToList();

I think that at this point Marten can handle any combination of querying through child collections through any number of levels with all possible query operators (Any() / Count()) and any supported Where() fragment within the child collection.
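
For instance, a query like this hypothetical one, with a Where() fragment on the innermost collection of the same Top / Middle / Bottom sample types, is the kind of thing that now works (the Name property on Bottom is my own invention for the example):

var deeper = theSession.Query<Top>()
    .Where(x => x.Middles.Any(m => m.Bottoms.Any(b => b.Name == "something")))
    .ToList();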

Multi-Document Includes

Marten has long had some functionality for fetching related documents together in one database round trip for more efficient document reading. A long time limitation in Marten is that this Include() capability was only usable for logical “one to one” or “many to one” document relationships. In V4, you can now use Include() querying for “one to many” relationships as shown below:

[Fact]
public void include_many_to_list()
{
    var user1 = new User { };
    var user2 = new User { };
    var user3 = new User { };
    var user4 = new User { };
    var user5 = new User { };
    var user6 = new User { };
    var user7 = new User { };

    theStore.BulkInsert(new User[]{user1, user2, user3, user4, user5, user6, user7});

    var group1 = new Group
    {
        Name = "Odds",
        Users = new []{user1.Id, user3.Id, user5.Id, user7.Id}
    };

    var group2 = new Group {Name = "Evens", Users = new[] {user2.Id, user4.Id, user6.Id}};

    using (var session = theStore.OpenSession())
    {
        session.Store(group1, group2);
        session.SaveChanges();
    }

    using (var query = theStore.QuerySession())
    {
        var list = new List<User>();

        query.Query<Group>()
            .Include(x => x.Users, list)
            .Where(x => x.Name == "Odds")
            .ToList()
            .Single()
            .Name.ShouldBe("Odds");

        list.Count.ShouldBe(4);
        list.Any(x => x.Id == user1.Id).ShouldBeTrue();
        list.Any(x => x.Id == user3.Id).ShouldBeTrue();
        list.Any(x => x.Id == user5.Id).ShouldBeTrue();
        list.Any(x => x.Id == user7.Id).ShouldBeTrue();
    }
}

This was a longstanding request from users, and to be honest, we had to completely rewrite the Include() internals to add this support. Again, we used Common Table Expression SQL statements in combination with per session temporary tables to pull this off.

Compiled Queries Actually Work

I think the Compiled Query feature is unique in Marten. It’s probably easiest and best to think of it as a “stored procedure” for Linq queries in Marten. The value of a compiled query in Marten is:

  1. It potentially cleans up the application code that has to interact with Marten queries, especially for more complex queries
  2. It’s potentially some reuse for commonly executed queries
  3. Mostly though, it’s a significant performance improvement because it allows Marten to “remember” the Linq query plan.

While compiled queries have been supported since Marten 1.0, there’s been a lot of gap between what works in Marten’s Linq support and what functions correctly inside of compiled queries. With the advent of V4, the compiled query planning was rewritten with a new strategy that so far seems to support all of the Linq capabilities of Marten. We think this will make the compiled query feature much more useful going forward.

Here’s an example compiled query that was not possible before V4:

public class FunnyTargetQuery : ICompiledListQuery<Target>
{
    public Expression<Func<IMartenQueryable<Target>, IEnumerable<Target>>> QueryIs()
    {
        return q => q
            .Where(x => x.Flag && x.NumberArray.Contains(Number));
    }

    public int Number { get; set; }
}

And in usage:

var actuals = session.Query(new FunnyTargetQuery{Number = 5}).ToArray();

Multi-Level SelectMany because why not?

Marten has long supported the SelectMany() keyword in the Linq provider support, but in V4 it’s much more robust with the ability to chain SelectMany() clauses n-deep and do that in combination with any kind of Count() / Distinct() / Where() / OrderBy() Linq clauses. Here’s an example:

[Fact]
public void select_many_2_deep()
{
    var group1 = new TargetGroup
    {
        Targets = Target.GenerateRandomData(25).ToArray()
    };

    var group2 = new TargetGroup
    {
        Targets = Target.GenerateRandomData(25).ToArray()
    };

    var group3 = new TargetGroup
    {
        Targets = Target.GenerateRandomData(25).ToArray()
    };

    var groups = new[] {group1, group2, group3};

    using (var session = theStore.LightweightSession())
    {
        session.Store(groups);
        session.SaveChanges();
    }

    using var query = theStore.QuerySession();

    var loaded = query.Query<TargetGroup>()
        .SelectMany(x => x.Targets)
        .Where(x => x.Color == Colors.Blue)
        .SelectMany(x => x.Children)
        .OrderBy(x => x.Number)
        .ToArray()
        .Select(x => x.Id).ToArray();

    var expected = groups
        .SelectMany(x => x.Targets)
        .Where(x => x.Color == Colors.Blue)
        .SelectMany(x => x.Children)
        .OrderBy(x => x.Number)
        .ToArray()
        .Select(x => x.Id).ToArray();

    loaded.ShouldBe(expected);
}

Again, we pulled that off with Common Table Expression statements.

In tepid defense of…

Hey all, I’ve been swamped at work and haven’t had any bandwidth or energy for blogging, but I’ve actually been working up ideas for a new blog series. I’m going to call it “In tepid defense of [XYZ]”, where XYZ is some kind of software development tool or technique that’s:

  • Gotten a bad name from folks overusing it, or using it in some kind of dogmatic way that isn’t useful
  • Is disparaged by a certain type of elitist, hipster developer
  • Might still have some significant value if used judiciously

My list of topics so far is:

  • IoC Containers — I’m going to focus on where, when, and how they’re still useful — but with a huge dose of what I think are the keys to using them successfully in real projects. Which is more or less gonna amount to using them very simply and not making them do too much weird runtime switcheroo.
  • S.O.L.I.D. — Talking about the principles as a heuristic to think through designing code internals, but most definitely not throwing this out there as any kind of hard and fast programming laws. This will be completely divorced from any discussion about you know who.
  • UML — I’m honestly using UML more now than I had been for years, and it’s worth reevaluating UML diagramming after years of the backlash to silly things like “Executable UML”.
  • Don’t Repeat Yourself (DRY) — I think folks bash this instead of thinking more about when and how to eliminate duplication in their code without going into some kind of really harmful architecture astronaut mode.

I probably don’t have the energy or guts to tackle OOP in general or design patterns in specific, but we’ll see.

Anything interesting to anybody?

Just Finished a Not Really Awesome Project, Here’s What I Learned

To be as clear as possible, I’m speaking only for myself and any views or opinions expressed here do not represent my employer whatsoever.

Years ago I wrote a post called The Surprisingly Valuable and Lasting Lessons I Learned from a Horrible Project that’s exactly what it sounds like. I was on a genuinely awful project rife with Dilbertesque elements. As usual though, horrible experiences can lead to some very valuable lessons. From that terrible project, I learned plenty about team communication, Agile processes, and doing software design within an Agile process (XP in this case).

If you’ve followed my twitter feed the past couple years, you know that I have routinely expressed some frustrations working within a long running waterfall project. I finally rolled off that project this past Friday after 2+ years, and I have some thoughts. I definitely appreciate some of the personal relationships that came out of the project, but I’m not leaving feeling very satisfied by how the project went overall. That makes it time to reflect and figure out what actionable lessons I can take from the project for future endeavors.

Because it’s an easy writing crutch, let’s go to the good, the bad, and the ugly of the project and what I (re)learned this time around:

The Good

Let’s start with some positives. I’ve been a big believer in capturing business rule requirements as “executable specifications” (acceptance test driven development, or some folks’ version of behavior driven development) for years. While we weren’t allowed by the client to use my preferred tooling (grumble) to do that and had to write some custom tooling, we had some genuine success with executable specifications as requirements. Think “given these exact inputs, when we run the business rules, these are the exact validation errors and/or transactions that should be triggered next.” Our client had some very involved business rules that were driven by even more complex databases, and having the client review and adjust our acceptance tests written as examples instead of just sticking with vaguely worded Word documents made a hugely positive difference for us.

Automating all deployments through Azure DevOps was a big win, especially when we finally got folks outside of our organization to stop deploying directly from Visual Studio.Net. It’s vitally important to have good traceability from the binaries deployed in testing or production to the revision of the code in source control. I learned that way back when in my previous “worst project ever”, and we re-learned that in this project. The one aspect of this project that was an undeniable success was introducing continuous integration and some continuous delivery to our customer.

The Bad

Developers that do not communicate or interact well with other developers simply cannot be placed in important positions in integration projects regardless of their technical ability or domain knowledge.

We started adding some environment tests (with an ancient, but still working, feature in StructureMap!) into our deployment pipelines about halfway through, but I wish we’d gone much farther. The overall technical ecosystem was not perfectly reliable and there were dependencies that still had manual deployments, so it became very important to have self-diagnosing deployments that could tell you quickly when things like database connections, configuration for external dependencies, or expected network shares were broken or unreachable.
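
As a rough sketch of what I mean by environment tests (this is not the StructureMap feature we actually used, and the names here are mine), the individual checks can be as simple as this:

using System;
using System.Data.Common;
using System.IO;
using System.Threading.Tasks;

public static class EnvironmentTests
{
    // Fails fast with a clear error if the configured database is unreachable
    public static async Task AssertCanConnectToDatabase(Func<DbConnection> connectionFactory)
    {
        using var connection = connectionFactory();
        await connection.OpenAsync();
    }

    // Verifies that an expected network share or drop folder is actually reachable
    public static void AssertDirectoryIsReachable(string path)
    {
        if (!Directory.Exists(path))
        {
            throw new InvalidOperationException($"Expected path '{path}' is unreachable");
        }
    }
}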

I was on a call this morning for a new project just getting off the ground and wanted to give a giant high five through Zoom to our DevOps architect when he showed how he was planning to build environment tests directly into this new project’s CD pipeline.

Conway’s Law is not evil per se, but it does exist and you absolutely need to be aware of it when you lay out both your desired architecture and organizational structure. We were absolutely screwed over by Conway’s Law on this project, and the consequence to the customer is a system that has performance problems and is harder to troubleshoot and support than it needed to be because the service boundaries fell in problematic ways based on the organizational model rather than on what would have made sense from a technical or even problem domain perspective.

I wish we’d engaged much earlier with the client’s operations team. It was waterfall after all, so they only expected to get support and troubleshooting documentation near the end of the project. That said, while I still think our basic architecture was appropriate from a technical perspective, it didn’t at all fit into the operation team’s comfort zone and the customer’s existing infrastructure for application support. Either we could have worked with them to make them comfortable with alternative tooling for operations monitoring much earlier in the project, or we could have bitten the bullet and made the systems act much more like the batch driven mainframe tools they were used to.

The error handling we designed into our asynchronous support was heavily based off of Jasper’s existing error handling, which in turn grew out of my experiences at my previous company where we dealt with large volumes and frequent transient errors. In this ecosystem, our problems were more systematic: a downstream system would be either completely down or misconfigured so that every interaction with it failed. In that case, we really needed a circuit breaker strategy for the error handling inside the message handling code. The main lesson here is to be careful you aren’t trying to fight the last war.
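
To make the circuit breaker idea concrete, here’s a minimal sketch using Polly. We did not use Polly on that project; it’s just a well-known library that happens to illustrate the strategy, and the DownstreamCaller type here is hypothetical:

using System;
using System.Threading.Tasks;
using Polly;
using Polly.CircuitBreaker;

public class DownstreamCaller
{
    // After 5 consecutive failures the circuit opens for a minute, and every call
    // fails fast with BrokenCircuitException instead of hammering the downstream system
    private static readonly AsyncCircuitBreakerPolicy _breaker = Policy
        .Handle<Exception>()
        .CircuitBreakerAsync(
            exceptionsAllowedBeforeBreaking: 5,
            durationOfBreak: TimeSpan.FromMinutes(1));

    public async Task TryCall(Func<Task> callDownstreamSystem)
    {
        try
        {
            await _breaker.ExecuteAsync(callDownstreamSystem);
        }
        catch (BrokenCircuitException)
        {
            // The downstream system is considered down, so park or requeue the
            // message for later instead of failing it over and over
        }
    }
}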

I wish there had been an overall architect over all the elements of this large initiative (preferably me). I only had a window into the specific elements my teams were building out early on, and didn’t get to see the bigger picture until much later — assuming that I ever did. Everyone else was in the same boat, and I felt like we were all the proverbial blind men trying to describe an elephant by feel.

The Ugly

It’s hard to describe, but my single biggest regret on this project was not pushing much harder to create an effective automated testing strategy against our integration with an extremely problematic 3rd party system. Having to depend so much on very slow, laborious manual testing against this 3rd party system at the center of everything was, in my opinion, the bottleneck of all our efforts. Most of our technical risk and production issues have been related to this 3rd party system. A fully automated test suite might have allowed us to iterate much faster and find and fix the integration problems much earlier.

When you have the choice, don’t write custom infrastructure when there are viable, commonly used, off the shelf components. The senior management at the beginning of this project were very apprehensive of using any kind of new infrastructure, especially open source tools.

Since this was primarily an integration project, asynchronous messaging should have been a very good fit. We wrote a tiny shared library for managing asynchronous communication between applications using Rabbit MQ as the underlying transport, but with some hooks for an easy move later to Azure Service Bus. That tiny library had to continuously evolve into something much larger as the use cases became more complex, to the point where I felt like it was dominating our workload.

I won’t even say “in retrospect” because we knew this full well from day one, but the project would have gone better if we’d been able to use an off the shelf toolset like MassTransit, NServiceBus, or my own Jasper framework for the messaging support. I wish that I’d made a much bigger push at the time to build out a much more robust messaging foundation, but I felt lucky at the time just to get Rabbit MQ approved.

At the end we actually had a consensus agreement to rip out our custom messaging library and replace that with MassTransit, but the clock ran out on us. If and when the customer is able to do that themselves, I think they’ll have much more robust error handling and instrumentation that should result in more successful daily operations.

There were more egregious “NIH” violations than what I described above, but I’m only going to deal with issues where I had some level of control or influence.

The waterfall software process on this project was just as problematic as it ever was. We had to spend a lot of energy upfront on intermediate deliverables that didn’t add much value, but the killer as usual with waterfall was how damn slow the feedback cycles were and not doing any substantial integration testing until very late in the project. I’m aware that many of you reading this will have very negative opinions and experiences with Agile (I blame Scrum though), but Agile done reasonably well means having rapid and early feedback cycles to find and fix problems quickly.

Shared databases are a common scourge of enterprise architectures. Dating all the way back to my 2005 (!) post Overthrowing the Tyranny of the Shared Database, sharing databases between applications has been a massive pet peeve of mine. Hell, tilting at the shared database windmill at my previous company contributed a little bit to me leaving. At the very least, make sure the %^$&$^&%ing shared database structure is completely described in source control somewhere and fully scripted out so any developer can quickly spin up an up to date copy of that database for testing as needed. If you depend on manual database changes made independently of the application development around the shared database, you need to expect a great deal of friction and production problems related to your shared database.

One more time with feeling for my longtime readers:

Sharing a database between applications is like drug users sharing needles

Things to research for later

The big takeaways from me on this project are to add some additional error handling and distributed tracing approaches to my integration project tool belt. As soon as I get a chance, I’m doing a deeper dive into the OpenTelemetry specification with a thought toward adding direct support in Jasper and maybe Marten as a learning experience. I’m also going to add some circuit breaker support directly into Jasper.

For any of you who are huge fans of Stephen King’s Dark Tower novels, you know that King modeled Roland on Clint Eastwood’s character from the spaghetti westerns, but living inside of a Lord of the Rings style epic tale. I think Idris Elba would have been awesome as Roland in the Dark Tower movie if they hadn’t changed the story and the character so much from the books. Grrr.

Calling Generic Methods from Non-Generic Code in .Net

Somewhat often (or at least it feels that way this week) I’ll run into the need to call a method with a generic type argument from code that isn’t generic. To make that concrete, here’s an example from Marten. The main IDocumentSession service has a method called Store() that directs Marten to persist one or more documents of the same type. That method has this signature:

void Store<T>(params T[] entities);

That method would typically be used like this:

using (var session = store.OpenSession())
{
    // The generic constraint for "Team" is inferred from the usage
    session.Store(new Team { Name = "Warriors" });
    session.Store(new Team { Name = "Spurs" });
    session.Store(new Team { Name = "Thunder" });

    session.SaveChanges();
}

Great, and easy enough (I hope), but Marten also has this method where folks can add a heterogeneous mix of any kind of document types all at once:

void StoreObjects(IEnumerable<object> documents);

Internally, that method groups the documents by type, then delegates to the proper Store<T>() method for each document type — and that’s where this post comes into play.

(Re-)Introducing Baseline

Baseline is a library available on Nuget that provides oodles of little helper extension methods on common .Net types and very basic utilities that I use in almost all my projects, both OSS and at work. Baseline is an improved subset of what was long ago FubuCore (FubuCore was huge, and it also spawned Oakton), but somewhat adapted to .Net Core.

I wanted to call this library “spackle” because it fills in usability gaps in the .Net base class library, but Jason Bock beat me to it with his Spackle library of extension methods. Since I expected this library to be used as a foundational piece from within basically all the projects in the JasperFx suite, I chose the name “Baseline” which I thought conveniently enough described its purpose and also because there’s an important throughway near the titular Jasper called “Baseline”. I don’t know for sure that it’s the basis for the name, but the Battle of Carthage in the very early days of the US Civil War started where this road is today.

Crossing the Non-Generic to Generic Divide with Baseline

Back to the Marten StoreObjects(object[]) calling Store<T>(T[]) problem. Baseline has a helper extension method called CloseAndBuildAs<T>() that I frequently use to solve this problem. It’s unfortunately a little tedious, but first we design a non-generic interface that will wrap the calls to Store<T>() like this:

internal interface IHandler
{
    void Store(IDocumentSession session, IEnumerable<object> objects);
}

And a concrete, open generic type that implements IHandler:

internal class Handler<T>: IHandler
{
    public void Store(IDocumentSession session, IEnumerable<object> objects)
    {
        // Delegate to the Store<T>() method
        session.Store(objects.OfType<T>().ToArray());
    }
}

Now, the StoreObjects() method looks like this:

public void StoreObjects(IEnumerable<object> documents)
{
    assertNotDisposed();

    var documentsGroupedByType = documents
        .Where(x => x != null)
        .GroupBy(x => x.GetType());

    foreach (var group in documentsGroupedByType)
    {
        // Build the right handler for the group type
        var handler = typeof(Handler<>).CloseAndBuildAs<IHandler>(group.Key);
        handler.Store(this, group);
    }
}

The CloseAndBuildAs<T>() method above does a couple things behind the scenes:

  1. It creates a closed type for the proper Handler<T> based on the type arguments passed into the method
  2. Uses Activator.CreateInstance() to build the concrete type
  3. Casts that object to the interface supplied as a generic argument to the CloseAndBuildAs<T>() method

The method shown above is here in GitHub. It’s not shown, but there are some extra overloads to also pass in constructor arguments to the concrete types being built.
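
If you’re curious about the mechanics, here’s a rough approximation of what a helper like that does under the covers. This is not the actual Baseline source, just a simplified sketch of the three steps described above:

using System;

public static class OpenTypeExtensions
{
    // Simplified: assumes the closed type has a public, parameterless constructor
    public static T CloseAndBuildAs<T>(this Type openType, params Type[] parameterTypes)
    {
        // e.g. typeof(Handler<>) closed with typeof(User) becomes typeof(Handler<User>)
        var closedType = openType.MakeGenericType(parameterTypes);

        // Build the concrete object and cast it to the known, non-generic interface
        return (T) Activator.CreateInstance(closedType);
    }
}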

Marten Quickstart with .Net Core HostBuilder

The Marten Community just released Marten 3.12 with a mix of new functionality and plenty of bug fixes this past week. In particular, we added some new extension methods directly into Marten for integration into .Net Core applications that are bootstrapped by the new generic host builder in .Net Core 3.*.

There’s a new runnable, sample project in GitHub called AspNetCoreWebAPIWithMarten that contains all the code from this blog post.

For a small sample ASP.Net Core web service using Marten’s new integration, let’s start a new project with the dotnet new webapi template. Doing this gives you some familiar files that we’re going to edit in a bit:

  1. appsettings.json — standard configuration file for .Net Core
  2. Program.cs — has the main command line entry point for .Net Core applications. We aren’t actually going to touch this right now, but there will be some command line improvements to Marten v4.0 soon that will add some important development lifecycle utilities that will require a 2-3 line change to this file. Soon.
  3. Startup.cs — the convention based Startup class that holds most of the configuration and bootstrapping for a .Net Core application.

Marten does sit on top of Postgresql, so let’s add a docker-compose.yml file to the codebase for our local development database server like this one:

version: '3'
services:
  postgresql:
    image: "clkao/postgres-plv8:latest"
    ports:
      - "5433:5432"

At the command line, run docker-compose up -d to start up your new Postgresql database in Docker.

Next, we’ll add a reference to the latest Marten Nuget to the main project. In the appsettings.json file, I’ll add the connection string to the Postgresql container we defined above:

"ConnectionStrings": {
  "Marten": "Host=localhost;Port=5433;Database=postgres;Username=postgres;password=postgres"
}

Finally, let’s go to the Startup.ConfigureServices() method and add this code to register Marten services:

public class Startup
{
    // We need the application configuration to get
    // our connection string to Marten
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
         // This line registers Marten services with all default
         // Marten settings
         var connectionString = Configuration.GetConnectionString("Marten");
         services.AddMarten(connectionString);

         // And some other stuff we don't care about in this post...
    }
}

And that’s it, we’ve got Marten configured in its most basic “getting started” usage with these services in our application’s IoC container:

  1. IDocumentStore with a Singleton lifetime. The document store can be used to create sessions, query the configuration of Marten, generate schema migrations, and do bulk inserts.
  2. IDocumentSession with a Scoped lifetime for all read and write operations. By default, this is created with the IDocumentStore.OpenSession() method and the session will have the identity map behavior.
  3. IQuerySession with a Scoped lifetime for all read operations against the document store.

Now, let’s build out a very rudimentary issue tracking web service to capture and persist new issues as well as allow us to query for existing, open issues:

public class IssueController : ControllerBase
{
    // This endpoint captures, creates, and persists
    // new Issue entities with Marten
    [HttpPost("/issue")]
    public async Task<IssueCreated> PostIssue(
        [FromBody] CreateIssue command,
        [FromServices] IDocumentSession session)
    {
        var issue = new Issue
        {
            Title = command.Title,
            Description = command.Description
        };

        // Register the new Issue entity
        // with Marten
        session.Store(issue);

        // Commit all pending changes
        // to the underlying database
        await session.SaveChangesAsync();

        return new IssueCreated
        {
            IssueId = issue.Id
        };
    }

    [HttpGet("/issues/{status}/")]
    public Task<IReadOnlyList<IssueTitle>> Issues(
        [FromServices] IQuerySession session,
        IssueStatus status)
    {
        // Query Marten's underlying database with Linq
        // for all the issues with the given status
        // and return an array of issue titles and ids
        return session.Query<Issue>()
            .Where(x => x.Status == status)
            .Select(x => new IssueTitle {Title = x.Title, Id = x.Id})
            .ToListAsync();
    }

    // Return only new issues
    [HttpGet("/issues/new")]
    public Task<IReadOnlyList<IssueTitle>> NewIssues(
        [FromServices] IQuerySession session)
    {
        return Issues(session, IssueStatus.New);
    }
}

And that is actually a completely working little application. With its default settings, Marten will create any necessary database tables on the fly, so we didn’t have to worry about much database setup other than having the new Postgresql database started in a Docker container. Likewise, we didn’t have to subclass any kind of Marten service like you would with Entity Framework Core, and we most certainly didn’t have to create any kind of Object/Relational Mapping just to get going. Additionally, we didn’t have to care too awfully much about how to get the right Marten services integrated into our application’s IoC container with the right scoping because that’s all handled for us by the AddMarten() extension method that comes out of the box with Marten 3.12 and greater.

In a follow up post later this week, I’ll show you how to customize the Marten service registrations for per-HTTP request multi-tenancy and traceable, contextual logging of Marten-related SQL execution.

The Fundamentals of Continuous Software Design

Continuing my recent theme of remembering why we originally thought Agile was a good thing before it devolved into whatever it is now.

I had the opportunity over the weekend to speak online as part of CouchCon Live. My topic was to revisit some of the principles of designing software inside of an adaptive Agile Software process in a talk entitled “The Fundamentals of Continuous Software Design.”

The video has posted on YouTube, and the slides are available on SlideShare.

I went back through the Agile greatest hits with:

  • YAGNI
  • Do the Simplest Thing that Could Possibly Work
  • The Last Responsible Moment
  • Reversibility in Software Architecture
  • Designing for Testability
  • How the full development team should be involved throughout
  • And why I think contemporary Scrum is the Scrappy Doo of Agile Software Development


Remembering Why Agile was a Big Deal

Earlier this year I recorded a podcast for Software Engineering Radio with Jeff Doolittle on Agile versus Waterfall Software Development where we discussed the vital differences between Agile development and traditional waterfall, and why those differences still matter. This post is the long-awaited companion piece I couldn’t manage to finish before the podcast posted. I might write a follow up post on some software engineering practices like continuous design later so that I can strictly focus here on softer process related ideas.

I started my software development career writing Shadow IT applications and automation for my engineering group. No process, no real practices either, and lots of code. As you’d probably guess, I later chafed very badly at the formal waterfall processes and organizational model of my first “real” software development job for plenty of reasons I’ll detail later in this post.

During that first real IT job, I started to read and learn about alternative iterative development processes like the Rational Unified Process (RUP), but I was mostly inspired by the brand new, shiny Extreme Programming (XP) method. After tilting at windmills a bit at my waterfall shop to try to move us from waterfall to RUP (Agile would have been a bridge too far), I jumped to a consulting company that was an influential, early adopter of XP. I’ve worked almost exclusively with Agile processes and inside more or less Agile organizational models ever since — until recently. In the past couple years I’ve been involved with a long running project in a classic waterfall shop, which has absolutely reinforced my belief in the philosophical approaches to software development that came out of Agile Software (or Lean Programming). I also think some of those ideas are somewhat lost in contemporary Scrum’s monomaniacal focus on project management, so here’s a long blog post talking about what I think really was vital in the shift from waterfall to Agile development.

First, what I believe in

A consistent theme in all of these topics is trying to minimize the amount of context switching throughout a project and anybody’s average day. I think that Agile processes made a huge improvement in that regard over the older waterfall models, and that by itself is enough to justify staking waterfall through the heart permanently in my book.

Self-contained, multi-disciplinary teams

My strong preference and a constant recommendation to our clients is to favor self-contained, multi-disciplinary teams centered around specific projects or products. What I mean here is that the project is done by a team who is completely dedicated to working on just that project. That team should ideally have every possible skillset it needs to execute the project so that there’s reduced need to coordinate with external teams, so it’s whatever mix you need of developers, testers, analysts all working together on a common goal.

In the later sections on what I think is wrong with the waterfall model, I bring up the fallacy of local optimization. In terms of a software project, we need to focus on optimizing the entire process of delivering working software, not just on what it takes to code a feature, or write tests, or quickly checking off a process quality gate. A self-contained team is hopefully all focused on delivering just one project or sprint, so there should be more opportunity to optimize the delivery. This generally means things like developers purposely thinking about how to support the testers on their team or using practices like Executable Specifications for requirements that shortens the development and testing time overall, even if it’s more work for the original analysts upfront.

A lot of the overhead of software projects is communication between project team members. To be effective, you need to improve the quality of that communication so that team members have a shared understanding of the project. I also think you’re a lot less brittle as a project if you have fewer people to communicate with. In a classic waterfall shop, you may need to involve a lot of folks from external projects who are simultaneously working on several other initiatives and have a different schedule than your project’s schedule. That tends to force communication into either occasional meetings (which hurt productivity in their own right) or asynchronous communication via emails or intermediate documentation like design specifications.

Let’s step back and think about the various types of communication you use with your colleagues, and what actually works. Take a look at this diagram from Scott Ambler’s Communication on Agile Software Teams essay (originally adapted from some influential writings by Alistair Cockburn that I couldn’t quite find this morning):

(Diagram: modes of communication ranked by effectiveness, adapted from Scott Ambler’s Communication on Agile Software Teams essay)

In a self-contained, multi-disciplinary team, you’re much more likely to be using the more effective forms of communication in the upper, right hand of the graph. In a waterfall model where different disciplines (testers, developers, analysts) may be working in different teams and on different projects at any one time, the communication is mostly happening at the less effective, lower left of this diagram.

I think that a self-contained team suffers much less from context switching, but I’ll cover that in the section on delivering serially.

Another huge advantage to self-contained teams is the flexibility in scheduling and the ability to adapt to changing project circumstances and whatever you’re learning as you go. This is the idea of Reversibility from Extreme Programming:

“If you can easily change your decisions, this means it’s less important to get them right – which makes your life much simpler.” — Martin Fowler

In a self-contained team, your reversibility is naturally higher, and you’re more likely able to adapt and incorporate learning throughout the project. If you’re dependent upon external teams or can’t easily change approved specification documents, you have much lower reversibility.

If you’re interested in Reversibility, I gave a technically focused talk on that at Agile Vancouver several years ago.

I think everything in this section still applies to teams that are focused on a product or family of products.

Looking over my history, I’ve written about this topic quite a bit over the years:

  1. On Software Teams
  2. Call me a Utopian, but I want my teams flat and my team members broad
  3. Self Organizing Teams are Superior to Command n’ Control Teams
  4. Once Upon a Team
  5. The Anti Team
  6. On Process and Practices
  7. Want productivity? Try some team continuity (and a side of empowerment too) – I miss this team.
  8. The Will to be Good
  9. Learning Lessons — Can You Make Mistakes at Work?
  10. Indelible Proof of a Healthy Team

Deliver serially over working in parallel

A huge shift in thinking behind Agile Software Development is simply the idea that the only thing that matters is delivering working software. Not design specifications, not intermediate documents, not process checkpoints passed, but actual working software deployed to production.

In practice, this means that Agile teams seek to deliver completely working features or “vertical slices” of functionality at one time. In this model a team strives for the continuous delivery model of constantly making little releases of working software.

Contrast this idea to the old “software as construction” metaphor from my waterfall youth where we generally developed by:

  1. Get the business requirements
  2. Do a high level architecture document
  3. Maybe do a lower level design specification (or do a prototype first and pretend you wrote the spec first)
  4. Design the database model for the new system
  5. Code the data layer
  6. Code the business layer
  7. Code any user interface
  8. Rework 4-6 because you inevitably missed something or realized something new as you went
  9. Declare the system “code complete”
  10. Start formal testing of the entire system
  11. Fix lots of bugs on 4-6
  12. User acceptance testing (hopefully)
  13. Release to production

The obvious problem in this cycle is that you deliver no actual value until the very, very end of the project. You also struggle quite a bit in the later parts of the project because you need to revisit work that was done much earlier, with all the context switching that entails.

In contrast, delivering in vertical slices allows you to:

  • Minimize context switching because you’re finishing work much closer to when it’s started. With a multi-disciplinary team, everybody is focused on a small subset of features at any one time, which tends to improve the communication between disciplines.
  • Actually deliver something much earlier to production and start accruing some business payoff. Moreover, you should be working in the order of business priority, so the most important things are worked on and completed first, which also serves to fail softer compared to a waterfall project cycle.
  • Fail softer by delivering part of a project on time even if you’re not able to complete all the planned features by the theoretical end date — as opposed to failing completely to deliver anything on time in a waterfall model.

In the Extreme Programming days we talked a lot about the concept of done, done, done as opposed to being theoretically “code complete.”


Rev’ing up feedback loops

After coming back to waterfall development the past couple years, the most frustrating thing to me is how slow the feedback cycles are between doing or deciding something and knowing whether or not anything you did was really correct. It also turns out that having a roomful of people staring at a design specification document in a formal review doesn’t do a great job at spotting a lot of problems that present themselves later in the project when actual code is being written.

Any iterative process helps tighten feedback cycles and enables you to fix issues earlier. What Agile brought to the table was an emphasis on better, faster, more fine-grained feedback cycles through project automation and practices like continuous integration and test driven development.

More on the engineering practices in later posts. Maybe. It literally took me 5 years to go from an initial draft to publishing this post, so don’t hold your breath.


What I think is wrong with classic waterfall development

Potentially a lot. Your mileage may vary from mine (my experiences with formal waterfall processes have been very negative) and I’m sure some of you are succeeding just fine with waterfall processes of one sort or another.

At its most extreme, I’ve observed the following traits in shops with formal waterfall methods, all of which I think work against successful delivery:

Over-specialization of personnel

I’m not even just talking about developers vs testers. I mean having some developers or groups who govern the central batch scheduling infrastructure, others that own integrations, a front end team maybe, and obviously the centralized database team. Having folks specialized in their roles like this means that it takes more people involved in a project to cover all the necessary skillsets, and that means a lot more effort to communicate and collaborate between people who aren’t on the same teams or even in the same organizations. That’s a lot of potential project overhead, and it makes your project less flexible because you’re bound by the constraints of your external dependencies.

The other problem with over-specialization is the fallacy of local optimization problem, because many folks only have purview over part of a project and don’t necessarily see or understand the whole project.

Formal, intermediate documents

I’m not here to start yet another argument over how much technical documentation is enough. What I will argue about is a misplaced focus on formal, intermediate documents as a quality or process gate, especially if those intermediate documents are really meant to serve as the primary communication between architects, analysts, and developers. One, because that’s a deeply inefficient way to communicate. Two, because those documents are always wrong because they’re written too early. Three, because it’s just a lot of overhead to go through authoring those documents to get through a formal process gate that could be better spent on getting real feedback about the intended system or project.

Slow feedback cycles

Easily the thing I hate the most about “true” waterfall projects is the length of time between getting adequate feedback between the early design and requirements documents and an actually working system getting tested or going through some user acceptance testing from real users. This is an especially pernicious issue if you hew to the idea that formal testing can only start after all designed features are complete.

The killer problem in larger waterfall projects over my career is that you’re always trying to remember how some piece of code you wrote 3-6 months ago works when problems finally surface from real testing or usage.


Summary

I’d absolutely choose some sort of disciplined Agile process with solid engineering practices over any kind of formal waterfall process any day of the week. I think waterfall processes do a wretched job managing project risks by the slow, ineffective feedback cycles and waste a lot of energy on unevenly useful intermediate documentation.

Agile hasn’t always been pretty for me though, see The Surprisingly Valuable and Lasting Lessons I Learned from a Horrible Project about an absolutely miserable XP project.

Ironically, the most successful project I’ve ever worked on from a business perspective was technically a waterfall project (a certain 20-something first time technical lead basically ignored the official process), but process wasn’t really an issue because:

  • We had a very good relationship with the business partners including near constant feedback about what we were building. That project is still the best collaboration I’ve ever experienced with the actual business experts
  • There was a very obvious problem to solve for the business that was likely to pay off quickly
  • Our senior management did a tremendous job being a “shit umbrella” to keep the rest of the organization from interfering with us
  • It was a short project

And projects like that just don’t come around very often, so I wouldn’t read much into the process being the deciding factor in its success.


Marten v4.0 Planning Document (Part 1)

As I wrote about a couple weeks ago in a post called Kicking off Marten V4 Development, the Marten team is actively working on the long delayed v4.0 release with planned improvements for performance, the Linq support, and a massive planned jump in capability for the event store functionality. This post is the result of a long comment thread and many discussions within the Marten community.

We don’t yet have a great consensus about the exact direction that the event store improvements are going to go, so I’m strictly focusing on improvements to the Marten internals and the Document Database support. I’ll follow up with a part 2 on the event store as that starts to gel.

If you’re interested in Marten, here’s some links:


Pulling it Off

We’ve got the typical problems of needing to address incoming pull requests and bug issues in master while probably needing to have a long lived branch for v4.

As an initial plan, let’s:

  1. Start with the unit testing improvements as a way to speed up the build before we dive into much more of this? This is in progress with about a 25% reduction in test throughput time so far in this pull request
  2. Do a bug sweep v3.12 release to address as many of the tactical problems as we can before branching to v4
  3. Possibly do a series of v4, then v5 releases to do this in smaller chunks? We’ve mostly said do the event store as v4, then Linq improvements as v5 — Nope, full speed ahead with a large v4 release in order to do as many breaking changes as possible in one release
  4. Extract the generic database manipulation code to its own library to clean up Marten, and speed up our builds to make the development work be more efficient.
  5. Do the Event Store v4 work in a separate project built as an add on from the very beginning, but leave the existing event store in place. That would enable us to do a lot of work and mostly be able to keep that work in master so we don’t have long-lived branch problems. Break open the event store improvement work because that’s where most of the interest is for this release.

Miscellaneous Ideas

  • Look at some kind of object pooling for the DocumentSession / QuerySession objects?
  • Ditch the document-by-document-type schema configuration where document “A” can be in one schema and document “B” is in another schema. Do that, and I think we open the door for multi-tenancy by schema.
  • Eliminate ManagedConnection altogether. I think it results in unnecessary object allocations and it’s causing more harm than help as it’s been extended over time. After studying that more today, it’s just too damn embedded. At least try to kill off the Execute methods that take in a Lambda. See this GitHub issue.
  • Can we consider ditching < .Net Core or .Net v5 for v4? The probable answer is “no,” so let’s just take this off the table.
  • Do a hunt for classes in Marten marked public that should be internal. Here’s the GitHub issue.
  • Make the exceptions a bit more consistent

Dynamic Code Generation

If you look at the pull request for Document Metadata and the code in Marten.Schema.Arguments you can see that our dynamic Expression to Lambda compilation code is getting extremely messy, hard to reason with, and difficult to extend.

Idea: Introduce a dependency on LamarCodeGeneration and LamarCompiler. LamarCodeGeneration has a strong model for dynamically generating C# code at runtime. LamarCompiler adds runtime Roslyn support to compile assemblies on the fly and utilities to attach/create these classes. We could stick with Expression to Lambda compilation, but that can’t really handle any kind of asynchronous code without some severe pain and it’s far more difficult to reason about (Jeremy’s note: I’m uniquely qualified to make this statement unfortunately).

What gets dynamically generated today:

  • Bulk importer handling for a single entity
  • Loading entities and tracking entities in the identity map or version tracking

What could be generated in the future:

  • Document metadata properties — but sad trombone, that might have to stay with Expressions if the setters are internal/private :/
  • Much more of the ISelector implementations, especially since there’s going to be more variability when we do the document metadata
  • Finer-grained manipulation of the IIdentityMap

Jeremy’s note: After doing some detailed analysis through the codebase and the spots that would be impacted by the change to dynamic code generation, I’m convinced that this will lead to significant performance improvements by eliminating many existing runtime conditional checks and casts.

Track this work here.

Unit Testing Approach

This is in progress, and going well.

If we introduce the runtime code generation back into Marten, that’s unfortunately a non-trivial “cold start” testing issue. To soften that, I suggest we get a lot more aggressive with reusable xUnit.Net class fixtures to reuse generated code between tests, and cut way down on the sheer number of database calls by not having to frequently check the schema configuration or pay other DocumentStore overhead.
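
As a sketch of the kind of reuse I mean (the configuration details here are hypothetical), an xUnit.Net class fixture can hold one shared DocumentStore so that the code generation and schema checks only happen once per test class:

using System;
using Marten;
using Xunit;

public class StoreFixture : IDisposable
{
    public StoreFixture()
    {
        // One DocumentStore -- and therefore one round of code generation and
        // schema checking -- shared by every test in the class
        Store = DocumentStore.For(opts =>
        {
            opts.Connection("Host=localhost;Database=marten_testing;Username=postgres;Password=postgres");
            opts.DatabaseSchemaName = "isolated_test_schema";
        });
    }

    public DocumentStore Store { get; }

    public void Dispose() => Store.Dispose();
}

public class querying_with_a_shared_store : IClassFixture<StoreFixture>
{
    private readonly StoreFixture _fixture;

    public querying_with_a_shared_store(StoreFixture fixture)
    {
        _fixture = fixture;
    }

    // Individual [Fact] methods would open cheap sessions against _fixture.Store
}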

A couple other points about this:

  • We need to create more unique document types so we’re not having to use different configurations for the same document type. This would enable more reuse inside the testing runtime
  • Be aggressive with separate schemas for different configurations
  • We could possibly turn on xUnit.net parallel test running to speed up the test cycles
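As a rough sketch of what that shared fixture approach could look like (the connection string, schema name, and test body here are placeholders rather than real Marten test code):

    public class DefaultStoreFixture : IDisposable
    {
        public DefaultStoreFixture()
        {
            // Built once per test class, so the schema checks and any generated
            // code get paid for a single time instead of once per test
            Store = DocumentStore.For(opts =>
            {
                opts.Connection("Host=localhost;Database=marten_testing;Username=postgres;Password=postgres");
                opts.DatabaseSchemaName = "default_store_tests";
            });
        }

        public DocumentStore Store { get; }

        public void Dispose() => Store.Dispose();
    }

    public class querying_with_the_default_store : IClassFixture<DefaultStoreFixture>
    {
        private readonly DefaultStoreFixture _fixture;

        public querying_with_the_default_store(DefaultStoreFixture fixture)
        {
            _fixture = fixture;
        }

        [Fact]
        public void can_open_a_query_session_against_the_shared_store()
        {
            using var session = _fixture.Store.QuerySession();
            // ... exercise the shared, already-initialized store here
        }
    }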

Document Metadata

  • From the feedback on GitHub, it sounds like the desire is to extend the existing metadata to track data like correlation identifiers, transaction ids, user ids, etc. To make this data easy to query on, I would prefer that it be stored as separate columns in the underlying storage
  • Use the configuration and tests from the pull request for Document Metadata, but use the Lamar-backed dynamic code generation from the previous section to pull this off.
  • I strongly suggest using a new dynamic codegen model for the ISelector objects that would be responsible for putting Marten’s own document metadata like IsDeleted or TenantId or Version onto the resolved objects (but that falls apart if we have to use private setters)
  • I think we could expand the document metadata to allow for user defined properties like “user id” or “transaction id” much the same way we’ll do for the EventStore metadata. We’d need to think about how we extend the document tables and how metadata is attached to a document session

My thought is to designate one .Net type (or maybe a few?) as the "metadata type" for your application, maybe something like this one:

    public class MyMetadata
    {
        public Guid CorrelationId { get; set; }
        public string UserId { get; set; }
    }

Maybe that gets added to the StoreOptions something like:

    var store = DocumentStore.For(x => {
        // other stuff

        // This would direct Marten to add extra columns to
        // the documents and events for the metadata properties
        // on the MyMetadata type.

        // This would probably be a fluent interface to optionally fine tune
        // the storage and applicability -- i.e., to all documents, to events, etc.
        x.UseMetadataType<MyMetadata>();
    });

Then at runtime, you’d do something like:

    session.UseMetadata<MyMetadata>(metadata);

Either through documentation or through the new, official .Net Core integration, we'd have patterns to automatically set that metadata on new DocumentSession objects as they're created from the IoC container, making the tracking seamless.
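As a purely speculative sketch of that recipe using the standard .Net Core container, where UseMetadata() is the hypothetical API from above and the rest is plain DI wiring:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddHttpContextAccessor();

        // Hand out sessions that are already stamped with the request's metadata
        services.AddScoped<IDocumentSession>(provider =>
        {
            var store = provider.GetRequiredService<IDocumentStore>();
            var http = provider.GetRequiredService<IHttpContextAccessor>().HttpContext;

            var metadata = new MyMetadata
            {
                CorrelationId = Guid.NewGuid(),
                UserId = http?.User?.Identity?.Name
            };

            var session = store.LightweightSession();
            session.UseMetadata(metadata); // hypothetical V4 API
            return session;
        });
    }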

Extract Generic Database Helpers to its own Library

  • Pull everything to do with Schema object generation, difference detection, and DDL generation to a separate library (IFeatureSchema, ISchemaObject, etc.). Mostly to clean out the main library, but also because this code could easily be reused outside of Marten. Separating it out might make it easier to test and extend that functionality, which is something that occasionally gets requested. There's also the possibility of further breaking that into abstractions and implementations for the long run of getting us ready for Sql Server or other database engine support. The tests for this functionality are slow, and rarely change. It would be advantageous to get this out of the main Marten library and testing project.
  • Pull the ADO.Net helper code like CommandBuilder and the extension methods into a small helper library somewhere else (I’m nominating the Baseline repository). This code is copied around to other projects as it is, and it’s another way of getting stuff out of the main library and the test suite.
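For a flavor of the kind of ADO.Net helper code in that second bullet, here's a tiny illustrative extension method in the spirit of CommandBuilder; the name and shape are made up for this post rather than Marten's actual API:

    using System;
    using Npgsql;
    using NpgsqlTypes;

    public static class NpgsqlCommandExtensions
    {
        // Fluent parameter helper so calling code can chain several parameters:
        // cmd.With("id", id, NpgsqlDbType.Uuid).With("name", name, NpgsqlDbType.Varchar);
        public static NpgsqlCommand With(this NpgsqlCommand command, string name, object value, NpgsqlDbType dbType)
        {
            var parameter = new NpgsqlParameter
            {
                ParameterName = name,
                NpgsqlDbType = dbType,
                Value = value ?? DBNull.Value
            };

            command.Parameters.Add(parameter);
            return command;
        }
    }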

Track this work in this GitHub issue.

F# Improvements

We'll have a virtual F# subcommittee watching this work for F#-friendliness.

HostBuilder Integration

We'll bring Joona-Pekka Kokko's ASP.Net Core integration library into the main repository and make that the officially blessed and documented recipe for integrating Marten into .Net Core applications based on the generic HostBuilder. I suppose we could also multi-target IWebHostBuilder for ASP.Net Core 2.*.

That HostBuilder integration could be extended to:

  • Optionally set up the Async Daemon in an IHostedService — more on this in the Event Store section
  • Optionally register some kind of IDocumentSessionBuilder that could be used to customize session construction?
  • Have some way to have container resolved IDocumentSessionListener objects attached to IDocumentSession. This is to have an easy recipe for folks who want events broadcast through messaging infrastructure in CQRS architectures
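For that last bullet, here's one rough sketch of how container resolved listeners might be attached, assuming SessionOptions exposes a Listeners collection; PublishChangesToBusListener is a made-up placeholder:

    public void ConfigureServices(IServiceCollection services)
    {
        // A hypothetical listener that publishes committed changes to a message bus
        services.AddSingleton<IDocumentSessionListener, PublishChangesToBusListener>();

        services.AddScoped<IDocumentSession>(provider =>
        {
            var store = provider.GetRequiredService<IDocumentStore>();

            // Attach every container-registered listener to the new session
            var options = new SessionOptions();
            foreach (var listener in provider.GetServices<IDocumentSessionListener>())
            {
                options.Listeners.Add(listener);
            }

            return store.OpenSession(options);
        });
    }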

See the GitHub issue for this.

Command Line Support

The Marten.CommandLine package already uses Oakton for command line parsing. For easier integration in .Net Core applications, we could shift it to the Oakton.AspNetCore package so the command line support can be added to any ASP.Net Core 2.* or .Net Core 3.* project just by installing the Nuget package. This might simplify usage because you'd no longer need a separate project for the command line support.

There are some long-standing stories about extending the command line support for event store projection rebuilds. I think that work would be more effective after switching over to Oakton.AspNetCore.

See the GitHub issue.

Linq

This is also covered by the Linq Overhaul issue.

  • Bring back the work in the linq branch for the revamped IField model within the Linq provider. This would be advantageous for performance, would clean up some conditional code in the Linq internals, could make the Linq support aware of JSON serialization customizations like [JsonProperty], and would probably help us deal with F# types like discriminated unions.
  • Completely rewrite the Include() functionality. Use a Postgresql Common Table Expression and UNION queries to fetch both the parent and any related documents in one query, without needing any kind of JOINs that complicate the selectors (see the SQL sketch after this list). There'd be a column for the document type that the code could use to switch on. The dynamic code generation would help here. This could finally knock out the long wished for Include() on child collections feature. This work would nuke the InnerJoin stuff in the ISelector implementations, and that would hugely simplify a lot of code.
  • Finer grained code generation would let us optimize the interactions with the IdentityMap. For purely query sessions, you could completely skip any kind of interaction with the IdentityMap instead of wasting cycles on nullo objects. You could pull a specific IdentityMap<TEntity, TKey> out of the larger identity map just before calling selectors to avoid some repetitive "find the right inner dictionary" work on each document resolved.
  • Maybe introduce a new concept of ILinqDialect where the Expression parsing would just detect what logical thing it finds (like !BoolProperty), and turns around and calls this ILinqDialect to get at a WhereFragment or whatever. This way we could ready ourselves to support an alternative json/sql dialect around JSONPath for Postgresql v12+ and later for Sql Server vNext. I think this would fit into the theme of making the Linq support more modular. It should make the Linq support easier to unit test as we go. Before we do anything with this, let’s take a deep look into the EF Core internals and see how they handle this issue
  • Consider replacing the SelectMany() implementation with Common Table Expression sql statements. That might do a lot to simplify the internal mechanics. Could definitely get us to an n-deep model.
  • Do the Json streaming story because it should be compelling, especially as part of the readside of a CQRS architecture using Marten’s event store functionality.
  • Possibly use a PLV8-based JsonPath polyfill so we could use sql/json immediately in the Linq support. More research necessary.
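To illustrate the Include() rewrite idea above, here's roughly the shape of SQL I have in mind, captured as a C# constant. The mt_doc_* table names follow Marten's naming convention, but the columns and the query itself are purely illustrative:

    public static class IncludeQuerySample
    {
        // One round trip returns both the matching orders and their related
        // customers, with a doc_type column the selector code can switch on
        public const string Sql = @"
    with parents as (
        select id, data, (data ->> 'CustomerId')::uuid as customer_id
        from mt_doc_order
        where data ->> 'Status' = 'Open'
    )
    select id, data, 'order' as doc_type from parents
    union all
    select c.id, c.data, 'customer' as doc_type
    from mt_doc_customer c
    where c.id in (select customer_id from parents)
    ";
    }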

Partial Updates

Use native Postgresql partial JSON updates wherever possible. Let's do a perf test on that first though.

A Small Case Study in Test Automation (and other things)

I'm trying to walk a line in this post between avoiding specifics about a client project, for obvious reasons, and providing enough detail to make the post worthwhile for that client. One of our client's development managers is interested in speeding up their testing, and I'm hoping to use this post to lay out some ideas and approaches to improve the testing procedures in this system.

I've been part of an integration project for the past couple of years that validates, routes, and processes financial transactions coming from an external partner of our client's all the way to a very large 3rd party application hosted in our client's environment. We're in the middle of some significant changes in the integration to that 3rd party application that are going to trigger a round of regression testing of the entire system — and that's where this post comes in. Testing this application has been very challenging and extremely time consuming. Any opportunity to make regression testing quicker and more effective is going to make everyone's jobs easier.

It’s not just that the testing itself is slower than desired. Because the testing is slow and not easily repeatable, the development team can’t really do much technical improvement through refactoring as they learn more about the system behavior and how the code structure is working out over time. That’s been a definite negative for code and architectural quality.

Before I get into the details of the existing system, know that what I'm showing and discussing here is a bit of an idealized version of how I wish we had architected the system and what we've recommended to the client for the longer term. The real system is a bit messier and significantly harder to test than what I'm presenting here — but there's a lesson for you: testability should be a first-class architectural goal in many cases (and Conway's Law is legitimately something to work around).

From a 10,000 foot level, here’s the entire system:

(Diagram: TestAutomationScenario - High Level)

The workflow is:

  1. A couple times a day, a new flat file containing new transactions will be dropped into a file share
  2. The File Reader console application is executed to find this file, parse it into little transaction messages, and publish those messages to Rabbit MQ. There’s a little bit of database tracking going on for reporting and just general activity tracking.
  3. Rabbit MQ publishes the transaction messages to the subscribing Transaction Processor application (an ASP.Net Core application with an active subscriber for these incoming messages).
  4. The Transaction Processor handles each transaction message by:
    1. Pulling in a helluva lot of information from the 3rd Party Application and other information from a Configuration DB related to the account number in the transaction message
    2. Using the information from the previous step to validate whether the transaction can be processed normally, or has to go into a queue for manual resolution
    3. For the valid transactions, use the information from step #1 to decide how the money in the incoming transaction will be applied (routing to sub-transactions)
    4. Send the routed sub-transactions from the previous step to the 3rd Party Application through its externally facing API.

While there are some unit tests and intermediate level integration tests today on some of the subsystems, the overall official testing effort to date has relied strictly on end to end, manual testing of the entire system. Some of the emphasis on black box, end to end testing is due to our client's mandatory regulatory auditing requirements, and that can't completely go away. However, there are worlds of opportunity and a new willingness to explore alternatives like white box testing techniques or new processes for testing as a complement to the formal audit-style testing, so let's jump into some ideas for making things work more efficiently.

Some Necessary Shifts in Testing Philosophy

First off, there’s an important shift from trying to prove that the system is working perfectly with strictly black box testing to thinking about testing as a feedback mechanism to identify and remove problems in the code so that the code can be deployed to production. If you look at testing as more of a feedback cycle, you can utilize the testing pyramid idea to maximize feedback about how your system functions with more efficient testing techniques.

Secondly, I think you have to have collaboration between testers, developers, and architects to make white box testing more effective. Part of that is increasing the testability of the system architecture, and another part of that is trying to avoid duplication in effort between tests written by developers and other tests performed by the testers. Moreover, if developers are actively engaged in writing tests — and they should be in my world view — it’s very helpful to have the testers involved in the content of those developer-written tests. In other words, I think that having strict separation between testers and development can be very inefficient. I know there are folks who strongly believe that strict independence for the testers from the developers is necessary, but I think that does more harm than good.

For more information on white box vs. black box testing and improving test feedback, see my related posts on those topics.

If you buy either of the two previous paragraphs, or you're at least open-minded enough to continue, let's see how some of this Test Pyramid thinking would play out in our big integration system.

At a high level, I would want the testing strategy to focus on:

  • Some kind of Behavior Driven Development (BDD) approach for all the business rules for validating and routing the transactions.
  • Mid-level integration tests on all the code that acts as a gateway or service proxy to the 3rd Party Application. This would include both the code that sends commands to the 3rd Party Application and the code that queries or reads the 3rd Party Application.
  • Mid-level integration tests on the File Reader that probably stubs the outgoing Rabbit MQ and just measures how the File Reader parses the incoming files, writes tracking information, and what messages it publishes.
  • A handful of fully end to end tests through the entire system to prove out all the integration points — but by and large you use finer grained tests to test out business rules and the integration with the 3rd Party Application.


The Transaction Processor

Most of the meat of the bigger transaction processing project is within what I’m calling the Transaction Processor shown below in a little more detail:

(Diagram: TestAutomationScenario - Transaction Processor)

There are a couple of big responsibilities here:

  • Querying data from the 3rd Party Application, with its heinously unusable, custom XML query language, to use inside the business rules
  • Looking up some configuration parameters about accounts from a second Configuration DB
  • Carrying out validation rules against incoming transactions
  • Routing the incoming transactions into sub-transactions based on business rules
  • Posting the sub-transactions to the 3rd Party Application through its, shall we say, interesting XML API.

Channeling some Domain-Driven Design thinking here, let’s go straight into the business rules for validation and routing. The business rules required a lot of input parameters, there were a lot of permutations to build and test, and the developers new to the problem domain had plenty of misunderstandings early on about the desired behavior.

From an architectural standpoint, I think it is extremely important to completely isolate these business rules from the 3rd Party Application, the configuration databases, and even the incoming flat file format because:

  • It was very difficult to set up test scenario inputs in the 3rd Party Application
  • There’s a tremendous number of test cases because of the permutations on account state and transaction parameters involved, so there would be a large benefit to tests being quick to author and execute
  • This logic is key to the business and has already evolved significantly since this project started. It’s imperative that this logic be safe to change over time, and that happens most effectively when it’s cheap to write new tests and quick to execute the existing test coverage.
  • I probably shouldn’t say this too loudly, but I think this client should reconsider coupling their ecosystem to the 3rd Party Application

To that end, the business rules should only depend on a domain model that’s internal to the Transaction Processor. We’ll use the A-Frame Architecture idea from Jim Shore’s Testing Without Mocks paper to isolate the business rule behavior from the infrastructure. The domain model objects that implement all the business logic will have no dependency whatsoever on the external dependencies. Instead, we’ll effectively write our own mapping layer to take the data returned from the 3rd Party Application and the Configuration DB and build all the state the domain model needs, then hand that to the business logic code in the domain model.
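To make that isolation concrete, here's a small, entirely hypothetical sketch in the A-Frame spirit. Every type, property, and rule below is made up for illustration; the point is that the rule logic only ever sees in-memory state that the mapping layer assembled:

    using System;
    using Xunit;

    // State assembled by the mapping layer from the 3rd Party Application
    // and the Configuration DB -- no infrastructure types in sight
    public class AccountSnapshot
    {
        public string AccountNumber { get; set; }
        public bool IsFrozen { get; set; }
        public decimal AvailableBalance { get; set; }
    }

    public class IncomingTransaction
    {
        public string AccountNumber { get; set; }
        public decimal Amount { get; set; }
    }

    public static class ValidationRules
    {
        // Pure function: testing it is just "build two objects, call, assert"
        public static bool RequiresManualResolution(AccountSnapshot account, IncomingTransaction transaction)
        {
            return account.IsFrozen || transaction.Amount > account.AvailableBalance;
        }
    }

    public class ValidationRuleSpecs
    {
        [Fact]
        public void frozen_accounts_always_go_to_manual_resolution()
        {
            var account = new AccountSnapshot { AccountNumber = "123", IsFrozen = true };
            var transaction = new IncomingTransaction { AccountNumber = "123", Amount = 10m };

            Assert.True(ValidationRules.RequiresManualResolution(account, transaction));
        }
    }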

From the perspective of testing, there’s a lot of opportunity to get the business rules wrong. Rather than depend solely on design or requirements documents, I strongly recommend using Behavior Driven Development (BDD) techniques here to author executable specifications that are readable and reviewed (if not written) by the business domain experts and testers. What I largely recommend here is that the developers mostly write the test harness code, but business domain experts and more likely testers will own the content and meaning of the BDD specifications. Working in this manner, we should be able to treat the BDD specifications as the official tests for the business rule behavior even though this doesn’t run the entire process.

So that handles the business rules, now on to the rest of the Transaction Processor. The “controller” code in the diagram is playing a coordination role to mediate between the business rules and the code that interacts directly with the 3rd Party Application’s external API endpoints. I’d mostly use unit tests and maybe even *gasp* interaction testing with mock objects to test out the workflow and error handling of this code.

The service gateway code that interacts with the 3rd Party Application was extremely problematic in both development and testing. In retrospect, I wish we’d hammered at this code in isolation much more before even bothering trying to run end to end tests. The big issue we never pushed through (yet) was how to establish known system state in the 3rd Party Application so that we could write reliable automated tests around just the service gateway code in the Transaction Processor. I think it would be worthwhile for domain experts and/or testers to be involved in this step as well to verify the expected results are really happening in the 3rd Party Application.

Lastly, I'd opt to do some bigger tests for just the Transaction Processor where you directly enqueue the transaction messages in Rabbit MQ and test the entire Transaction Processor stack all the way down to the external dependencies. The point of these tests is to prove out the integrations and configuration. You don't try to recreate all the business rule functionality already covered by the smaller, faster unit tests.


“Some” End to End Tests

There are absolutely some issues that can only be tested through true end to end tests. Integrations, configuration, environments, and security are examples. We'll still write and perform some end to end tests, but we won't try to recreate the business functionality coverage that the finer grained tests already provide.

No matter what though, the tests need to be as easily repeatable as possible, so there's still going to be a level of automation to speed things along. Here are my thoughts on what that might look like:

  • The flat file format was originally used by mainframe applications, so as you can imagine, it’s not remotely user friendly to edit or read. I’d suggest using some custom code that can transform a much simpler format to the mainframe-friendly format so the testers can write new test cases more efficiently and everyone else can actually read and understand the test inputs
  • The undeniable, cardinal rule of automated testing is that you have to have known inputs and expected outcomes. In this system, that means being able to set up the 3rd Party Application in a known state for each end to end testing scenario. The failure to do that (not a technical impossibility, but it’s a long story) is my single biggest regret from this project. See My Opinions on Data Setup for Functional Tests for more on what I recommend for test input data.
  • Automating the testing of asynchronous workflows like this system can be very challenging. The biggest issue is making an automated test harness understand when the work is really done across multiple systems so it can proceed to the "assert" part of the standard "arrange, act, assert" test workflow. I've had some success with this in the past by making the test harness listen to the various applications' logging output, or watch for some kind of visible side effect like data being written to a database, to "know" when the work is complete (see the polling sketch after this list).
  • Tests do fail from time to time, so I’d actually try to have the end to end test harness able to gather up the relevant logs for all the systems active in the test. That’s even more valuable if you can somehow manage to correlate the logging activity with only the active test run.
  • Finally, the big, expensive end to end tests my client has to follow for official certification and auditing? Yeah, you have to do that, but my very strong recommendation (and where I think they're starting to head) is to use finer-grained and more efficient testing techniques to remove problems first. Then come back and do the laboriously slow audit tests when you can justifiably expect success with few iterations.
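As an example of the "know when the work is really done" problem from the list above, here's a generic polling helper a test harness could use. TrackingTableShowsCompletion() is a hypothetical placeholder for whatever visible side effect you decide to watch:

    using System;
    using System.Threading.Tasks;

    public static class Wait
    {
        // Polls a condition until it passes or the timeout expires
        public static async Task Until(Func<Task<bool>> condition, TimeSpan timeout, TimeSpan? pollInterval = null)
        {
            var interval = pollInterval ?? TimeSpan.FromSeconds(1);
            var deadline = DateTime.UtcNow + timeout;

            while (DateTime.UtcNow < deadline)
            {
                if (await condition())
                {
                    return;
                }

                await Task.Delay(interval);
            }

            throw new TimeoutException($"Condition was not met within {timeout}");
        }
    }

    // Usage in an end to end test, where the visible side effect is a row
    // appearing in a tracking table (TrackingTableShowsCompletion is hypothetical):
    // await Wait.Until(() => TrackingTableShowsCompletion(transactionId), TimeSpan.FromMinutes(2));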


Summary

There are a couple of big points I wanted to drive home in this post:

  • Embrace the test pyramid idea, and try to get over any aversion to white box testing, given its advantages for efficiency
  • Treat testing as a feedback mechanism more than a certification process
  • Tests of all types need to be repeatable to be effective feedback. Manual testing, and especially manual testing where it's time consuming to set up the necessary system state first, is not very repeatable
  • I think you need to embrace the Agile idea of blurring the lines between roles. Developers and architects need to be involved in the automated testing for a better chance of success. Testers may need to get their hands dirty directly in the code or at least exploit their knowledge of the coding internals in order to make the testing more efficient
  • Developers, testers, and architects need to collaborate to be truly successful in testing. Waterfall style testing where all testing happens at the end is just not the way to be successful
  • Try to avoid duplicating effort between developer written tests and the tester activity, which might be just yet another way of saying the testers and developers need to be collaborating as the project goes on
  • Feedback cycles of all kinds are valuable for quality software