Author Archives: jeremydmiller

What would it take for you to adopt Marten?

If you’re stumbling in here without any knowledge of the Marten project (or me), Marten is an open source .Net library that developers can adopt in their project to use the rock solid Postgresql database as a pretty full featured document database and event store. If you’re unfamiliar with Marten, I’d say its feature set makes it comparable to MongoDb (though the usage is significantly different), RavenDb, or Cosmos Db. On the event sourcing side of things, I think the only comparison in the .Net world is GetEventStore itself, but you can certainly piece together an event store by combining other OSS libraries and database engines.

The Marten community is working very hard on our forthcoming (and long delayed) V4.0 release. We’ve already made some big strides on the document database side of things, and now we’re deep into some significant event store improvements (this link looks best in VS Code w/ the Mermaid plugin active). At Calavista, we’re considering if and how we can build a development practice around Marten for existing and potential clients. I’ve obviously got a lot of skin in the game here as the original creator of Marten. Nothing would make me happier than Marten being even more successful and that I get to help Calavista clients use Marten in real life systems as part of my day job.

I’d really like to hear from other folks what it would really take for them to seriously consider adopting Marten. What is Marten lacking now that you would need, or what kind of community or company support options would be necessary for your shop to use Marten in projects? I’m happy to hear any and all feedback or suggestions from as many people as I can get to respond.

I’m happy to take comments here, or you can join the discussion for this topic on GitHub.

Existing Strengths

  • Marten is only a library, and at least for the document database features it’s very unobtrusive in your application code compared to many other persistence options
  • The Marten community is active and I hope you’d say that we’re welcoming to newcomers
  • By building on top of Postgresql, Marten comes with good cloud support from all the major cloud providers and plenty of existing monitoring options
  • Marten comes with many of the very real productivity advantages of a NoSQL solution, but has very strong transactional support from Postgresql itself
  • Marten’s event sourcing functionality comes “in the box” and there’s less work to do to fully incorporate event sourcing — including the all important “read side projection” support — into a .Net architecture than many other alternatives
  • Marten is part of the .Net Foundation
  • If you need commercial support for Marten, you can engage with Calavista Software.

Does any of that resonate with you? If you’ve used Marten before, is there anything missing from that list? And feel free to tell me you’re dubious about anything I’m claiming in the list above.

What’s already done or in flight

  • We made a lot of improvements to Marten’s Linq provider support. Not just in terms of expanding the querying scenarios we support, but also in improving the performance of the library across the board. I know this has been a source of trouble for many users in the past, and I’m excited about the improvements we’ve made in V4.
  • The event store functionality will get a lot more documentation — including sample applications — for V4
  • An important part of many event sourcing architectures is a background process to continuously build “projected” views of the raw events coming in. The current version of Marten has this capability, but it requires the user to do a lot of heavy architectural lifting to use it in any kind of clustered application. In V4, we’ll have an in-the-box recipe for leader election and work distribution across an application cluster in “real server applications.” The asynchronous projection support in V4 will also support multi-tenancy (finally) and we have some ideas to greatly optimize projection rebuilds without system downtime
  • Using native Postgresql sharding for scalability, especially for the event store
  • Allowing users to specify event archival rules to keep the event store tables smaller and more performant
  • Adding more integration with .Net’s generic HostBuilder and standard logging abstractions for easier integration into .Net applications
  • Improving multi-tenancy usage based on user feedback
  • Document and event store metadata capabilities like you’d need for Marten to take part in end to end Open Telemetry tracing within your architecture.
  • More sample applications. To be honest, I’m hoping to find published reference applications built with Entity Framework Core and shift them to Marten. This might be part of an effort to show Jasper as a replacement for MediatR or NServiceBus/MassTransit as well.

And again, does any of that address whatever concerns you might have about adopting Marten? Or any concerns you’d had in the past?

Other Ideas?

Here are some other ideas that have been kicked around for improving Marten usage, but these ideas would probably need to come through some sort of Marten commercialization or professional support.

  • Cloud hosting recipes. Hopefully through Calavista projects, I’d like to develop some pre-built guidance and quick recipes for standing up scalable and maintainable Marten/Postgresql environments on both Azure and AWS. This would include schema migrations, monitoring, dynamic scaling, and any necessary kind of database provisioning. I think this might get into Terraform/Pulumi infrastructure as well.
  • Cloud hosting models for parallelizing and distributing work with asynchronous event projections. Maybe even getting into dynamic scaling.
  • Multi-tenancy through separate databases for each client tenant. You can pull this off today yourself, but there are a lot of things to manage. Here I’m proposing more cloud hosting recipes for Marten/Postgresql that would include schema migrations and distributed work strategies for processing asynchronous event projections across the tenant databases.
  • Some kind of management user interface? I honestly don’t know what we’d do with that yet, but other folks have asked for something.
  • Event streaming Marten events through Kafka, Pulsar, AWS Kinesis, or Azure Event Hubs
  • Marten Outbox and Inbox approaches with messaging tools. I’ve already got this built and working with Jasper, but we could extend this to MassTransit or NServiceBus as well.

My 2021 OSS Plans (Marten, Jasper, Storyteller, and more)

I don’t know about you, but my 2020 didn’t quite go the way I planned. Among other things, my grand OSS plans really didn’t go the way I hoped. Besides the obvious issues caused by the pandemic, I was extremely busy at work most of the year on projects unrelated to any of my OSS projects and just didn’t have the energy or time to do much outside of work.

Coming into the new year though, my workload has leveled out and I’m re-charged from the holidays. Moreover, I’m going to get to use some of my OSS tools for at least one client next year and that’s helping my enthusiasm level. At the end of the day though, I still enjoy the creative aspect of my OSS work and I’m ready to get things moving again.

Here’s what I’m hoping to accomplish in 2021:

Marten

Marten V4.0 is already heavily underway with huge improvements ongoing for its Event Sourcing support. We’ve also had some significant success improving the Linq querying support and performance all around by almost doing a full re-write of Marten’s internals. There’s a lot more to do yet, but I’m hopeful that Marten V4.0 will be released in the 1st quarter of 2021.

In the slightly longer term, the Marten core team is talking about ways to possibly monetize Marten through either add on products or a services model of some sort. I’m also talking with my Calavista colleagues about how we might create service offerings around Marten (scalable cloud hosting for Marten, DevOps guidance, migration projects?).

Regardless, Marten is getting the lion’s share of my attention for the time being and I’m excited about the work we have in flight.

Jasper

Jasper is a toolkit for common messaging scenarios between .Net applications with a robust in process command runner that can be used either with or without the messaging.

After having worked on it for over half a decade, I actually did release Jasper V1.0 last year! But it was during the first awful wave of Covid-19 and it just got lost in the shuffle of everything else going on. I also didn’t promote it very much.

I’m going to change that this year and make a big push to blog about it and promote it. I think there’s a lot of possible synergy between Jasper and Marten to build out CQRS architectures on .Net.

Development wise, I’m hoping to:

  • Add end to end open telemetry tracing support
  • AsyncAPI standard support (roughly Swagger for messaging based architectures if I’m understanding things correctly)
  • Kafka & Pulsar support has been basically done for 6 months through Jarrod’s hard work.
  • Performance optimizations
  • A circuit breaker error handling option at the transport layer similar to what MassTransit just added here: https://masstransit-project.com/advanced/middleware/killswitch.html. This would have been an extremely useful feature to have had last year for a client, and I’ve wanted it in Jasper ever since
  • Jasper has an unpublished add on for building HTTP services in ASP.Net Core with a very lightweight “Endpoint” model that I’d like to finish, document, and release. It’s more or less the old FubuMVC style of HTTP handlers, but completely built on an ASP.Net Core foundation rather than its own framework.

Storyteller

Storyteller was completely dormant as a project last year, but I know I’ve got a project coming up at work next year where it could be a great fit. I had started a lot of work a couple years ago for a big V6 overhaul of Storyteller. If and when I’m able, I’d like to dust off those plans and revamp Storyteller this year, but with a twist.

Instead of fighting the overwhelming tide, I think Storyteller will finally embrace the Gherkin specification language. I think this is probably a no-brainer decision to just opt into something that lots of people already understand and common development tools like JetBrains Rider or VS Code already have first class support for Gherkin.

I still think there’s value in having the Storyteller user interface even with the Gherkin support, so I’ll be looking at an all new client that tries to take the things that worked with the huge Storyteller 3.0 re-write a few years ago and puts that in a more modern shell. The current client is a hodgepodge of very early React.js and Redux, and I’d honestly want to tackle a re-write mostly to update my own UI/UX skillset. I’m still leaning toward using the very latest React.js, but I’ve at least looked at Blazor and am sort of following MAUI.

Lamar

I’ve mostly been just keeping up with bugs, pull requests, and new .Net versions for Lamar. At some point, Lamar needs support for IAsyncDisposable. I also get plenty of questions about how to override Lamar service registrations during integration testing scenarios, which is tricky just because of the weird gyrations that go on with HostBuilder bootstrapping and external IoC containers. There is some existing functionality in Lamar that could be useful for this, but I need to document it.

I might think about cutting the existing LamarCodeGeneration and LamarCompiler projects to their own first class library status because they’re developing a life of their own independent from Lamar. LamarCodeGeneration might be helpful for authoring source generators in .Net.

The farm road you see at the edge of this picture is almost perfectly flat with almost no traffic, and you can see for miles. I may or may not have used that to see how fast I could get my first car to go as a teenager. Let’s not tell my parents about that:)

Oakton

There was just a wee bit of work to move Oakton and Oakton.AspNetCore to .Net 5.0. In the new year I think I’d like to just merge those two projects into one single library, and look at using Spectre Console to make the output of the built in environment test commands look a helluva lot spiffier and easier to read.

Alba

Alba just got .Net 5.0 support. I’ll get a chance to use Alba on a client project this year to do HTTP API testing, and we’ll see if that leads to any new work.

StructureMap

I’ll occasionally answer StructureMap questions as they come in, but that’s it. I’ll be helping one of Calavista’s clients migrate from StructureMap to Lamar in 2021, so I’ll be using it for work at least.

FubuMVC

It’s still dead as a door knob. There are plenty of bits of FubuMVC floating around in Oakton, Alba, Jasper, and Baseline though.

Planned Event Store Improvements for Marten V4, Daft Punk Edition

There’s a new podcast about Marten on the .Net Core Show that posted last week.

Marten V4 development has been heavily underway this year. To date, the work has mostly focused on the document store functionality (Linq, general performance improvements, and document metadata).

While I certainly hope the other improvements to Marten V4 will make a positive difference to our users, the big leap forward in capability is going to be on the event sourcing side of Marten. We’ve gathered a lot of user feedback on this feature set in the past couple years, but there’s always room for more discussion as things are taking shape.


The master issue for V4 event sourcing improvements is on GitHub here.

Scalability

We know there’s plenty of concern about how well Marten’s event store will scale over time. Beyond the performance improvements I’ll try to outline in the sections below, we’re planning to introduce support for:

  • Native Postgresql sharding, especially for the event store tables
  • Event archival rules to keep the active event store tables smaller and more performant

Event Metadata

Similar to the document storage, the event storage in V4 will allow users to capture additional metadata alongside the events themselves. There will be support in the event store Linq provider to query against this metadata, and this metadata will be available to the projections. Right now, the plan is to have opt-in, additional fields for:

  • Correlation Id
  • Causation Id
  • User name

Additionally, the plan is to also have a “headers” field for user defined data that does not fall into the fields listed above. Marten will capture the metadata at the session level, with the thinking being that you could opt into custom Marten session creation that would automatically apply metadata for the current HTTP request or service bus message or logical unit of work.
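Just to make that concrete, here’s a purely illustrative sketch of what that session-per-request metadata stamping could look like in application code. The factory class itself is hypothetical, and the CorrelationId / LastModifiedBy session properties reflect the planned opt-in fields described above rather than a finalized V4 API:

using Marten;
using Microsoft.AspNetCore.Http;

// Hypothetical application-level factory that stamps correlation metadata
// from the current HTTP request onto every Marten session it creates
public class CorrelatedSessionFactory
{
    private readonly IDocumentStore _store;
    private readonly IHttpContextAccessor _accessor;

    public CorrelatedSessionFactory(IDocumentStore store, IHttpContextAccessor accessor)
    {
        _store = store;
        _accessor = accessor;
    }

    public IDocumentSession OpenSession()
    {
        var session = _store.OpenSession();

        // These metadata properties are the planned, opt-in V4 additions
        // described above; the exact names may change before release
        var context = _accessor.HttpContext;
        session.CorrelationId = context?.TraceIdentifier;
        session.LastModifiedBy = context?.User?.Identity?.Name;

        return session;
    }
}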

There’ll be a follow up post on this soon.

Event Capture Improvements

When events are appended to event streams, we’re planning some small improvements for V4.

Projections, Projections, Projections!

This work is heavily in flight, so please shoot any feedback you might have our (Marten team’s) way.

Building your own event store is actually pretty easy — until the time you want to actually do something with the events you’ve captured or keep a “read-side” view of the status up to date with the incoming events. Based on a couple years of user feedback, all of that is exactly where Marten needs to grow up the most.

The master issue tracking the projection improvements is here. The Marten community (mostly me to be honest) has gone back and forth quite a bit on the shape of the new projection work and nothing I say here is set in stone. The main goals are to:

  • Significantly improve performance and throughput. We’re doing this partially by reducing in memory object allocations, but mostly by introducing much, much more parallelization of the projection work in the async daemon.
  • Simplify the usage of immutable data structures as the projected documents (note that we have plenty of F# users, and now C# record types make that a lot easier too).
  • Introduce snapshotting
  • Supplement the existing ViewProjection mechanism with conventional methods similar to the .Net Startup class
  • Completely gut the existing ViewProjection to improve its performance while hopefully avoiding breaking API compatibility

There is some thought about breaking the projection support into its own project or making the event sourcing support be storage-agnostic, but I’m not sure about that making it to V4. My personal focus is on performance and scalability, and way too many of the possible optimizations seem to require coupling to details of Marten’s existing storage.

“Async Daemon”

The Async Daemon is an under-documented Marten subsystem we use to process asynchronously built event projections and do projection rebuilds. While it’s “functional” today, it has a lot of shortcomings (it can only run on one node at a time, and we don’t have any kind of leader election or failover) that prevent most folks from adopting it.

The master issue for the Async Daemon V4 is here, but the tl;dr is:

  • Make sure there’s adequate documentation (duh.)
  • Should be easy to integrate in your application
  • Has to be able to run in an application cluster in such a way that it guarantees that every projected view (or slice of a projected view) is being updated on exactly one node at a time
  • Improved performance and throughput of normal projection building
  • No downtime projection rebuilds
  • Way, way faster projection rebuilds

Now, to the changes coming in V4. Let’s assume that you’re doing “serious” work and needing to host your Marten-using .Net Core application across multiple nodes via some sort of cloud hosting. With minimal configuration, you’d like to have the asynchronous projection building “just work” across your cluster.

Here’s a visual representation of my personal “vision” for the async daemon in V4:

In V4 the async daemon will become a .Net Core BackgroundService that will be registered by the AddMarten() integration with HostBuilder. That mechanism will allow us to run background work inside of your .Net Core application.

Inside that background process the async daemon is going to have to elect a single “leader/distributor” agent that can only run on one node. That leader/distributor agent will be responsible for assigning work to the async daemon running inside all the active nodes in the application. What we’re hoping to do is to distribute and parallelize the projection building across running nodes. And oh yeah, do this without having to need any other kind of infrastructure besides the Postgresql database.

Within a single node, we’re adding a lot more parallelization to the projection building instead of treating everything as a dumb “left fold” single threaded queue problem. I’m optimistic that that’s going to make a huge difference for throughput. On top of that, I’m hoping that the new async daemon will be able to split work between different nodes without the nodes stepping on each other.

There’s still plenty of details to work out, and this post is just meant to be a window into some of the work that is happening within Marten for our big V4 release sometime in 2021.

Marten V4 Preview: Command Line Administration

TL;DR — It’s going to be much simpler in V4 to incorporate Marten’s command line administration tools into your .Net Core application.

In my last post I started to lay out some of the improvements in the forthcoming Marten V4 release with our first alpha Nuget release. In this post, I’m going to show the improvements to Marten’s command line package that can be used for some important database administration and schema migrations.

Unlike ORM tools like Entity Framework (it’s a huge pet peeve of mine when people describe Marten as an ORM), Marten by and large tries to allow you to be as productive as possible by keeping your focus on your application code instead of having to spend much energy and time on the details of your database schema. At development time you can just have Marten use its AutoCreate.All mode and it’ll quietly do anything it needs to do with your Postgresql database to make the document storage work at runtime.
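For development time, that can be as simple as the following sketch (the connection string is just an example, and this uses Marten’s existing DocumentStore.For() bootstrapping):

using Marten;

// A minimal development-time configuration: let Marten create or patch
// any missing tables and functions on the fly as documents are used
var store = DocumentStore.For(opts =>
{
    opts.Connection("Host=localhost;Database=postgres;Username=postgres;Password=postgres");
    opts.AutoCreateSchemaObjects = AutoCreate.All;
});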

For real production though, it’s likely that you’ll want to explicitly control when database schema changes happen. It’s also likely that you won’t want your application to have permissions to change the underlying database schema on the fly. To that end, Marten has quite a bit of functionality to export database schema updates for formal database migrations.

We’ve long supported an add on package called Marten.CommandLine that lets you build your own command line tool to help manage these schema updates, but to date it’s required you to build a separate console application parallel to your application and has probably not been that useful to most folks.

In V4 though, we’re exploiting the Oakton.AspNetCore library that allows you to embed command line utilities directly into your .Net Core application. Let’s make that concrete with a small sample application in Marten’s GitHub repository.

Before I dive into that code: Marten v3.12 added a built in integration with the .Net Core generic HostBuilder that we’re going to depend on here. Using the HostBuilder for configuring and bootstrapping Marten into your application allows you to use the exact same Marten configuration and application configuration in the Marten command utilities without any additional work.

This sample application was built with the standard dotnet new webapi template. On top of that, I added a reference to the Marten.CommandLine library.

.Net Core applications tend to be configured and bootstrapped by a combination of a Program.Main() method and a Startup class. First, here’s the Program.Main() method from the sample application:

public class Program
{
    // It's actually important to return Task<int>
    // so that the application commands can communicate
    // success or failure
    public static Task<int> Main(string[] args)
    {
        return CreateHostBuilder(args)

            // This line replaces Build().Start()
            // in most dotnet new templates
            .RunOaktonCommands(args);
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}

Note the signature of the Main() method and how it uses the RunOaktonCommands() method to intercept the command line arguments and execute named commands (with the default being to just run the application like normal).

Now, the Startup.ConfigureServices() method with Marten added in is this:

public void ConfigureServices(IServiceCollection services)
{
    // This is the absolute, simplest way to integrate Marten into your
    // .Net Core application with Marten's default configuration
    services.AddMarten(Configuration.GetConnectionString("Marten"));
}

Now, to the actual command line. As long as the Marten.CommandLine assembly is referenced by your application, you should see the additional Marten commands. From your project’s root directory, run dotnet run -- help and we see there’s some additional Marten-related options:

Oakton command line options with Marten.CommandLine in play

And that’s it. Now you can use dotnet run -- dump to export out all the SQL to recreate the Marten database schema, or maybe dotnet run -- patch upgrade_staging.sql --e Staging to create a SQL patch file that would make any necessary changes to upgrade your staging database to reflect the current Marten configuration (assuming that you’ve got an appsettings.Staging.json file with the right connection string pointing to your staging Postgresql server).

Check out the Marten.CommandLine documentation for more information on what it can do, but expect some V4 improvements to that as well.

Marten V4 Preview: Linq and Performance

Marten is an open source library for .Net that allows developers to treat the robust Postgresql database as a full featured and transactional document database (NoSQL) as well as supporting the event sourcing pattern of application persistence.

After a false start last summer, development on the long awaited and delayed Marten V4.0 release is heavily in flight and we’re making a lot of progress. The major focus of the remaining work is improving the event store functionality (that I’ll try to blog about later in the week if I can). We posted the first Marten V4 alpha on Friday for early adopters — or folks that need Linq provider fixes ASAP! — to pull down and start trying out. So far the limited feedback suggests it’s a nearly seamless upgrade.

You can track the work and direction of things through the GitHub issues that are already done and the ones that are still planned.

For today though, I’d like to focus on what’s been done so far in V4 in terms of making Marten simply better and faster at its existing feature set.

Being Faster by Doing Less

One of the challenging things about Marten’s feature set is the unique permutations of what exactly happens when you store, delete, or load documents to and from the database. For example, some documents may or may not be:

  • Stored with multi-tenancy
  • Soft-deleted rather than deleted outright
  • Configured to track additional document metadata

On top of that, Marten supports a couple different flavors of document sessions:

  • Query-only sessions that are strictly read only querying
  • The normal session that supports an internal identity map functionality that caches previously loaded documents
  • Automatic dirty checking sessions that are the heaviest Marten sessions
  • “Lightweight” sessions that don’t use any kind of identity map caching or automatic dirty checking for faster performance and better memory usage — at the cost of a little more developer written code.
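For reference, here’s roughly how each of those session flavors is created from the document store today (a quick sketch against the existing API, where store is your application’s IDocumentStore):

// Read only querying, no identity map or unit of work tracking
using (var query = store.QuerySession())
{
}

// The "normal" session with identity map caching of loaded documents
using (var session = store.OpenSession())
{
}

// Identity map plus automatic dirty checking when SaveChanges() is called
using (var dirty = store.DirtyTrackedSession())
{
}

// No identity map or dirty checking; you explicitly Store() what changed
using (var lightweight = store.LightweightSession())
{
}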

The point here is that there’s a lot of variability in what exactly happens when you save, load, or delete a document with Marten. In the current version, Marten uses a combination of runtime if/then logic, some “Nullo” classes, and a little bit of “Expression to Lambda” runtime compilation.

For V4, I completely re-wired the internals to use C# code generated and compiled at runtime using Roslyn’s runtime compilation capabilities. Marten is using the LamarCompiler and LamarCodeGeneration libraries as helpers. You can see these two libraries and this technique in action in a talk I gave at NDC London in 2019.
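To illustrate the underlying technique, here’s a bare-bones example of compiling a C# source string into a usable type at runtime with Roslyn. This is not Marten’s actual generated code or the LamarCompiler API, just a minimal sketch of the idea:

using System;
using System.IO;
using System.Linq;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

public static class RuntimeCodeGenDemo
{
    public static object BuildGeneratedInstance()
    {
        // The "generated" source code, which Marten builds up from the
        // document configuration instead of hard coding like this
        var source = @"
public class GeneratedGreeter
{
    public string Greet(string name) => ""Hello, "" + name;
}";

        var syntaxTree = CSharpSyntaxTree.ParseText(source);

        // Reference the assemblies the generated code needs to compile
        var references = new[]
        {
            MetadataReference.CreateFromFile(typeof(object).Assembly.Location),
            MetadataReference.CreateFromFile(Assembly.Load("System.Runtime").Location)
        };

        var compilation = CSharpCompilation.Create(
            "GeneratedAssembly",
            new[] { syntaxTree },
            references,
            new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

        // Emit the compiled assembly to memory, then load and build the type
        using var stream = new MemoryStream();
        var result = compilation.Emit(stream);
        if (!result.Success)
        {
            throw new InvalidOperationException(string.Join(Environment.NewLine,
                result.Diagnostics.Select(d => d.ToString())));
        }

        var assembly = Assembly.Load(stream.ToArray());
        var type = assembly.GetType("GeneratedGreeter");
        return Activator.CreateInstance(type);
    }
}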

The end result of all this work is that we can generate the tightest possible C# handling code and the tightest possible SQL for the exact permutation of document storage characteristics and session type. Along the way, we’ve striven to reduce the number of dictionary lookups, runtime branching logic, empty Nullo objects, and generally the number of computer instructions that would have to be executed by the underlying processor just to save, load, or delete a document.

So far, so good. It’s hard to say exactly how much this is going to impact any given Marten-using application, but the existing test suite clearly runs faster now and I’m not seeing any noticeable issue with the “cold start” of the initial, one time code generation and compilation (that was a big issue in early Roslyn to the point where we ripped that out of pre 1.0 Marten, but seems to be solved now).

If anyone is curious, I’d be happy to write a blog post diving into the guts of how that works. And why the new .Net source generator feature wouldn’t work in this case if anyone wants to know about that too.

Linq Provider Almost-Rewrite

To be honest, I think Marten’s existing Linq provider (pre-V4) is pretty well stuck at the original proof of concept stage thrown together 4-5 years ago. The open issues where folks had hit limitations in the Linq provider support kept piling up — especially with anything involving child collections on document types.

For V4, we’ve heavily restructured the Linq parsing and SQL generation code to address the previous shortcomings. There’s a little bit of improvement in the performance of Linq parsing and also a little bit of optimization of the SQL generated by avoiding unnecessary CASTs. Most of the improvement has been toward addressing previously unsupported scenarios. A potential improvement that we haven’t yet exploited much is to make the SQL generation and Linq parsing more able to support custom value types and F#-isms like discriminated unions through a new extensibility mechanism that teaches Marten about how these types are represented in the serialized JSON storage.

Querying Descendent Collections

Marten pre-V4 didn’t handle querying through child collections very well and that’s been a common source of user issues. With V4, we’re heavily using the Common Table Expression query support in Postgresql behind the scenes to make Linq queries like this one shown below possible:

var results = theSession.Query<Top>()
    .Where(x => x.Middles.Any(b => b.Bottoms.Any()))
    .ToList();

I think that at this point Marten can handle any combination of querying through child collections through any number of levels with all possible query operators (Any() / Count()) and any supported Where() fragment within the child collection.

Multi-Document Includes

Marten has long had some functionality for fetching related documents together in one database round trip for more efficient document reading. A long time limitation in Marten is that this Include() capability was only usable for logical “one to one” or “many to one” document relationships. In V4, you can now use Include() querying for “one to many” relationships as shown below:

[Fact]
public void include_many_to_list()
{
    var user1 = new User { };
    var user2 = new User { };
    var user3 = new User { };
    var user4 = new User { };
    var user5 = new User { };
    var user6 = new User { };
    var user7 = new User { };

    theStore.BulkInsert(new User[]{user1, user2, user3, user4, user5, user6, user7});

    var group1 = new Group
    {
        Name = "Odds",
        Users = new []{user1.Id, user3.Id, user5.Id, user7.Id}
    };

    var group2 = new Group {Name = "Evens", Users = new[] {user2.Id, user4.Id, user6.Id}};

    using (var session = theStore.OpenSession())
    {
        session.Store(group1, group2);
        session.SaveChanges();
    }

    using (var query = theStore.QuerySession())
    {
        var list = new List<User>();

        query.Query<Group>()
            .Include(x => x.Users, list)
            .Where(x => x.Name == "Odds")
            .ToList()
            .Single()
            .Name.ShouldBe("Odds");

        list.Count.ShouldBe(4);
        list.Any(x => x.Id == user1.Id).ShouldBeTrue();
        list.Any(x => x.Id == user3.Id).ShouldBeTrue();
        list.Any(x => x.Id == user5.Id).ShouldBeTrue();
        list.Any(x => x.Id == user7.Id).ShouldBeTrue();
    }
}

This was a longstanding request from users, and to be honest, we had to completely rewrite the Include() internals to add this support. Again, we used Common Table Expression SQL statements in combination with per session temporary tables to pull this off.

Compiled Queries Actually Work

I think the Compiled Query feature is unique in Marten. It’s probably easiest and best to think of it as a “stored procedure” for Linq queries in Marten. The value of a compiled query in Marten is:

  1. It potentially cleans up the application code that has to interact with Marten queries, especially for more complex queries
  2. It’s potentially some reuse for commonly executed queries
  3. Mostly though, it’s a significant performance improvement because it allows Marten to “remember” the Linq query plan.

While compiled queries have been supported since Marten 1.0, there’s been a sizable gap between what works in Marten’s Linq support and what functions correctly inside of compiled queries. With the advent of V4, the compiled query planning was rewritten with a new strategy that so far seems to support all of the Linq capabilities of Marten. We think this will make the compiled query feature much more useful going forward.

Here’s an example compiled query that was not possible before V4:

public class FunnyTargetQuery : ICompiledListQuery<Target>
{
    public Expression<Func<IMartenQueryable<Target>, IEnumerable<Target>>> QueryIs()
    {
        return q => q
            .Where(x => x.Flag && x.NumberArray.Contains(Number));
    }

    public int Number { get; set; }
}

And in usage:

var actuals = session.Query(new FunnyTargetQuery{Number = 5}).ToArray();

Multi-Level SelectMany because why not?

Marten has long supported the SelectMany() operator in its Linq provider, but in V4 it’s much more robust with the ability to chain SelectMany() clauses n-deep and do that in combination with any kind of Count() / Distinct() / Where() / OrderBy() Linq clauses. Here’s an example:

[Fact]
public void select_many_2_deep()
{
    var group1 = new TargetGroup
    {
        Targets = Target.GenerateRandomData(25).ToArray()
    };

    var group2 = new TargetGroup
    {
        Targets = Target.GenerateRandomData(25).ToArray()
    };

    var group3 = new TargetGroup
    {
        Targets = Target.GenerateRandomData(25).ToArray()
    };

    var groups = new[] {group1, group2, group3};

    using (var session = theStore.LightweightSession())
    {
        session.Store(groups);
        session.SaveChanges();
    }

    using var query = theStore.QuerySession();

    var loaded = query.Query<TargetGroup>()
        .SelectMany(x => x.Targets)
        .Where(x => x.Color == Colors.Blue)
        .SelectMany(x => x.Children)
        .OrderBy(x => x.Number)
        .ToArray()
        .Select(x => x.Id).ToArray();

    var expected = groups
        .SelectMany(x => x.Targets)
        .Where(x => x.Color == Colors.Blue)
        .SelectMany(x => x.Children)
        .OrderBy(x => x.Number)
        .ToArray()
        .Select(x => x.Id).ToArray();

    loaded.ShouldBe(expected);
}

Again, we pulled that off with Common Table Expression statements.

In tepid defense of…

Hey all, I’ve been swamped at work and haven’t had any bandwidth or energy for blogging, but I’ve actually been working up ideas for a new blog series. I’m going to call it “In tepid defense of [XYZ]”, where XYZ is some kind of software development tool or technique that’s:

  • Gotten a bad name from folks overusing it, or using it in some kind of dogmatic way that isn’t useful
  • Is disparaged by a certain type of elitist, hipster developer
  • Might still have some significant value if used judiciously

My list of topics so far is:

  • IoC Containers — I’m going to focus on where, when, and how they’re still useful — but with a huge dose of what I think are the keys to using them successfully in real projects. Which is more or less gonna amount to using them very simply and not making them do too much weird runtime switcheroo.
  • S.O.L.I.D. — Talking about the principles as a heuristic to think through designing code internals, but most definitely not throwing this out there as any kind of hard and fast programming laws. This will be completely divorced from any discussion about you know who.
  • UML — I’m honestly using UML more now than I had been for years and it’s worth reevaluating UML diagramming after years of the backlash to silly things like “Executable UML”
  • Don’t Repeat Yourself (DRY) — I think folks bash this instead of thinking more about when and how they eliminate duplication in their code without going into some kind of really harmful architecture astronaut mode

I probably don’t have the energy or guts to tackle OOP in general or design patterns in specific, but we’ll see.

Anything interesting to anybody?

Just Finished a Not Really Awesome Project, Here’s What I Learned

To be as clear as possible, I’m speaking only for myself and any views or opinions expressed here do not represent my employer whatsoever.

Years ago I wrote a post called The Surprisingly Valuable and Lasting Lessons I Learned from a Horrible Project that’s exactly what it sounds like. I was on a genuinely awful project rife with Dilbertesque elements. As usual though, horrible experiences can lead to some very valuable lessons. From that terrible project, I learned plenty about team communication, Agile processes, and doing software design within an Agile process (XP in this case).

If you’ve followed my twitter feed the past couple years, you know that I have routinely expressed some frustrations working within a long running waterfall project. I finally rolled off that project this past Friday after 2+ years, and I have some thoughts. I definitely appreciate some of the personal relationships that came out of the project, but I’m not leaving feeling very satisfied by how the project went overall. That makes it time to reflect and figure out what actionable lessons I can take from the project for future endeavors.

Because it’s an easy writing crutch, let’s go to the good, the bad, and the ugly of the project and what I (re)learned this time around:

The Good

Let’s start with some positives. I’ve been a big believer in capturing business rule requirements as “executable specifications” (acceptance test driven development or some folks’ version of behavior driven development) for years. While we weren’t allowed by the client to use my preferred tooling (grumble) to do that and had to write some custom tooling, we had some genuine success with executable specifications as requirements. Think “if we have these exact inputs and run the business rules, these are the exact validation errors and/or transactions that should be triggered next.” Our client had some very involved business rules that were driven by even more complex databases, and having the client review and adjust our acceptance tests written as examples instead of just sticking with vaguely worded Word documents made a hugely positive difference for us.

Automating all deployments through Azure DevOps was a big win, especially when we finally got folks outside of our organization to stop deploying directly from Visual Studio.Net. It’s vitally important to have good traceability from the binaries deployed in testing or production to the revision of the code in source control. I learned that way back when in my previous “worst project ever”, and we re-learned that in this project. The one aspect of this project that was an undeniable success was introducing continuous integration and some continuous delivery to our customer.

The Bad

Developers that do not communicate or interact well with other developers simply cannot be placed in important positions in integration projects regardless of their technical ability or domain knowledge.

We started adding some environment tests (with an ancient, but still working feature in StructureMap!) into our deployment pipelines about halfway through, but I wish we’d gone much farther. The overall technical ecosystem was not perfectly reliable and there were dependencies that still had manual deployments, so it became very important to have self-diagnosing deployments that could tell you quickly when things like database connectivity, configuration for external dependencies, or expected network shares were broken or unreachable.

I was on a call this morning for a new project just getting off the ground and wanted to give a giant high five through Zoom to our DevOps architect when he showed how he was planning to build environment tests directly into this new project’s CD pipeline.

Conway’s Law is not evil per se, but it does exist and you absolutely need to be aware of it when you lay out both your desired architecture and organizational structure. We were absolutely screwed over by Conway’s Law on this project, and the consequence to the customer is a system that has performance problems and is harder to troubleshoot and support than it needed to be because the service boundaries fell in problematic ways based on the organizational model rather than on what would have made sense from a technical or even problem domain perspective.

I wish we’d engaged much earlier with the client’s operations team. It was waterfall after all, so they only expected to get support and troubleshooting documentation near the end of the project. That said, while I still think our basic architecture was appropriate from a technical perspective, it didn’t at all fit into the operation team’s comfort zone and the customer’s existing infrastructure for application support. Either we could have worked with them to make them comfortable with alternative tooling for operations monitoring much earlier in the project, or we could have bitten the bullet and made the systems act much more like the batch driven mainframe tools they were used to.

The error handling we designed into our asynchronous support was heavily based off of Jasper’s existing error handling, which in turn grew out of my experiences in my previous company where we dealt with large volumes and frequent transient errors. In this ecosystem, our problems were really more systematic when a downstream system would be either completely down or mis-configured so that every interaction with it failed. In this case, we really needed a circuit breaker strategy for the error handling inside the message handling code. The main lesson here is to be careful you aren’t trying to fight the last war.

I wish there had been an overall architect over all the elements of this large initiative (preferably me). I only had a window into the specific elements my teams were building out early on, and didn’t get to see the bigger picture until much later — assuming that I ever did. Everyone else was in the same boat, and I felt like we were all the proverbial blind men trying to describe an elephant by feel.

The Ugly

It’s hard to describe, but my single biggest regret on this project was in not pushing much harder to create an effective automated testing strategy against our integration with an extremely problematic 3rd party system. Having to depend so much on very slow, laborious manual testing against this 3rd party system at the center of everything was the bottleneck of all our efforts in my opinion. Most of our technical risk and production issues have been related to this 3rd party system. A fully automated test suite might have allowed us to iterate much faster and find & remove the problems we found in the integration.

When you have the choice, don’t write custom infrastructure when there are viable, commonly used, off the shelf components. The senior management at the beginning of this project were very apprehensive of using any kind of new infrastructure, especially open source tools.

Since this was primarily an integration project, asynchronous messaging should have been a very good fit for the project. We wrote a tiny shared library for managing asynchronous communication between applications using Rabbit MQ as the underlying transport, but with some hooks for an easy move later to Azure Service Bus. That tiny library had to continuously evolve into something much larger later as the use cases became more complex to the point where I felt like it was dominating our workload.

I won’t even say “in retrospect” because we knew this full well from day one, but the project would have gone better if we’d been able to use an off the shelf toolset like MassTransit, NServiceBus, or my own Jasper framework for the messaging support. I wish that I’d made a much bigger push at the time to build out a much more robust messaging foundation, but I felt lucky at the time just to get Rabbit MQ approved.

At the end we actually had a consensus agreement to rip out our custom messaging library and replace that with MassTransit, but the clock ran out on us. If and when the customer is able to do that themselves, I think they’ll have much more robust error handling and instrumentation that should result in more successful daily operations.

There were more egregious “NIH” violations than what I described above, but I’m only going to deal with issues where I had some level of control or influence.

The waterfall software process on this project was just as problematic as it ever was. We had to spend a lot of energy upfront on intermediate deliverables that didn’t add much value, but the killer as usual with waterfall was how damn slow the feedback cycles were and not doing any substantial integration testing until very late in the project. I’m aware that many of you reading this will have very negative opinions and experiences with Agile (I blame Scrum though), but Agile done reasonably well means having rapid and early feedback cycles to find and fix problems quickly.

Shared databases are a common scourge of enterprise architectures. Dating all the way back to my 2005 (!) post Overthrowing the Tyranny of the Shared Database, sharing databases between applications has been a massive pet peeve of mine. Hell, tilting at the shared database windmill at my previous company contributed a little bit to me leaving. At the very least, make sure the %^$&$^&%ing shared database structure is completely described in source control somewhere and fully scripted out so any developer can quickly spin up an up to date copy of that database for testing as needed. If you depend on manual database changes independent of the application development around the shared database, you need to expect a great deal of friction and production problems related to your shared database.

One more time with feeling for my longtime readers:

Sharing a database between applications is like drug users sharing needles

Things to research for later

The big takeaways from me on this project are to add some additional error handling and distributed tracing approaches to my integration project tool belt. As soon as I get a chance, I’m doing a deeper dive into the OpenTelemetry specification with a thought toward adding direct support in Jasper and maybe Marten as a learning experience. I’m also going to add some circuit breaker support directly into Jasper.

For any of you who are huge fans of Stephen King’s Dark Tower novels, you know that King modeled Roland on Clint Eastwood’s character from the spaghetti westerns, but living inside of a Lord of the Rings style epic tale. I think Idris Elba would have been awesome as Roland in the Dark Tower movie if they hadn’t changed the story and the character so much from the books. Grrr.

Calling Generic Methods from Non-Generic Code in .Net

Somewhat often (or at least it feels that way this week) I’ll run into the need to call a method with a generic type argument from code that isn’t generic. To make that concrete, here’s an example from Marten. The main IDocumentSession service has a method called Store() that directs Marten to persist one or more documents of the same type. That method has this signature:

void Store<T>(params T[] entities);

That method would typically be used like this:

using (var session = store.OpenSession())
{
    // The generic type argument for "Team" is inferred from the usage
    session.Store(new Team { Name = "Warriors" });
    session.Store(new Team { Name = "Spurs" });
    session.Store(new Team { Name = "Thunder" });

    session.SaveChanges();
}

Great, and easy enough (I hope), but Marten also has this method where folks can add a heterogeneous mix of any kind of document types all at once:

void StoreObjects(IEnumerable<object> documents);

Internally, that method groups the documents by type, then delegates to the proper Store<T>() method for each document type — and that’s where this post comes into play.

(Re-)Introducing Baseline

Baseline is a library available on Nuget that provides oodles of little helper extension methods on common .Net types and very basic utilities that I use in almost all my projects, both OSS and at work. Baseline is an improved subset of what was long ago FubuCore (FubuCore was huge, and it also spawned Oakton), but somewhat adapted to .Net Core.

I wanted to call this library “spackle” because it fills in usability gaps in the .Net base class library, but Jason Bock beat me to it with his Spackle library of extension methods. Since I expected this library to be used as a foundational piece from within basically all the projects in the JasperFx suite, I chose the name “Baseline” which I thought conveniently enough described its purpose and also because there’s an important throughway near the titular Jasper called “Baseline”. I don’t know for sure that it’s the basis for the name, but the Battle of Carthage in the very early days of the US Civil War started where this road is today.

Crossing the Non-Generic to Generic Divide with Baseline

Back to the Marten StoreObjects(object[]) calling Store<T>(T[]) problem. Baseline has a helper extension method called CloseAndBuildAs<T>() that I frequently use to solve this problem. It’s unfortunately a little tedious, but first design a non-generic interface that will wrap the calls to Store<T>() like this:

internal interface IHandler
{
    void Store(IDocumentSession session, IEnumerable<object> objects);
}

And a concrete, open generic type that implements IHandler:

internal class Handler<T>: IHandler
{
    public void Store(IDocumentSession session, IEnumerable<object> objects)
    {
        // Delegate to the Store<T>() method
        session.Store(objects.OfType<T>().ToArray());
    }
}

Now, the StoreObjects() method looks like this:

public void StoreObjects(IEnumerable<object> documents)
{
    assertNotDisposed();

    var documentsGroupedByType = documents
        .Where(x => x != null)
        .GroupBy(x => x.GetType());

    foreach (var group in documentsGroupedByType)
    {
        // Build the right handler for the group type
        var handler = typeof(Handler<>).CloseAndBuildAs<IHandler>(group.Key);
        handler.Store(this, group);
    }
}

The CloseAndBuildAs<T>() method above does a couple things behind the scenes:

  1. It creates a closed type for the proper Handler<T> based on the type arguments passed into the method
  2. Uses Activator.CreateInstance() to build the concrete type
  3. Casts that object to the interface supplied as a generic argument to the CloseAndBuildAs<T>() method

The method shown above is here in GitHub. It’s not shown, but there are some extra overloads to also pass in constructor arguments to the concrete types being built.
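If you’re curious, the core of that helper is just a little bit of reflection. Here’s a simplified version of the idea (not the exact Baseline source):

using System;

public static class OpenTypeExtensions
{
    // Simplified sketch of CloseAndBuildAs<T>(): close the open generic type
    // with the given type arguments, build it with Activator, and cast it
    // to the expected interface
    public static T CloseAndBuildAs<T>(this Type openType, params Type[] parameterTypes)
    {
        var closedType = openType.MakeGenericType(parameterTypes);
        return (T) Activator.CreateInstance(closedType);
    }
}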

Marten Quickstart with .Net Core HostBuilder

The Marten Community just released Marten 3.12 with a mix of new functionality and plenty of bug fixes this past week. In particular, we added some new extension methods directly into Marten for integration into .Net Core applications that are bootstrapped by the new generic host builder in .Net Core 3.*.

There’s a new runnable, sample project in GitHub called AspNetCoreWebAPIWithMarten that contains all the code from this blog post.

For a small sample ASP.Net Core web service project using Marten’s new integration, let’s start a new web service project with the dotnet new webapi template. Doing this gives you some familiar files that we’re going to edit in a bit:

  1. appsettings.json — standard configuration file for .Net Core
  2. Program.cs — has the main command line entry point for .Net Core applications. We aren’t actually going to touch this right now, but there will be some command line improvements to Marten v4.0 soon that will add some important development lifecycle utilities that will require a 2-3 line change to this file. Soon.
  3. Startup.cs — the convention based Startup class that holds most of the configuration and bootstrapping for a .Net Core application.

Marten does sit on top of Postgresql, so let’s add a docker-compose.yml file to the codebase for our local development database server like this one:

version: '3'
services:
  postgresql:
    image: "clkao/postgres-plv8:latest"
    ports:
      - "5433:5432"

At the command line, run docker-compose up -d to start up your new Postgresql database in Docker.

Next, we’ll add a reference to the latest Marten Nuget to the main project. In the appsettings.json file, I’ll add the connection string to the Postgresql container we defined above:

"ConnectionStrings": {
  "Marten": "Host=localhost;Port=5433;Database=postgres;Username=postgres;password=postgres"
}

Finally, let’s go to the Startup.ConfigureServices() method and add this code to register Marten services:

public class Startup
{
    // We need the application configuration to get
    // our connection string to Marten
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // This line registers Marten services with all default
        // Marten settings
        var connectionString = Configuration.GetConnectionString("Marten");
        services.AddMarten(connectionString);

        // And some other stuff we don't care about in this post...
    }
}

And that’s it, we’ve got Marten configured in its most basic “getting started” usage with these services in our application’s IoC container:

  1. IDocumentStore with a Singleton lifetime. The document store can be used to create sessions, query the configuration of Marten, generate schema migrations, and do bulk inserts.
  2. IDocumentSession with a Scoped lifetime for all read and write operations. By default, this is done with the IDocumentStore.OpenSession() method and the session created will have the identity map behavior
  3. IQuerySession with a Scoped lifetime for all read operations against the document store.

Now, let’s build out a very rudimentary issue tracking web service to capture and persist new issues as well as allow us to query for existing, open issues:

public class IssueController : ControllerBase
{
    // This endpoint captures, creates, and persists
    // new Issue entities with Marten
    [HttpPost("/issue")]
    public async Task<IssueCreated> PostIssue(
        [FromBody] CreateIssue command,
        [FromServices] IDocumentSession session)
    {
        var issue = new Issue
        {
            Title = command.Title,
            Description = command.Description
        };

        // Register the new Issue entity
        // with Marten
        session.Store(issue);

        // Commit all pending changes
        // to the underlying database
        await session.SaveChangesAsync();

        return new IssueCreated
        {
            IssueId = issue.Id
        };
    }

    [HttpGet("/issues/{status}/")]
    public Task<IReadOnlyList<IssueTitle>> Issues(
        [FromServices] IQuerySession session,
        IssueStatus status)
    {
        // Query Marten's underlying database with Linq
        // for all the issues with the given status
        // and return an array of issue titles and ids
        return session.Query<Issue>()
            .Where(x => x.Status == status)
            .Select(x => new IssueTitle {Title = x.Title, Id = x.Id})
            .ToListAsync();
    }

    // Return only new issues
    [HttpGet("/issues/new")]
    public Task<IReadOnlyList<IssueTitle>> NewIssues(
        [FromServices] IQuerySession session)
    {
        return Issues(session, IssueStatus.New);
    }
}

And that is actually a completely working little application. In its default settings, Marten will create any necessary database tables on the fly, so we didn’t have to worry about much database setup other than having the new Postgresql database started in a Docker container. Likewise, we didn’t have to subclass any kind of Marten service like you would with Entity Framework Core and we most certainly didn’t have to create any kind of Object/Relational Mapping just to get going. Additionally, we didn’t have to care too awfully much about how to get the right Marten services integrated into our application’s IoC container with the right scoping because that’s all handled for us with the AddMarten() extension method that comes out of the box with Marten 3.12 and greater.

In a follow up post later this week, I’ll show you how to customize the Marten service registrations for per-HTTP request multi-tenancy and traceable, contextual logging of Marten-related SQL execution.

The Fundamentals of Continuous Software Design

Continuing my recent theme of remembering why we originally thought Agile was a good thing before it devolved into whatever it is now.

I had the opportunity over the weekend to speak online as part of CouchCon Live. My topic was to revisit some of the principles of designing software inside of an adaptive Agile Software process in a talk entitled “The Fundamentals of Continuous Software Design.”

The video has posted on YouTube, and the slides are available on SlideShare.

I went back through the Agile greatest hits with:

  • YAGNI
  • Do the Simplest Thing that Could Possibly Work
  • The Last Responsible Moment
  • Reversibility in Software Architecture
  • Designing for Testability
  • How the full development team should be involved throughout
  • And why I think contemporary Scrum is the Scrappy Doo of Agile Software Development