Category Archives: Uncategorized

What would it take for you to adopt Marten?

If you’re stumbling in here without any knowledge of the Marten project (or me), Marten is an open source .Net library that developers can adopt to use the rock-solid Postgresql database as a fairly full-featured document database and event store. If you’re unfamiliar with Marten, I’d say its feature set makes it similar to MongoDb (though the usage is significantly different), RavenDb, or Cosmos Db. On the event sourcing side of things, I think the only direct comparison in the .Net world is GetEventStore itself, though you can certainly piece together an event store by combining other OSS libraries and database engines.
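
As a taste of what that looks like in practice, here’s a minimal sketch of storing a document with Marten. The User type and the connection string are just placeholders for this example:

```csharp
using System;
using Marten;

// Bootstrap a document store against Postgresql;
// the connection string here is only a placeholder
var store = DocumentStore.For("host=localhost;database=marten;username=postgres;password=postgres");

// A session is Marten's unit of work
using (var session = store.LightweightSession())
{
    session.Store(new User { Id = Guid.NewGuid(), UserName = "jeremy" });

    // Committed as a single, native Postgresql transaction
    session.SaveChanges();
}

// Marten needs nothing but a plain class with an Id property
public class User
{
    public Guid Id { get; set; }
    public string UserName { get; set; }
}
```

No mappings and no schema scripts up front; in development mode Marten builds the underlying table on the fly.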

The Marten community is working very hard on our forthcoming (and long delayed) V4.0 release. We’ve already made some big strides on the document database side of things, and now we’re deep into some significant event store improvements (this link looks best in VS Code with the Mermaid plugin active). At Calavista, we’re considering whether and how we can build a development practice around Marten for existing and potential clients. I’ve obviously got a lot of skin in the game here as the original creator of Marten. Nothing would make me happier than seeing Marten become even more successful and getting to help Calavista clients use Marten in real-life systems as part of my day job.

I’d really like to hear from other folks what it would really take for them to seriously consider adopting Marten. What is Marten lacking now that you would need, or what kind of community or company support options would be necessary for your shop to use Marten in projects? I’m happy to hear any and all feedback or suggestions from as many people as I can get to respond.

I’m happy to take comments here, or the discussion for this topic is also on GitHub.

Existing Strengths

  • Marten is only a library, and at least for the document database features it’s very unobtrusive in your application code compared to many other persistence options
  • The Marten community is active and I hope you’d say that we’re welcoming to newcomers
  • By building on top of Postgresql, Marten comes with good cloud support from all the major cloud providers and plenty of existing monitoring options
  • Marten comes with many of the very real productivity advantages of a NoSQL solution, but has very strong transactional support from Postgresql itself
  • Marten’s event sourcing functionality comes “in the box” and there’s less work to do to fully incorporate event sourcing — including the all important “read side projection” support — into a .Net architecture than many other alternatives
  • Marten is part of the .Net Foundation
  • If you need commercial support for Marten, you can engage with Calavista Software.

Does any of that resonate with you? If you’ve used Marten before, is there anything missing from that list? And feel free to tell me you’re dubious about anything I’m claiming in the list above.

What’s already done or in flight

  • We made a lot of improvements to Marten’s Linq provider support. Not just in terms of expanding the querying scenarios we support, but also in improving the performance of the library across the board. I know this has been a source of trouble for many users in the past, and I’m excited about the improvements we’ve made in V4.
  • The event store functionality will get a lot more documentation — including sample applications — for V4
  • An important part of many event sourcing architectures is a background process that continuously builds “projected” views from the raw events coming in. The current version of Marten has this capability, but it requires the user to do a lot of heavy architectural lifting to use it in any kind of clustered application. In V4, we’ll have an in-the-box recipe for leader election and work distribution across an application cluster in “real server applications.” The asynchronous projection support in V4 will also support multi-tenancy (finally), and we have some ideas to greatly optimize projection rebuilds without system downtime
  • Using native Postgresql sharding for scalability, especially for the event store
  • Allowing users to specify event archival rules to keep the event store tables smaller and more performant
  • Adding more integration with .Net’s generic HostBuilder and standard logging abstractions for easier integration into .Net applications
  • Improving multi-tenancy usage based on user feedback
  • Document and event store metadata capabilities like you’d need for Marten to take part in end-to-end OpenTelemetry tracing within your architecture.
  • More sample applications. To be honest, I’m hoping to find published reference applications built with Entity Framework Core and shift them to Marten. This might be part of an effort to show Jasper as a replacement for MediatR or NServiceBus/MassTransit as well.
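
To make the first point above concrete, querying with Marten’s Linq provider stays idiomatic Linq against the JSONB document storage. The Order type here is a hypothetical document for illustration:

```csharp
using System;
using System.Linq;
using Marten;

// Placeholder connection string
var store = DocumentStore.For("host=localhost;database=marten;username=postgres;password=postgres");

// Query sessions are read-only and cheaper than full sessions
using var session = store.QuerySession();

// Translated by Marten into SQL against the JSONB storage
var bigOpenOrders = session.Query<Order>()
    .Where(x => x.Status == "Open" && x.Total > 100)
    .OrderByDescending(x => x.Placed)
    .Take(10)
    .ToList();

public class Order
{
    public Guid Id { get; set; }
    public string Status { get; set; }
    public decimal Total { get; set; }
    public DateTime Placed { get; set; }
}
```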

And again, does any of that address whatever concerns you might have about adopting Marten? Or that you’d already had in the past?

Other Ideas?

Here are some other ideas that have been kicked around for improving Marten usage, but these ideas would probably need to come through some sort of Marten commercialization or professional support.

  • Cloud hosting recipes. Hopefully through Calavista projects, I’d like to develop some pre-built guidance and quick recipes for standing up scalable and maintainable Marten/Postgresql environments on both Azure and AWS. This would include schema migrations, monitoring, dynamic scaling, and any necessary kind of database provisioning. I think this might get into Terraform/Pulumi infrastructure as well.
  • Cloud hosting models for parallelizing and distributing work with asynchronous event projections. Maybe even getting into dynamic scaling.
  • Multi-tenancy through separate databases for each client tenant. You can pull this off today yourself, but there are a lot of things to manage. Here I’m proposing more cloud hosting recipes for Marten/Postgresql that would include schema migrations and distributed work strategies for processing asynchronous event projections across the tenant databases.
  • Some kind of management user interface? I honestly don’t know what we’d do with that yet, but other folks have asked for something.
  • Event streaming Marten events through Kafka, Pulsar, AWS Kinesis, or Azure Event Hubs
  • Marten Outbox and Inbox approaches with messaging tools. I’ve already got this built and working with Jasper, but we could extend this to MassTransit or NServiceBus as well.

My 2021 OSS Plans (Marten, Jasper, Storyteller, and more)

I don’t know about you, but my 2020 didn’t quite go the way I planned. Among other things, my grand OSS plans really didn’t go the way I hoped. Besides the obvious issues caused by the pandemic, I was extremely busy at work most of the year on projects unrelated to any of my OSS work and just didn’t have the energy or time to do much outside of work.

Coming into the new year though, my workload has leveled out and I’m re-charged from the holidays. Moreover, I’m going to get to use some of my OSS tools for at least one client next year and that’s helping my enthusiasm level. At the end of the day though, I still enjoy the creative aspect of my OSS work and I’m ready to get things moving again.

Here’s what I’m hoping to accomplish in 2021:

Marten V4.0 is already heavily underway with huge improvements ongoing for its Event Sourcing support. We’ve also had some significant success improving the Linq querying support and performance all around by almost doing a full re-write of Marten’s internals. There’s a lot more to do yet, but I’m hopeful that Marten V4.0 will be released in the 1st quarter of 2021.

In the slightly longer term, the Marten core team is talking about ways to possibly monetize Marten through either add-on products or a services model of some sort. I’m also talking with my Calavista colleagues about how we might create service offerings around Marten (scalable cloud hosting for Marten, DevOps guidance, migration projects?).

Regardless, Marten is getting the lion’s share of my attention for the time being and I’m excited about the work we have in flight.

Jasper is a toolkit for common messaging scenarios between .Net applications with a robust in process command runner that can be used either with or without the messaging. 

After having worked on it for over half a decade, I actually did release Jasper V1.0 last year! But it was during the first awful wave of Covid-19 and it just got lost in the shuffle of everything else going on. I also didn’t promote it very much.

I’m going to change that this year and make a big push to blog about it and promote it. I think there’s a lot of possible synergy between Jasper and Marten to build out CQRS architectures on .Net.

Development wise, I’m hoping to:

  • Add end-to-end OpenTelemetry tracing support
  • AsyncAPI standard support (roughly Swagger/OpenAPI for messaging-based architectures, if I’m understanding things correctly)
  • Kafka & Pulsar support has been basically done for 6 months through Jarrod’s hard work.
  • Performance optimizations
  • A circuit breaker error handling option at the transport layer similar to what MassTransit just added here: https://masstransit-project.com/advanced/middleware/killswitch.html. This would have been an extremely useful feature to have had last year for a client, and I’ve wanted it in Jasper ever since
  • Jasper has an unpublished add on for building HTTP services in ASP.Net Core with a very lightweight “Endpoint” model that I’d like to finish, document, and release. It’s more or less the old FubuMVC style of HTTP handlers, but completely built on an ASP.Net Core foundation rather than its own framework.

Storyteller has been completely dormant as a project last year, but I know I’ve got a project coming up at work next year where it could be a great fit. I had started a lot of work a couple years ago for a big V6 overhaul of Storyteller. If and when I’m able, I’d like to dust off those plans and revamp Storyteller this year, but with a twist.

Instead of fighting the overwhelming tide, I think Storyteller will finally embrace the Gherkin specification language. I think this is probably a no-brainer decision to just opt into something that lots of people already understand and common development tools like JetBrains Rider or VS Code already have first class support for Gherkin.

I still think there’s value in having the Storyteller user interface even with the Gherkin support, so I’ll be looking at an all new client that tries to take the things that worked with the huge Storyteller 3.0 re-write a few years ago and put them in a more modern shell. The current client is a hodgepodge of very early React.js and Redux, and I’d honestly want to tackle a re-write mostly to update my own UI/UX skillset. I’m still leaning toward using the very latest React.js, but I’ve at least looked at Blazor and am sort of following MAUI.

I’ve mostly been just keeping up with bugs, pull requests, and new .Net versions for Lamar. At some point, Lamar needs support for IAsyncDisposable. I also get plenty of questions about how to override Lamar service registrations during integration testing scenarios, which is tricky just because of the weird gyrations that go on with HostBuilder bootstrapping and external IoC containers. There is some existing functionality in Lamar that could be useful for this, but I need to document it.

I might think about cutting the existing LamarCodeGeneration and LamarCompiler projects to their own first class library status because they’re developing a life of their own independent from Lamar. LamarCodeGeneration might be helpful for authoring source generators in .Net.

The farm road you see at the edge of this picture is almost perfectly flat with almost no traffic, and you can see for miles. I may or may not have used that to see how fast I could get my first car to go as a teenager. Let’s not tell my parents about that. :)

There was just a wee bit of work to move Oakton and Oakton.AspNetCore to .Net 5.0. In the new year I think I’d like to merge those two projects into one single library, and look at using Spectre.Console to make the output of the built-in environment test commands look a helluva lot spiffier and easier to read.

Alba

Alba just got .Net 5.0 support. I’ll get a chance to use Alba on a client project this year to do HTTP API testing, and we’ll see if that leads to any new work.

StructureMap

I’ll occasionally answer StructureMap questions as they come in, but that’s it. I’ll be helping one of Calavista’s clients migrate from StructureMap to Lamar in 2021, so I’ll be using it for work at least.

FubuMVC

It’s still dead as a door knob. There are plenty of bits of FubuMVC floating around in Oakton, Alba, Jasper, and Baseline though.

Planned Event Store Improvements for Marten V4, Daft Punk Edition

There’s a new podcast about Marten on the .Net Core Show that posted last week.

Marten V4 development has been heavily underway this year. To date, the work has mostly focused on the document store functionality (Linq, general performance improvements, and document metadata).

While I certainly hope the other improvements to Marten V4 will make a positive difference to our users, the big leap forward in capability is going to be on the event sourcing side of Marten. We’ve gathered a lot of user feedback on this feature set in the past couple years, but there’s always room for more discussion as things are taking shape.

First though, to set the mood:

The master issue for V4 event sourcing improvements is on GitHub here.

Scalability

We know there’s plenty of concern about how well Marten’s event store will scale over time. Beyond the performance improvements I’ll try to outline in the following sections, we’re planning to introduce support for:

  • Native Postgresql sharding of the event store tables for scalability
  • User-specified event archival rules to keep the active event store tables smaller and more performant

Event Metadata

Similar to the document storage, the event storage in V4 will allow users to capture additional metadata with each event. There will be support in the event store Linq provider to query against this metadata, and this metadata will be available to the projections. Right now, the plan is to have opt-in, additional fields for:

  • Correlation Id
  • Causation Id
  • User name

Additionally, the plan is to also have a “headers” field for user defined data that does not fall into the fields listed above. Marten will capture the metadata at the session level, with the thinking being that you could opt into custom Marten session creation that would automatically apply metadata for the current HTTP request or service bus message or logical unit of work.
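
As a rough sketch of how that session-level metadata capture might look — the V4 API isn’t locked down yet, so treat the member names below as assumptions, and the InvoicePaid event is purely hypothetical:

```csharp
using System;
using Marten;

// Placeholder connection string
var store = DocumentStore.For("host=localhost;database=marten;username=postgres;password=postgres");

// Imagine this session being created per HTTP request or per
// service bus message by your own custom session factory
using var session = store.LightweightSession();

// Opt-in metadata set once at the session level...
session.CorrelationId = "http-request-12345";
session.CausationId = "originating-message-678";
session.SetHeader("tenant", "acme");

// ...is captured alongside everything this unit of work persists
session.Events.Append(Guid.NewGuid(), new InvoicePaid(250m));
session.SaveChanges();

public record InvoicePaid(decimal Amount);
```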

There’ll be a follow up post on this soon.

Event Capture Improvements

When events are appended to event streams, we’re planning some small improvements for V4.

Projections, Projections, Projections!

This work is heavily in flight, so please shoot any feedback you might have our (Marten team’s) way.

Building your own event store is actually pretty easy — until the time you want to actually do something with the events you’ve captured or keep a “read-side” view of the status up to date with the incoming events. Based on a couple years of user feedback, all of that is exactly where Marten needs to grow up the most.

The master issue tracking the projection improvements is here. The Marten community (mostly me to be honest) has gone back and forth quite a bit on the shape of the new projection work and nothing I say here is set in stone. The main goals are to:

  • Significantly improve performance and throughput. We’re doing this partially by reducing in memory object allocations, but mostly by introducing much, much more parallelization of the projection work in the async daemon.
  • Simplify the usage of immutable data structures as the projected documents (note that we have plenty of F# users, and now C# record types make that a lot easier too).
  • Introduce snapshotting
  • Supplement the existing ViewProjection mechanism with conventional methods similar to the .Net Startup class
  • Completely gut the existing ViewProjection to improve its performance while hopefully avoiding breaking API compatibility
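
To show what the record-friendly, conventional model might look like, here’s a sketch of an immutable aggregate. The events and the Create/Apply method conventions below are my illustration of the direction, not a final V4 API:

```csharp
using System;

// Hypothetical events for an invoice stream
public record InvoiceOpened(Guid InvoiceId, string Customer);
public record LineItemAdded(decimal Amount);

// An immutable aggregate as a C# record; each Apply returns
// a new Invoice rather than mutating state in place
public record Invoice(Guid Id, string Customer, decimal Total)
{
    // Conventionally discovered method for the stream's first event
    public static Invoice Create(InvoiceOpened e) =>
        new(e.InvoiceId, e.Customer, 0m);

    // The "left fold" step applied for each subsequent event
    public Invoice Apply(LineItemAdded e) =>
        this with { Total = Total + e.Amount };
}
```

The same shape works naturally for F# records, which is part of why the immutability-friendly signatures matter.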

There is some thought about breaking the projection support into its own project or making the event sourcing support be storage-agnostic, but I’m not sure about that making it to V4. My personal focus is on performance and scalability, and way too many of the possible optimizations seem to require coupling to details of Marten’s existing storage.

“Async Daemon”

The Async Daemon is an under-documented Marten subsystem we use to process asynchronously built event projections and do projection rebuilds. While it’s “functional” today, it has a lot of shortcomings (it can only run on one node at a time, and we don’t have any kind of leader election or failover) that prevent most folks from adopting it.

The master issue for the Async Daemon V4 is here, but the tl;dr is:

  • Make sure there’s adequate documentation (duh.)
  • Should be easy to integrate in your application
  • Has to be able to run in an application cluster in such a way that it guarantees that every projected view (or slice of a projected view) is being updated on exactly one node at a time
  • Improved performance and throughput of normal projection building
  • No downtime projection rebuilds
  • Way, way faster projection rebuilds

Now, to the changes coming in V4. Let’s assume that you’re doing “serious” work and needing to host your Marten-using .Net Core application across multiple nodes via some sort of cloud hosting. With minimal configuration, you’d like to have the asynchronous projection building “just work” across your cluster.

Here’s a visual representation of my personal “vision” for the async daemon in V4:

In V4 the async daemon will become a .Net Core BackgroundService that will be registered by the AddMarten() integration with HostBuilder. That mechanism will allow us to run background work inside of your .Net Core application.
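
In registration terms, I’d expect the opt-in to look something like the sketch below. The AddAsyncDaemon() method and DaemonMode enum are placeholders until the V4 API settles:

```csharp
using Marten;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var host = Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        services.AddMarten(opts =>
            {
                // Placeholder connection string
                opts.Connection("host=localhost;database=marten;username=postgres;password=postgres");
            })
            // Hypothetical opt-in: "HotCold" meaning the nodes elect a
            // leader for projection work and another node takes over
            // if the current leader goes down
            .AddAsyncDaemon(DaemonMode.HotCold);
    })
    .Build();

await host.RunAsync();
```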

Inside that background process the async daemon is going to elect a single “leader/distributor” agent that runs on only one node. That leader/distributor agent will be responsible for assigning work to the async daemons running on all the active nodes in the application. What we’re hoping to do is to distribute and parallelize the projection building across the running nodes. And oh yeah, do this without needing any other kind of infrastructure besides the Postgresql database.

Within a single node, we’re adding a lot more parallelization to the projection building instead of treating everything as a dumb “left fold” single threaded queue problem. I’m optimistic that that’s going to make a huge difference for throughput. On top of that, I’m hoping that the new async daemon will be able to split work between different nodes without the nodes stepping on each other.

There are still plenty of details to work out, and this post is just meant to be a window into some of the work happening within Marten for our big V4 release sometime in 2021.

Marten V4 Preview: Command Line Administration

TL;DR — It’s going to be much simpler in V4 to incorporate Marten’s command line administration tools into your .Net Core application.

In my last post I started to lay out some of the improvements in the forthcoming Marten V4 release with our first alpha Nuget release. In this post, I’m going to show the improvements to Marten’s command line package that can be used for some important database administration and schema migrations.

Unlike ORM tools like Entity Framework (it’s a huge pet peeve of mine when people describe Marten as an ORM), Marten by and large tries to allow you to be as productive as possible by keeping your focus on your application code instead of having to spend much energy and time on the details of your database schema. At development time you can just have Marten use its AutoCreate.All mode and it’ll quietly do anything it needs to do with your Postgresql database to make the document storage work at runtime.
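
For example, a development-time configuration using that mode might look like this (AutoCreate.All is the real Marten setting discussed above; the connection string is a placeholder):

```csharp
using Marten;

var store = DocumentStore.For(opts =>
{
    // Placeholder connection string
    opts.Connection("host=localhost;database=marten;username=postgres;password=postgres");

    // Development mode: Marten quietly creates or patches any
    // tables, functions, and indexes it needs at runtime
    opts.AutoCreateSchemaObjects = AutoCreate.All;
});
```

In production you’d likely dial this down (e.g. AutoCreate.None) and apply schema changes through explicit migrations instead.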

For real production though, it’s likely that you’ll want to explicitly control when database schema changes happen. It’s also likely that you won’t want your application to have permissions to change the underlying database schema on the fly. To that end, Marten has quite a bit of functionality to export database schema updates for formal database migrations.

We’ve long supported an add-on package called Marten.CommandLine that lets you build your own command line tool to help manage these schema updates, but to date it’s required you to build a separate console application parallel to your main application and has probably not been that useful to most folks.

In V4 though, we’re exploiting the Oakton.AspNetCore library that allows you to embed command line utilities directly into your .Net Core application. Let’s make that concrete with a small sample application in Marten’s GitHub repository.

Before I dive into that code, Marten v3.12 added a built in integration for Marten into the .Net Core generic HostBuilder that we’re going to depend on here. Using the HostBuilder for configuring and bootstrapping Marten into your application allows you to use the exact same Marten configuration and application configuration in the Marten command utilities without any additional work.

This sample application was built with the standard dotnet new webapi template. On top of that, I added a reference to the Marten.CommandLine library.

.Net Core applications tend to be configured and bootstrapped by a combination of a Program.Main() method and a Startup class. First, here’s the Program.Main() method from the sample application:

public class Program
{
    // It's actually important to return Task<int>
    // so that the application commands can communicate
    // success or failure
    public static Task<int> Main(string[] args)
    {
        return CreateHostBuilder(args)

            // This line replaces Build().Start()
            // in most dotnet new templates
            .RunOaktonCommands(args);
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}

Note the signature of the Main() method and how it uses the RunOaktonCommands() method to intercept the command line arguments and execute named commands (with the default being to just run the application like normal).

Now, the Startup.ConfigureServices() method with Marten added in is this:

public void ConfigureServices(IServiceCollection services)
{
    // This is the absolute, simplest way to integrate Marten into your
    // .Net Core application with Marten's default configuration
    services.AddMarten(Configuration.GetConnectionString("Marten"));
}

Now, to the actual command line. As long as the Marten.CommandLine assembly is referenced by your application, you should see the additional Marten commands. From your project’s root directory, run dotnet run -- help and we see there’s some additional Marten-related options:

Oakton command line options with Marten.CommandLine in play

And that’s it. Now you can use dotnet run -- dump to export all the SQL to recreate the Marten database schema, or maybe dotnet run -- patch upgrade_staging.sql -e Staging to create a SQL patch file that would make any necessary changes to upgrade your staging database to reflect the current Marten configuration (assuming that you’ve got an appsettings.Staging.json file with the right connection string pointing to your staging Postgresql server).

Check out the Marten.CommandLine documentation for more information on what it can do, but expect some V4 improvements to that as well.

In tepid defense of…

Hey all, I’ve been swamped at work and haven’t had any bandwidth or energy for blogging, but I’ve actually been working up ideas for a new blog series. I’m going to call it “In tepid defense of [XYZ]”, where XYZ is some kind of software development tool or technique that’s:

  • Gotten a bad name from folks overusing it, or using it in some kind of dogmatic way that isn’t useful
  • Is disparaged by a certain type of elitist, hipster developer
  • Might still have some significant value if used judiciously

My list of topics so far is:

  • IoC Containers — I’m going to focus on where, when, and how they’re still useful — but with a huge dose of what I think are the keys to using them successfully in real projects. Which is more or less gonna amount to using them very simply and not making them do too much weird runtime switcheroo.
  • S.O.L.I.D. — Talking about the principles as a heuristic to think through designing code internals, but most definitely not throwing this out there as any kind of hard and fast programming laws. This will be completely divorced from any discussion about you know who.
  • UML — I’m honestly using UML more now than I had been for years and it’s worth reevaluating UML diagramming after years of the backlash to silly things like “Executable UML”
  • Don’t Repeat Yourself (DRY) — I think folks bash this instead of thinking more carefully about when and how to eliminate duplication in their code without going into some kind of really harmful architecture astronaut mode

I probably don’t have the energy or guts to tackle OOP in general or design patterns in specific, but we’ll see.

Anything interesting to anybody?

Just Finished a Not Really Awesome Project, Here’s What I Learned

To be as clear as possible, I’m speaking only for myself and any views or opinions expressed here do not represent my employer whatsoever.

Years ago I wrote a post called The Surprisingly Valuable and Lasting Lessons I Learned from a Horrible Project that’s exactly what it sounds like. I was on a genuinely awful project rife with Dilbertesque elements. As usual though, horrible experiences can lead to some very valuable lessons. From that terrible project, I learned plenty about team communication, Agile processes, and doing software design within an Agile process (XP in this case).

If you’ve followed my twitter feed the past couple years, you know that I have routinely expressed some frustrations working within a long running waterfall project. I finally rolled off that project this past Friday after 2+ years, and I have some thoughts. I definitely appreciate some of the personal relationships that came out of the project, but I’m not leaving feeling very satisfied by how the project went overall. That makes it time to reflect and figure out what actionable lessons I can take from the project for future endeavors.

Because it’s an easy writing crutch, let’s go to the good, the bad, and the ugly of the project and what I (re)learned this time around:

The Good

Let’s start with some positives. I’ve been a big believer in capturing business rule requirements as “executable specifications” (acceptance test driven development or some folks’ version of behavior driven development) for years. While we weren’t allowed by the client to use my preferred tooling (grumble) to do that and had to write some custom tooling, we had some genuine success with executable specifications as requirements. Think “if we have these exact inputs and run the business rules, these are the exact validation errors and/or transactions that should be triggered next.” Our client had some very involved business rules that were driven by even more complex databases, and having the client review and adjust our acceptance tests written as examples instead of just sticking with vaguely worded Word documents made a hugely positive difference for us.

Automating all deployments through Azure DevOps was a big win, especially when we finally got folks outside of our organization to stop deploying directly from Visual Studio.Net. It’s vitally important to have good traceability from the binaries deployed in testing or production to the revision of the code in source control. I learned that way back when in my previous “worst project ever”, and we re-learned that in this project. The one aspect of this project that was an undeniable success was introducing continuous integration and some continuous delivery to our customer.

The Bad

Developers that do not communicate or interact well with other developers simply cannot be placed in important positions in integration projects regardless of their technical ability or domain knowledge.

We started adding some environment tests (with an ancient, but still working feature in StructureMap!) into our deployment pipelines about halfway through, but I wish we’d gone much farther. The overall technical ecosystem was not perfectly reliable and there were dependencies that still had manual deployments, so it became very important to have self-diagnosing deployments that could tell you quickly when things like database connectivity, configuration to external dependencies, or expected network shares were unreachable.

I was on a call this morning for a new project just getting off the ground and wanted to give a giant high five through Zoom to our DevOps architect when he showed how he was planning to build environment tests directly into this new project’s CD pipeline.

Conway’s Law is not evil per se, but it does exist and you absolutely need to be aware of it when you lay out both your desired architecture and organizational structure. We were absolutely screwed over by Conway’s Law on this project, and the consequence to the customer is a system that has performance problems and is harder to troubleshoot and support than it needed to be because the service boundaries fell in problematic ways based on the organizational model rather than on what would have made sense from a technical or even problem domain perspective.

I wish we’d engaged much earlier with the client’s operations team. It was waterfall after all, so they only expected to get support and troubleshooting documentation near the end of the project. That said, while I still think our basic architecture was appropriate from a technical perspective, it didn’t at all fit into the operation team’s comfort zone and the customer’s existing infrastructure for application support. Either we could have worked with them to make them comfortable with alternative tooling for operations monitoring much earlier in the project, or we could have bitten the bullet and made the systems act much more like the batch driven mainframe tools they were used to.

The error handling we designed into our asynchronous support was heavily based off of Jasper’s existing error handling, which in turn grew out of my experiences in my previous company where we dealt with large volumes and frequent transient errors. In this ecosystem, our problems were really more systematic when a downstream system would be either completely down or mis-configured so that every interaction with it failed. In this case, we really needed a circuit breaker strategy for the error handling inside the message handling code. The main lesson here is to be careful you aren’t trying to fight the last war.

I wish there had been an overall architect over all the elements of this large initiative (preferably me). I only had a window into the specific elements my teams were building out early on, and didn’t get to see the bigger picture until much later — assuming that I ever did. Everyone else was in the same boat, and I felt like we were all the proverbial blind men trying to describe an elephant by feel.

The Ugly

It’s hard to describe, but my single biggest regret on this project was in not pushing much harder to create an effective automated testing strategy against our integration with an extremely problematic 3rd party system. Having to depend so much on very slow, laborious manual testing against this 3rd party system at the center of everything was the bottleneck of all our efforts in my opinion. Most of our technical risk and production issues have been related to this 3rd party system. A fully automated test suite might have allowed us to iterate much faster and find & remove the problems we found in the integration.

When you have the choice, don’t write custom infrastructure when there are viable, commonly used, off the shelf components. The senior management at the beginning of this project were very apprehensive of using any kind of new infrastructure, especially open source tools.

Since this was primarily an integration project, asynchronous messaging should have been a very good fit. We wrote a tiny shared library for managing asynchronous communication between applications using Rabbit MQ as the underlying transport, but with some hooks for an easy move later to Azure Service Bus. That tiny library had to keep evolving into something much larger as the use cases became more complex, to the point where I felt like it was dominating our workload.

I won’t even say “in retrospect” because we knew this full well from day one, but the project would have gone better if we’d been able to use an off the shelf toolset like MassTransit, NServiceBus, or my own Jasper framework for the messaging support. I wish that I’d made a much bigger push at the time to build out a much more robust messaging foundation, but I felt lucky at the time just to get Rabbit MQ approved.

At the end we actually had a consensus agreement to rip out our custom messaging library and replace that with MassTransit, but the clock ran out on us. If and when the customer is able to do that themselves, I think they’ll have much more robust error handling and instrumentation that should result in more successful daily operations.

There were more egregious “NIH” violations than what I described above, but I’m only going to deal with issues where I had some level of control or influence.

The waterfall software process on this project was just as problematic as it ever was. We had to spend a lot of energy upfront on intermediate deliverables that didn’t add much value, but the killer as usual with waterfall was how damn slow the feedback cycles were and not doing any substantial integration testing until very late in the project. I’m aware that many of you reading this will have very negative opinions and experiences with Agile (I blame Scrum though), but Agile done reasonably well means having rapid and early feedback cycles to find and fix problems quickly.

Shared databases are a common scourge of enterprise architectures. Dating all the way back to my 2005 (!) post Overthrowing the Tyranny of the Shared Database, sharing databases between applications has been a massive pet peeve of mine. Hell, tilting at the shared database windmill at my previous company contributed a little bit to me leaving. At the very least, make sure the %^$&$^&%ing shared database structure is completely described in source control somewhere and fully scripted out so any developer can quickly spin up an up to date copy of that database for testing as needed. If you depend on manual database changes independent of the application development around the shared database, you need to expect a great deal of friction and production problems related to your shared database.

One more time with feeling for my longtime readers:

Sharing a database between applications is like drug users sharing needles

Things to research for later

The big takeaways from me on this project are to add some additional error handling and distributed tracing approaches to my integration project tool belt. As soon as I get a chance, I’m doing a deeper dive into the OpenTelemetry specification with a thought toward adding direct support in Jasper and maybe Marten as a learning experience. I’m also going to add some circuit breaker support directly into Jasper.

For any of you who are huge fans of Stephen King’s Dark Tower novels, you know that King modeled Roland on Clint Eastwood’s character from the spaghetti westerns, but living inside of a Lord of the Rings style epic tale. I think Idris Elba would have been awesome as Roland in the Dark Tower movie if they hadn’t changed the story and the character so much from the books. Grrr.

Calling Generic Methods from Non-Generic Code in .Net

Somewhat often (or at least it feels that way this week) I’ll run into the need to call a method with a generic type argument from code that isn’t generic. To make that concrete, here’s an example from Marten. The main IDocumentSession service has a method called Store() that directs Marten to persist one or more documents of the same type. That method has this signature:

void Store<T>(params T[] entities);

That method would typically be used like this:

using (var session = store.OpenSession())
{
    // The generic constraint for "Team" is inferred from the usage
    session.Store(new Team { Name = "Warriors" });
    session.Store(new Team { Name = "Spurs" });
    session.Store(new Team { Name = "Thunder" });

    session.SaveChanges();
}

Great, and easy enough (I hope), but Marten also has this method where folks can add a heterogeneous mix of any kind of document types all at once:

void StoreObjects(IEnumerable<object> documents);

Internally, that method groups the documents by type, then delegates to the proper Store<T>() method for each document type — and that’s where this post comes into play.

(Re-)Introducing Baseline

Baseline is a library available on Nuget that provides oodles of little helper extension methods on common .Net types and very basic utilities that I use in almost all my projects, both OSS and at work. Baseline is an improved subset of what was long ago FubuCore (FubuCore was huge, and it also spawned Oakton), but somewhat adapted to .Net Core.

I wanted to call this library “spackle” because it fills in usability gaps in the .Net base class library, but Jason Bock beat me to it with his Spackle library of extension methods. Since I expected this library to be used as a foundational piece from within basically all the projects in the JasperFx suite, I chose the name “Baseline” which I thought conveniently enough described its purpose and also because there’s an important throughway near the titular Jasper called “Baseline”. I don’t know for sure that it’s the basis for the name, but the Battle of Carthage in the very early days of the US Civil War started where this road is today.

Crossing the Non-Generic to Generic Divide with Baseline

Back to the Marten StoreObjects(object[]) calling Store<T>(T[]) problem. Baseline has a helper extension method called CloseAndBuildAs<T>() that I frequently use to solve this problem. It’s unfortunately a little tedious, but the first step is to design a non-generic interface that will wrap the calls to Store<T>(), like this:

internal interface IHandler
{
    void Store(IDocumentSession session, IEnumerable<object> objects);
}

And a concrete, open generic type that implements IHandler:

internal class Handler<T>: IHandler
{
    public void Store(IDocumentSession session, IEnumerable<object> objects)
    {
        // Delegate to the Store<T>() method
        session.Store(objects.OfType<T>().ToArray());
    }
}

Now, the StoreObjects() method looks like this:

public void StoreObjects(IEnumerable<object> documents)
{
    assertNotDisposed();

    var documentsGroupedByType = documents
        .Where(x => x != null)
        .GroupBy(x => x.GetType());

    foreach (var group in documentsGroupedByType)
    {
        // Build the right handler for the group type
        var handler = typeof(Handler<>).CloseAndBuildAs<IHandler>(group.Key);
        handler.Store(this, group);
    }
}

The CloseAndBuildAs<T>() method above does a couple things behind the scenes:

  1. It creates a closed type for the proper Handler<T> based on the type arguments passed into the method
  2. Uses Activator.CreateInstance() to build the concrete type
  3. Casts that object to the interface supplied as a generic argument to the CloseAndBuildAs<T>() method

The method shown above is here in GitHub. It’s not shown, but there are some extra overloads to also pass in constructor arguments to the concrete types being built.
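For reference, the core mechanics of that helper are easy to sketch yourself. This is a simplified approximation for illustration, not Baseline’s exact implementation:

```csharp
using System;

public static class OpenTypeExtensions
{
    // Simplified approximation of Baseline's CloseAndBuildAs<T>():
    // close the open generic type with the supplied type arguments,
    // build the concrete object with Activator, and cast it to the
    // expected non-generic interface
    public static T CloseAndBuildAs<T>(this Type openType, params Type[] parameterTypes)
    {
        var closedType = openType.MakeGenericType(parameterTypes);
        return (T) Activator.CreateInstance(closedType);
    }
}
```

With that in place, `typeof(Handler<>).CloseAndBuildAs<IHandler>(group.Key)` builds a Handler<T> closed over the document type of each group.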

Marten Quickstart with .Net Core HostBuilder

The Marten Community just released Marten 3.12 with a mix of new functionality and plenty of bug fixes this past week. In particular, we added some new extension methods directly into Marten for integration into .Net Core applications that are bootstrapped by the new generic host builder in .Net Core 3.*.

There’s a new runnable, sample project in GitHub called AspNetCoreWebAPIWithMarten that contains all the code from this blog post.

For a small sample ASP.Net Core web service project using Marten’s new integration, let’s start a new web service project with the dotnet new webapi template. Doing this gives you some familiar files that we’re going to edit in a bit:

  1. appsettings.json — standard configuration file for .Net Core
  2. Program.cs — has the main command line entry point for .Net Core applications. We aren’t actually going to touch this right now, but there will be some command line improvements to Marten v4.0 soon that will add some important development lifecycle utilities that will require a 2-3 line change to this file. Soon.
  3. Startup.cs — the convention based Startup class that holds most of the configuration and bootstrapping for a .Net Core application.

Marten does sit on top of Postgresql, so let’s add a docker-compose.yml file to the codebase for our local development database server like this one:

version: '3'
services:
  postgresql:
    image: "clkao/postgres-plv8:latest"
    ports:
      - "5433:5432"

At the command line, run docker-compose up -d to start up your new Postgresql database in Docker.

Next, we’ll add a reference to the latest Marten Nuget to the main project. In the appsettings.json file, I’ll add the connection string to the Postgresql container we defined above:

"ConnectionStrings": {
  "Marten": "Host=localhost;Port=5433;Database=postgres;Username=postgres;Password=postgres"
}

Finally, let’s go to the Startup.ConfigureServices() method and add this code to register Marten services:

public class Startup
{
    // We need the application configuration to get
    // our connection string to Marten
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // This line registers Marten services with all default
        // Marten settings
        var connectionString = Configuration.GetConnectionString("Marten");
        services.AddMarten(connectionString);

        // And some other stuff we don't care about in this post...
    }
}

And that’s it, we’ve got Marten configured in its most basic “getting started” usage with these services in our application’s IoC container:

  1. IDocumentStore with a Singleton lifetime. The document store can be used to create sessions, query the configuration of Marten, generate schema migrations, and do bulk inserts.
  2. IDocumentSession with a Scoped lifetime for all read and write operations. By default, this is created with the IDocumentStore.OpenSession() method, and the session will have the identity map behavior.
  3. IQuerySession with a Scoped lifetime for all read operations against the document store.
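For the curious, those registrations are roughly equivalent to this hand-rolled wiring (a simplified sketch for illustration; the real AddMarten() method does more than this):

```csharp
// Roughly what AddMarten(connectionString) registers (simplified sketch)
services.AddSingleton<IDocumentStore>(s => DocumentStore.For(connectionString));
services.AddScoped<IDocumentSession>(s => s.GetRequiredService<IDocumentStore>().OpenSession());
services.AddScoped<IQuerySession>(s => s.GetRequiredService<IDocumentStore>().QuerySession());
```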

Now, let’s build out a very rudimentary issue tracking web service to capture and persist new issues as well as allow us to query for existing, open issues:

public class IssueController : ControllerBase
{
    // This endpoint captures, creates, and persists
    // new Issue entities with Marten
    [HttpPost("/issue")]
    public async Task<IssueCreated> PostIssue(
        [FromBody] CreateIssue command,
        [FromServices] IDocumentSession session)
    {
        var issue = new Issue
        {
            Title = command.Title,
            Description = command.Description
        };

        // Register the new Issue entity
        // with Marten
        session.Store(issue);

        // Commit all pending changes
        // to the underlying database
        await session.SaveChangesAsync();

        return new IssueCreated
        {
            IssueId = issue.Id
        };
    }

    [HttpGet("/issues/{status}/")]
    public Task<IReadOnlyList<IssueTitle>> Issues(
        [FromServices] IQuerySession session,
        IssueStatus status)
    {
        // Query Marten's underlying database with Linq
        // for all the issues with the given status
        // and return an array of issue titles and ids
        return session.Query<Issue>()
            .Where(x => x.Status == status)
            .Select(x => new IssueTitle {Title = x.Title, Id = x.Id})
            .ToListAsync();
    }

    // Return only new issues
    [HttpGet("/issues/new")]
    public Task<IReadOnlyList<IssueTitle>> NewIssues(
        [FromServices] IQuerySession session)
    {
        return Issues(session, IssueStatus.New);
    }
}

And that is actually a completely working little application. In its default settings, Marten will create any necessary database tables on the fly, so we didn’t have to worry about much database setup other than having the new Postgresql database started in a Docker container. Likewise, we didn’t have to subclass any kind of Marten service like you would with Entity Framework Core, and we most certainly didn’t have to create any kind of Object/Relational Mapping just to get going. Additionally, we didn’t have to care too awfully much about how to get the right Marten services integrated into our application’s IoC container with the right scoping, because that’s all handled for us by the AddMarten() extension method that comes out of the box with Marten 3.12 and greater.
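As an aside, if you’d rather control that "build tables on the fly" behavior yourself, Marten exposes it through the AddMarten() overload that takes a StoreOptions lambda (the option and enum names here are from the Marten 3.x API):

```csharp
services.AddMarten(opts =>
{
    opts.Connection(Configuration.GetConnectionString("Marten"));

    // AutoCreate.All is the permissive "getting started" default that
    // builds any missing tables and functions on the fly. In production
    // you'd typically dial this down and rely on schema migrations.
    opts.AutoCreateSchemaObjects = AutoCreate.All;
});
```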

In a follow up post later this week, I’ll show you how to customize the Marten service registrations for per-HTTP request multi-tenancy and traceable, contextual logging of Marten-related SQL execution.

The Fundamentals of Continuous Software Design

Continuing my recent theme of remembering why we originally thought Agile was a good thing before it devolved into whatever it is now.

I had the opportunity over the weekend to speak online as part of CouchCon Live. My topic was to revisit some of the principles of designing software inside of an adaptive Agile Software process in a talk entitled “The Fundamentals of Continuous Software Design.”

The video has posted on YouTube, and the slides are available on SlideShare.

I went back through the Agile greatest hits with:

  • YAGNI
  • Do the Simplest Thing that Could Possibly Work
  • The Last Responsible Moment
  • Reversibility in Software Architecture
  • Designing for Testability
  • How the full development team should be involved throughout
  • And why I think contemporary Scrum is the Scrappy Doo of Agile Software Development

 

Remembering Why Agile was a Big Deal

Earlier this year I recorded a podcast for Software Engineering Radio with Jeff Doolittle on Agile versus Waterfall Software Development where we discussed the vital differences between Agile development and traditional waterfall, and why those differences still matter. This post is the long-awaited companion piece I couldn’t manage to finish before the podcast posted. I might write a follow up post on some software engineering practices like continuous design later so that I can strictly focus here on softer process related ideas.

I started my software development career writing Shadow IT applications and automation for my engineering group. No process, no real practices either, and lots of code. As you’d probably guess, I later chafed very badly at the formal waterfall processes and organizational model of my first “real” software development job for plenty of reasons I’ll detail later in this post.

During that first real IT job, I started to read and learn about alternative iterative development processes like the Rational Unified Process (RUP), but I was mostly inspired by the brand new, shiny Extreme Programming (XP) method. After tilting at windmills a bit to try to move my waterfall shop to RUP (Agile would have been a bridge too far), I jumped to a consulting company that was an influential, early adopter of XP. I’ve worked almost exclusively with Agile processes and inside more or less Agile organizational models ever since — until recently. In the past couple years I’ve been involved with a long running project in a classic waterfall shop, which has absolutely reinforced my belief in the philosophical approaches to software development that came out of Agile Software (or Lean Programming). I also think some of those ideas are somewhat lost in contemporary Scrum’s monomaniacal focus on project management, so here’s a long blog post about what I think really was vital in the shift from waterfall to Agile development.

First, what I believe in

A consistent theme in all of these topics is trying to minimize the amount of context switching throughout a project and anybody’s average day. I think that Agile processes made a huge improvement in that regard over the older waterfall models, and that by itself is enough to justify staking waterfall through the heart permanently in my book.

Self-contained, multi-disciplinary teams

My strong preference and a constant recommendation to our clients is to favor self-contained, multi-disciplinary teams centered around specific projects or products. What I mean here is that the project is done by a team who is completely dedicated to working on just that project. That team should ideally have every possible skillset it needs to execute the project so that there’s reduced need to coordinate with external teams, so it’s whatever mix you need of developers, testers, analysts all working together on a common goal.

In the later sections on what I think is wrong with the waterfall model, I bring up the fallacy of local optimization. In terms of a software project, we need to focus on optimizing the entire process of delivering working software, not just on what it takes to code a feature, or write tests, or quickly checking off a process quality gate. A self-contained team is hopefully all focused on delivering just one project or sprint, so there should be more opportunity to optimize the delivery. This generally means things like developers purposely thinking about how to support the testers on their team or using practices like Executable Specifications for requirements that shortens the development and testing time overall, even if it’s more work for the original analysts upfront.

A lot of the overhead of software projects is communication between project team members. To be effective, you need to improve the quality of that communication so that team members have a shared understanding of the project. I also think you’re a lot less brittle as a project if you have fewer people to communicate with. In a classic waterfall shop, you may need to involve a lot of folks from external teams who are simultaneously working on several other initiatives and have a different schedule than your project’s. That tends to force communication into either occasional meetings (which hurt productivity on their own) or asynchronous channels like email or intermediate documentation like design specifications.

Let’s step back and think about the various types of communication you use with your colleagues, and what actually works. Take a look at this diagram from Scott Ambler’s Communication on Agile Software Teams essay (originally adapted from some influential writings by Alistair Cockburn that I couldn’t quite find this morning):

 

(Figure: modes of communication ranked by effectiveness, from Scott Ambler’s Communication on Agile Software Teams essay)

In a self-contained, multi-disciplinary team, you’re much more likely to be using the more effective forms of communication in the upper, right hand of the graph. In a waterfall model where different disciplines (testers, developers, analysts) may be working in different teams and on different projects at any one time, the communication is mostly happening at the less effective, lower left of this diagram.

I think that a self-contained team suffers much less from context switching, but I’ll cover that in the section on delivering serially.

Another huge advantage to self-contained teams is the flexibility in scheduling and the ability to adapt to changing project circumstances and whatever you’re learning as you go. This is the idea of Reversibility from Extreme Programming:

“If you can easily change your decisions, this means it’s less important to get them right – which makes your life much simpler.” — Martin Fowler

In a self-contained team, your reversibility is naturally higher, and you’re more likely able to adapt and incorporate learning throughout the project. If you’re dependent upon external teams or can’t easily change approved specification documents, you have much lower reversibility.

If you’re interested in Reversibility, I gave a technically focused talk on that topic at Agile Vancouver several years ago.

I think everything in this section still applies to teams that are focused on a product or family of products.

Looking over my history, I’ve written about this topic quite a bit over the years:

  1. On Software Teams
  2. Call me a Utopian, but I want my teams flat and my team members broad
  3. Self Organizing Teams are Superior to Command n’ Control Teams
  4. Once Upon a Team
  5. The Anti Team
  6. On Process and Practices
  7. Want productivity? Try some team continuity (and a side of empowerment too) – I miss this team.
  8. The Will to be Good
  9. Learning Lessons — Can You Make Mistakes at Work?
  10. Indelible Proof of a Healthy Team

Deliver serially over working in parallel

A huge shift in thinking behind Agile Software Development is simply the idea that the only thing that matters is delivering working software. Not design specifications, not intermediate documents, not process checkpoints passed, but actual working software deployed to production.

In practice, this means that Agile teams seek to deliver completely working features or “vertical slices” of functionality at one time. In this model a team strives for the continuous delivery model of constantly making little releases of working software.

Contrast this idea to the old “software as construction” metaphor from my waterfall youth where we generally developed by:

  1. Get the business requirements
  2. Do a high level architecture document
  3. Maybe do a lower level design specification (or do a prototype first and pretend you wrote the spec first)
  4. Design the database model for the new system
  5. Code the data layer
  6. Code the business layer
  7. Code any user interface
  8. Rework 4-6 because you inevitably missed something or realized something new as you went
  9. Declare the system “code complete”
  10. Start formal testing of the entire system
  11. Fix lots of bugs on 4-6
  12. User acceptance testing (hopefully)
  13. Release to production

The obvious problem in this cycle is that you deliver no actual value until the very, very end of the project. You also struggle quite a bit in the later parts of the project because you need to revisit work that was done much earlier, with all the context switching that entails.

In contrast, delivering in vertical slices allows you to:

  • Minimize context switching because you’re finishing work much closer to when it’s started. With a multi-disciplinary team, everybody is focused on a small subset of features at any one time, which tends to improve the communication between disciplines.
  • Actually deliver something much earlier to production and start accruing some business payoff. Moreover, you should be working in order of business priority, so the most important things are worked on and completed first.
  • Fail softer by delivering part of a project on time even if you’re not able to complete all the planned features by the theoretical end date — as opposed to failing completely to deliver anything on time in a waterfall model.

In the Extreme Programming days we talked a lot about the concept of done, done, done as opposed to being theoretically “code complete.”

 

Rev’ing up feedback loops

After coming back to waterfall development the past couple years, the most frustrating thing to me is how slow the feedback cycles are between doing or deciding something and knowing whether or not anything you did was really correct. It also turns out that having a roomful of people staring at a design specification document in a formal review doesn’t do a great job at spotting a lot of problems that present themselves later in the project when actual code is being written.

Any iterative process helps tighten feedback cycles and enables you to fix issues earlier. What Agile brought to the table was an emphasis on better, faster, more fine-grained feedback cycles through project automation and practices like continuous integration and test driven development.

More on the engineering practices in later posts. Maybe. It literally took me 5 years to go from an initial draft to publishing this post, so don’t hold your breath.

 

What I think is wrong with classic waterfall development

Potentially a lot. Your mileage may vary from mine (my experiences with formal waterfall processes have been very negative), and I’m sure some of you are succeeding just fine with waterfall processes of one sort or another.

At its most extreme, I’ve observed the following traits in shops with formal waterfall methods, all of which I think work against successful delivery.

Over-specialization of personnel

I’m not even just talking about developers vs testers. I mean having some developers or groups who govern the central batch scheduling infrastructure, others that own integrations, a front end team maybe, and obviously the centralized database team. Having folks specialized in their roles like this means that it takes more people to cover all the necessary skillsets on a project, and that means a lot more effort to communicate and collaborate between people who aren’t on the same teams or even in the same organizations. That’s a lot of potential project overhead, and it makes your project less flexible because you’re bound by the constraints of your external dependencies.

The other problem with over-specialization is the fallacy of local optimization problem, because many folks only have purview over part of a project and don’t necessarily see or understand the whole project.

Formal, intermediate documents

I’m not here to start yet another argument over how much technical documentation is enough. What I will argue about is a misplaced focus on formal, intermediate documents as a quality or process gate, especially if those intermediate documents are really meant to serve as the primary communication between architects, analysts, and developers. One, because that’s a deeply inefficient way to communicate. Two, because those documents are always wrong since they’re written too early. Three, because it’s just a lot of overhead to author those documents to get through a formal process gate, effort that could be better spent getting real feedback about the intended system or project.

Slow feedback cycles

Easily the thing I hate the most about “true” waterfall projects is the length of time between getting adequate feedback between the early design and requirements documents and an actually working system getting tested or going through some user acceptance testing from real users. This is an especially pernicious issue if you hew to the idea that formal testing can only start after all designed features are complete.

The killer problem in larger waterfall projects over my career is that you’re always trying to remember how some piece of code you wrote 3-6 months ago works when problems finally surface from real testing or usage.

 

Summary

I’d absolutely choose some sort of disciplined Agile process with solid engineering practices over any kind of formal waterfall process any day of the week. I think waterfall processes do a wretched job managing project risks by the slow, ineffective feedback cycles and waste a lot of energy on unevenly useful intermediate documentation.

Agile hasn’t always been pretty for me though, see The Surprisingly Valuable and Lasting Lessons I Learned from a Horrible Project about an absolutely miserable XP project.

Ironically, the most successful project I’ve ever worked on from a business perspective was technically a waterfall project (a certain 20-something first time technical lead basically ignored the official process), but process wasn’t really an issue because:

  • We had a very good relationship with the business partners including near constant feedback about what we were building. That project is still the best collaboration I’ve ever experienced with the actual business experts
  • There was a very obvious problem to solve for the business that was likely to pay off quickly
  • Our senior management did a tremendous job being a “shit umbrella” to keep the rest of the organization from interfering with us
  • It was a short project

And projects like that just don’t come around very often, so I wouldn’t read much into the process being the deciding factor in its success.