
The Fundamentals of Continuous Software Design

Continuing my recent theme of remembering why we originally thought Agile was a good thing before it devolved into whatever it is now.

I had the opportunity over the weekend to speak online as part of CouchCon Live. My topic was to revisit some of the principles of designing software inside of an adaptive Agile Software process in a talk entitled “The Fundamentals of Continuous Software Design.”

The video has been posted on YouTube, and the slides are available on SlideShare.

I went back through the Agile greatest hits with:

  • YAGNI
  • Do the Simplest Thing that Could Possibly Work
  • The Last Responsible Moment
  • Reversibility in Software Architecture
  • Designing for Testability
  • How the full development team should be involved throughout
  • And why I think contemporary Scrum is the Scrappy Doo of Agile Software Development


Remembering Why Agile was a Big Deal

Earlier this year I recorded a podcast for Software Engineering Radio with Jeff Doolittle on Agile versus Waterfall Software Development where we discussed the vital differences between Agile development and traditional waterfall, and why those differences still matter. This post is the long-awaited companion piece I couldn’t manage to finish before the podcast posted. I might write a follow up post on some software engineering practices like continuous design later so that I can strictly focus here on softer process related ideas.

I started my software development career writing Shadow IT applications and automation for my engineering group. No process, no real practices either, and lots of code. As you’d probably guess, I later chafed very badly at the formal waterfall processes and organizational model of my first “real” software development job for plenty of reasons I’ll detail later in this post.

During that first real IT job, I started to read and learn about alternative iterative development processes like the Rational Unified Process (RUP), but I was mostly inspired by the brand new, shiny Extreme Programming (XP) method. After tilting at windmills a bit at my waterfall shop to try to move us away from the waterfall to RUP (Agile would have been a bridge too far), I jumped to a consulting company that was an influential, early adopter of XP. I’ve worked almost exclusively with Agile processes and inside more or less Agile organizational models ever since — until recently. In the past couple of years I’ve been involved with a long running project in a classic waterfall shop, which has absolutely reinforced my belief in the philosophical approaches to software development that came out of Agile Software (or Lean Programming). I also think some of those ideas are somewhat lost in contemporary Scrum’s monomaniacal focus on project management, so here’s a long blog post talking about what I think really was vital in the shift from waterfall to Agile development.

First, what I believe in

A consistent theme in all of these topics is trying to minimize the amount of context switching throughout a project and anybody’s average day. I think that Agile processes made a huge improvement in that regard over the older waterfall models, and that by itself is enough to justify staking waterfall through the heart permanently in my book.

Self-contained, multi-disciplinary teams

My strong preference, and a constant recommendation to our clients, is to favor self-contained, multi-disciplinary teams centered around specific projects or products. What I mean here is that the project is done by a team that is completely dedicated to working on just that project. That team should ideally have every possible skillset it needs to execute the project so that there’s reduced need to coordinate with external teams. It’s whatever mix you need of developers, testers, and analysts, all working together on a common goal.

In the later sections on what I think is wrong with the waterfall model, I bring up the fallacy of local optimization. In terms of a software project, we need to focus on optimizing the entire process of delivering working software, not just on what it takes to code a feature, write tests, or quickly check off a process quality gate. A self-contained team is hopefully all focused on delivering just one project or sprint, so there should be more opportunity to optimize the delivery. This generally means things like developers purposely thinking about how to support the testers on their team, or using practices like Executable Specifications for requirements that shorten the development and testing time overall, even if they’re more work for the original analysts upfront.

A lot of the overhead of software projects is communication between project team members. To be effective, you need to improve the quality of that communication so that team members have a shared understanding of the project. I also think you’re a lot less brittle as a project if you have fewer people to communicate with. In a classic waterfall shop, you may need to involve a lot of folks from external projects who are simultaneously working on several other initiatives and have a different schedule than your project’s schedule. That tends to force communication into either occasional meetings (which impact productivity on their own) or asynchronous communication via emails or intermediate documentation like design specifications.

Let’s step back and think about the various types of communication you use with your colleagues, and what actually works. Take a look at this diagram from Scott Ambler’s Communication on Agile Software Teams essay (originally adapted from some influential writings by Alistair Cockburn that I couldn’t quite find this morning):

(Figure: modes of communication on Agile teams, from Scott Ambler’s essay. Richer, face-to-face communication sits at the upper right of the graph; documentation-heavy communication sits at the lower left.)

In a self-contained, multi-disciplinary team, you’re much more likely to be using the more effective forms of communication in the upper right-hand area of the graph. In a waterfall model where different disciplines (testers, developers, analysts) may be working in different teams and on different projects at any one time, the communication mostly happens in the less effective lower left of this diagram.

I think that a self-contained team suffers much less from context switching, but I’ll cover that in the section on delivering serially.

Another huge advantage to self-contained teams is the flexibility in scheduling and the ability to adapt to changing project circumstances and whatever you’re learning as you go. This is the idea of Reversibility from Extreme Programming:

“If you can easily change your decisions, this means it’s less important to get them right – which makes your life much simpler.” — Martin Fowler

In a self-contained team, your reversibility is naturally higher, and you’re more likely able to adapt and incorporate learning throughout the project. If you’re dependent upon external teams or can’t easily change approved specification documents, you have much lower reversibility.

If you’re interested in Reversibility, I gave a technically focused talk on that at Agile Vancouver several years ago.

I think everything in this section still applies to teams that are focused on a product or family of products.

Looking over my history, I’ve written about this topic quite a bit over the years:

  1. On Software Teams
  2. Call me a Utopian, but I want my teams flat and my team members broad
  3. Self Organizing Teams are Superior to Command n’ Control Teams
  4. Once Upon a Team
  5. The Anti Team
  6. On Process and Practices
  7. Want productivity? Try some team continuity (and a side of empowerment too) – I miss this team.
  8. The Will to be Good
  9. Learning Lessons — Can You Make Mistakes at Work?
  10. Indelible Proof of a Healthy Team

Deliver serially over working in parallel

A huge shift in thinking behind Agile Software Development is simply the idea that the only thing that matters is delivering working software. Not design specifications, not intermediate documents, not process checkpoints passed, but actual working software deployed to production.

In practice, this means that Agile teams seek to deliver completely working features or “vertical slices” of functionality at one time. A team following this approach strives for continuous delivery, constantly making little releases of working software.

Contrast this idea to the old “software as construction” metaphor from my waterfall youth where we generally developed by:

  1. Get the business requirements
  2. Do a high level architecture document
  3. Maybe do a lower level design specification (or do a prototype first and pretend you wrote the spec first)
  4. Design the database model for the new system
  5. Code the data layer
  6. Code the business layer
  7. Code any user interface
  8. Rework 4-6 because you inevitably missed something or realized something new as you went
  9. Declare the system “code complete”
  10. Start formal testing of the entire system
  11. Fix lots of bugs on 4-6
  12. User acceptance testing (hopefully)
  13. Release to production

The obvious problem in this cycle is that you deliver no actual value until the very, very end of the project. You also struggle quite a bit in the later parts of the project because you need to revisit work that was done much earlier, and you often struggle with the context switching that entails.

In contrast, delivering in vertical slices allows you to:

  • Minimize context switching because you’re finishing work much closer to when it’s started. With a multi-disciplinary team, everybody is focused on a small subset of features at any one time, which tends to improve the communication between disciplines.
  • Actually deliver something much earlier to production and start accruing some business payoff. Moreover, you should be working in order of business priority, so the most important things are worked on and completed first.
  • Fail softer by delivering part of a project on time even if you’re not able to complete all the planned features by the theoretical end date — as opposed to failing completely to deliver anything on time in a waterfall model.

In the Extreme Programming days we talked a lot about the concept of done, done, done as opposed to being theoretically “code complete.”


Rev’ing up feedback loops

After coming back to waterfall development the past couple years, the most frustrating thing to me is how slow the feedback cycles are between doing or deciding something and knowing whether or not anything you did was really correct. It also turns out that having a roomful of people staring at a design specification document in a formal review doesn’t do a great job at spotting a lot of problems that present themselves later in the project when actual code is being written.

Any iterative process helps tighten feedback cycles and enables you to fix issues earlier. What Agile brought to the table was an emphasis on better, faster, more fine-grained feedback cycles through project automation and practices like continuous integration and test driven development.

More on the engineering practices in later posts. Maybe. It literally took me 5 years to go from an initial draft to publishing this post, so don’t hold your breath.


What I think is wrong with classic waterfall development

Potentially a lot. Your mileage may vary from mine (my experiences with formal waterfall processes have been very negative), and I’m sure some of you are succeeding just fine with waterfall processes of one sort or another.

At its most extreme, I’ve observed the following traits in shops with formal waterfall methods, all of which I think work against successful delivery. Here’s why I think these traits are problematic.

Over-specialization of personnel

I’m not even just talking about developers vs testers. I mean having some developers or groups who govern the central batch scheduling infrastructure, others that own integrations, a front end team maybe, and obviously the centralized database team. Having folks specialized in their roles like this means that it takes more people involved in a project to have all the necessary skillsets, and that means a lot more effort to communicate and collaborate between people who aren’t in the same teams or even in the same organizations. That’s a lot of potential project overhead, and it makes your project less flexible as you’re bound by the constraints of your external dependencies.

The other problem with over-specialization is the fallacy of local optimization, because many folks only have purview over part of a project and don’t necessarily see or understand the whole project.

Formal, intermediate documents

I’m not here to start yet another argument over how much technical documentation is enough. What I will argue about is a misplaced focus on formal, intermediate documents as a quality or process gate, especially if those intermediate documents are really meant to serve as the primary communication between architects, analysts, and developers. One, because that’s a deeply inefficient way to communicate. Two, because those documents are always wrong because they’re written too early. Three, because it’s just a lot of overhead to go through authoring those documents to get through a formal process gate that could be better spent on getting real feedback about the intended system or project.

Slow feedback cycles

Easily the thing I hate the most about “true” waterfall projects is the length of time between getting adequate feedback between the early design and requirements documents and an actually working system getting tested or going through some user acceptance testing from real users. This is an especially pernicious issue if you hew to the idea that formal testing can only start after all designed features are complete.

The killer problem in larger waterfall projects over my career is that you’re always trying to remember how some piece of code you wrote 3-6 months ago works when problems finally surface from real testing or usage.


Summary

I’d absolutely choose some sort of disciplined Agile process with solid engineering practices over any kind of formal waterfall process any day of the week. I think waterfall processes do a wretched job of managing project risk because of their slow, ineffective feedback cycles, and they waste a lot of energy on unevenly useful intermediate documentation.

Agile hasn’t always been pretty for me though, see The Surprisingly Valuable and Lasting Lessons I Learned from a Horrible Project about an absolutely miserable XP project.

Ironically, the most successful project I’ve ever worked on from a business perspective was technically a waterfall project (a certain 20-something first time technical lead basically ignored the official process), but process wasn’t really an issue because:

  • We had a very good relationship with the business partners including near constant feedback about what we were building. That project is still the best collaboration I’ve ever experienced with the actual business experts
  • There was a very obvious problem to solve for the business that was likely to pay off quickly
  • Our senior management did a tremendous job being a “shit umbrella” to keep the rest of the organization from interfering with us
  • It was a short project

And projects like that just don’t come around very often, so I wouldn’t read much into the process being the deciding factor in its success.


Marten v4.0 Planning Document (Part 1)

As I wrote about a couple weeks ago in a post called Kicking off Marten V4 Development, the Marten team is actively working on the long delayed v4.0 release with planned improvements for performance, the Linq support, and a massive planned jump in capability for the event store functionality. This post is the result of a long comment thread and many discussions within the Marten community.

We don’t yet have a great consensus about the exact direction that the event store improvements are going to go, so I’m strictly focusing on improvements to the Marten internals and the Document Database support. I’ll follow up with a part 2 on the event store as that starts to gel.

If you’re interested in Marten, here are some links:


Pulling it Off

We’ve got the typical problems of needing to address incoming pull requests and bug issues in master while probably needing to have a long lived branch for v4.

As an initial plan, let’s:

  1. Start with the unit testing improvements as a way to speed up the build before we dive into much more of this? This is in progress with about a 25% reduction in test throughput time so far in this pull request
  2. Do a bug sweep v3.12 release to address as many of the tactical problems as we can before branching to v4
  3. Possibly do a series of v4, then v5 releases to do this in smaller chunks? We’ve mostly said do the event store as v4, then Linq improvements as v5 — Nope, full speed ahead with a large v4 release in order to do as many breaking changes as possible in one release
  4. Extract the generic database manipulation code to its own library to clean up Marten, and speed up our builds to make the development work more efficient.
  5. Do the Event Store v4 work in a separate project built as an add on from the very beginning, but leave the existing event store in place. That would enable us to do a lot of work and mostly be able to keep that work in master so we don’t have long-lived branch problems. Break open the event store improvement work because that’s where most of the interest is for this release.

Miscellaneous Ideas

  • Look at some kind of object pooling for the DocumentSession / QuerySession objects?
  • Ditch the document by document type schema configuration where Document A can be in one schema and Document B is in another. Do that, and I think we open the door for multi-tenancy by schema.
  • Eliminate ManagedConnection altogether. I think it results in unnecessary object allocations, and it’s causing more harm than help as it’s been extended over time. After studying that more today, it’s just too damn embedded. At least try to kill off the Execute methods that take in a Lambda. See this GitHub issue.
  • Can we consider ditching < .Net Core or .Net v5 for v4? The probable answer is “no,” so let’s just take this off the table.
  • Do a hunt for classes in Marten marked public that should be internal. Here’s the GitHub issue.
  • Make the exceptions a bit more consistent

Dynamic Code Generation

If you look at the pull request for Document Metadata and the code in Marten.Schema.Arguments, you can see that our dynamic Expression to Lambda compilation code is getting extremely messy, hard to reason about, and difficult to extend.

Idea: Introduce a dependency on LamarCodeGeneration and LamarCompiler. LamarCodeGeneration has a strong model for dynamically generating C# code at runtime. LamarCompiler adds runtime Roslyn support to compile assemblies on the fly and utilities to attach/create these classes. We could stick with Expression to Lambda compilation, but that can’t really handle any kind of asynchronous code without some severe pain and it’s far more difficult to reason about (Jeremy’s note: I’m uniquely qualified to make this statement unfortunately).

What gets dynamically generated today:

  • Bulk importer handling for a single entity
  • Loading entities and tracking entities in the identity map or version tracking

What could be generated in the future:

  • Document metadata properties — but sad trombone, that might have to stay with Expressions if the setters are internal/private :/
  • Much more of the ISelector implementations, especially since there’s going to be more variability when we do the document metadata
  • Finer-grained manipulation of the IIdentityMap

Jeremy’s note: After doing some detailed analysis through the codebase and the spots that would be impacted by the change to dynamic code generation, I’m convinced that this will lead to significant performance improvements by eliminating many existing runtime conditional checks and casts.

Track this work here.

Unit Testing Approach

This is in progress, and going well.

If we introduce the runtime code generation back into Marten, that’s unfortunately a non-trivial “cold start” testing issue. To soften that, I suggest we get a lot more aggressive with reusable xUnit.Net class fixtures to share generated code between tests, cut way down on the sheer number of database calls by not frequently re-checking the schema configuration, and reduce other DocumentStore overhead (see the sketch after the list below).

A couple other points about this:

  • We need to create more unique document types so we’re not having to use different configurations for the same document type. This would enable more reuse inside the testing runtime
  • Be aggressive with separate schemas for different configurations
  • We could possibly turn on xUnit.net parallel test running to speed up the test cycles
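
To make the class fixture idea concrete, here’s a minimal sketch of what that could look like with xUnit.Net and Marten. The connection string, document types, and assertions here are placeholders:

public class DefaultStoreFixture : IDisposable
{
    public DocumentStore Store { get; }

    public DefaultStoreFixture()
    {
        // Built once per test class, so schema checks and any
        // generated code get reused across all tests in the class
        Store = DocumentStore.For("the shared test database connection string");
    }

    public void Dispose() => Store.Dispose();
}

public class query_tests : IClassFixture<DefaultStoreFixture>
{
    private readonly DefaultStoreFixture _fixture;

    public query_tests(DefaultStoreFixture fixture)
    {
        _fixture = fixture;
    }

    [Fact]
    public void can_load_a_document()
    {
        // Reuses the shared DocumentStore instead of spinning
        // up a brand new one for this single test
        using var session = _fixture.Store.QuerySession();
        // ... query and assert against the session
    }
}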

Document Metadata

  • From the feedback on GitHub, it sounds like there’s a desire to extend the existing metadata to track data like correlation identifiers, transaction ids, user ids, etc. To make this data easy to query on, I would prefer that this data be separate columns in the underlying storage
  • Use the configuration and tests from pull request for Document Metadata, but use the Lamar-backed dynamic code generation from the previous section to pull this off.
  • I strongly suggest using a new dynamic codegen model for the ISelector objects that would be responsible for putting Marten’s own document metadata like IsDeleted or TenantId or Version onto the resolved objects (but that falls apart if we have to use private setters)
  • I think we could expand the document metadata to allow for user defined properties like “user id” or “transaction id” much the same way we’ll do for the EventStore metadata. We’d need to think about how we extend the document tables and how metadata is attached to a document session

My thought is to designate one (or maybe a few?) .Net type as the “metadata type” for your application like maybe this one:

    public class MyMetadata
    {
        public Guid CorrelationId { get; set; }
        public string UserId { get; set; }
    }

Maybe that gets added to the StoreOptions something like:

var store = DocumentStore.For(x => {
    // other stuff

    // This would direct Marten to add extra columns to
    // the documents and events for the metadata properties
    // on the MyMetadata type.

    // This would probably be a fluent interface to optionally fine tune
    // the storage and applicability -- i.e., to all documents, to events, etc.
    x.UseMetadataType<MyMetadata>();
});

Then at runtime, you’d do something like:

session.UseMetadata<MyMetadata>(metadata);

Either through docs or through the new, official .Net Core integration, we have patterns to have that automatically set upon new DocumentSession objects being created from the IoC container to make the tracking seamless.
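
As a purely hypothetical sketch of that integration (neither UseMetadata() nor this exact registration exists today), the IoC wiring might look something like:

// Hypothetical: builds sessions through the container so the
// ambient metadata for the current request is always attached
services.AddScoped<IDocumentSession>(provider =>
{
    var store = provider.GetRequiredService<IDocumentStore>();
    var session = store.LightweightSession();

    // MyMetadata would be registered per-request somewhere upstream
    var metadata = provider.GetRequiredService<MyMetadata>();
    session.UseMetadata(metadata);

    return session;
});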

Extract Generic Database Helpers to its own Library

  • Pull everything to do with Schema object generation, difference detection, and DDL generation to a separate library (IFeatureSchema, ISchemaObject, etc.). Mostly to clean out the main library, but also because this code could easily be reused outside of Marten. Separating it out might make it easier to test and extend that functionality, which is something that occasionally gets requested. There’s also the possibility of further breaking that into abstractions and implementations for the long run of getting us ready for Sql Server or other database engine support. The tests for this functionality are slow and rarely change. It would be advantageous to get this out of the main Marten library and testing project.
  • Pull the ADO.Net helper code like CommandBuilder and the extension methods into a small helper library somewhere else (I’m nominating the Baseline repository). This code is copied around to other projects as it is, and it’s another way of getting stuff out of the main library and the test suite.

Track this work in this GitHub issue.

F# Improvements

We’ll have a virtual F# subcommittee watching this work for F#-friendliness.

HostBuilder Integration

We’ll bring Joona-Pekka Kokko’s ASP.Net Core integration library into the main repository and make that the officially blessed and documented recipe for integrating Marten into .Net Core applications based on the HostBuilder in .Net Core. I suppose we could also multi-target IWebHostBuilder for ASP.Net Core 2.*.

That HostBuilder integration could be extended to:

  • Optionally set up the Async Daemon in an IHostedService — more on this in the Event Store section
  • Optionally register some kind of IDocumentSessionBuilder that could be used to customize session construction?
  • Have some way to have container resolved IDocumentSessionListener objects attached to IDocumentSession. This is to have an easy recipe for folks who want events broadcast through messaging infrastructure in CQRS architectures
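
As a rough sketch of the recipe we’re after, based on the shape of the existing community library (treat AddMarten() and its exact signature as illustrative):

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureServices((context, services) =>
        {
            // Registers IDocumentStore (and likely scoped
            // IDocumentSession / IQuerySession) with the container
            services.AddMarten(options =>
            {
                options.Connection(context.Configuration.GetConnectionString("marten"));
            });
        });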

See the GitHub issue for this.

Command Line Support

The Marten.CommandLine package already uses Oakton for command line parsing. For easier integration in .Net Core applications, we could shift that to using the Oakton.AspNetCore package so the command line support can be added to any ASP.Net Core 2.* or .Net Core 3.* project just by installing the Nuget package. This might simplify the usage because you’d no longer need a separate project for the command line support.

There are some long standing stories about extending the command line support for the event store projection rebuilding. I think that would be more effective if it switches over to Oakton.AspNetCore.
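
If we make that switch, the application entry point would delegate to Oakton roughly like this, based on the documented Oakton.AspNetCore usage:

public class Program
{
    // Oakton takes over command line parsing, so "run" becomes just
    // one of the available commands alongside any custom Marten ones
    public static Task<int> Main(string[] args)
    {
        return CreateHostBuilder(args).RunOaktonCommands(args);
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(web => web.UseStartup<Startup>());
}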

See the GitHub issue

Linq

This is also covered by the Linq Overhaul issue.

  • Bring back the work in the linq branch for the revamped IField model within the Linq provider. This would be advantageous for performance, cleans up some conditional code in the Linq internals, could make the Linq support be aware of Json serialization customizations like [JsonProperty], and probably helps us deal more with F# types like discriminated unions.
  • Completely rewrite the Include() functionality. Use Postgresql Common Table Expression and UNION queries to fetch both the parent and any related documents in one query without needing to do any kind of JOINs that complicate the selectors. There’d be a column for the document type that the code could use to switch. The dynamic code generation would help here. This could finally knock out the long wished for Include() on child collections feature. This work would nuke the InnerJoin stuff in the ISelector implementations, and that would hugely simplify a lot of code.
  • Finer grained code generation would let us optimize the interactions with IdentityMap. For purely query sessions, you could completely skip any kind of interaction with IdentityMap instead of wasting cycles on nullo objects. You could pull out a specific IdentityMap<TEntity, TKey> out of the larger identity map just before calling selectors to avoid some repetitive “find the right inner dictionary” on each document resolved.
  • Maybe introduce a new concept of ILinqDialect where the Expression parsing would just detect what logical thing it finds (like !BoolProperty), and turns around and calls this ILinqDialect to get at a WhereFragment or whatever. This way we could ready ourselves to support an alternative json/sql dialect around JSONPath for Postgresql v12+ and later for Sql Server vNext. I think this would fit into the theme of making the Linq support more modular. It should make the Linq support easier to unit test as we go. Before we do anything with this, let’s take a deep look into the EF Core internals and see how they handle this issue
  • Consider replacing the SelectMany() implementation with Common Table Expression sql statements. That might do a lot to simplify the internal mechanics. Could definitely get us to an n-deep model.
  • Do the Json streaming story because it should be compelling, especially as part of the readside of a CQRS architecture using Marten’s event store functionality.
  • Possibly use a PLV8-based JsonPath polyfill so we could use sql/json immediately in the Linq support. More research necessary.

Partial Updates

Use native postgresql partial JSON updates wherever possible. Let’s do a perf test on that first though.
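
For a sense of what “native” means here, a partial update can lean on Postgresql’s jsonb_set() function instead of rewriting the whole document. A minimal sketch, assuming Marten’s usual mt_doc_* table layout with an id column and a jsonb data column:

// Rewrites a single property inside the stored JSON document
// without round-tripping the entire document through the client
public static Task RenameItem(NpgsqlConnection conn, Guid id, string name)
{
    var cmd = new NpgsqlCommand(
        "update mt_doc_item set data = jsonb_set(data, '{Name}', to_jsonb(:name)) where id = :id",
        conn);
    cmd.Parameters.AddWithValue("name", name);
    cmd.Parameters.AddWithValue("id", id);
    return cmd.ExecuteNonQueryAsync();
}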

Using Postgresql Advisory Locks for Leader Election

If you’re running an application with a substantial workload, or just want some kind of high availability, you’re probably running that application across multiple servers (hereafter called “nodes” because who knows where they’re physically running these days). That’s great and all, but it’s not too uncommon that you’ll need to make some kind of process run on only one of those nodes at any one time.

As an example, the Marten event store functionality has a feature to support asynchronous projection builders called the “async daemon” (because I thought that sounded cool at the time). The async daemon is very stateful, and can only function while running on one node at a time — but it doesn’t have any existing infrastructure to help you manage that. What we know we need to do for the upcoming Marten v4.0 release is to provide “leader election” to make sure the async daemon is actively building projections on only one node, and can be activated or failed over to another node as needed to guarantee that exactly one node is active at all times.

From Wikipedia, Leader Election “is the process of designating a single process as the organizer of some task distributed among several computers.” There’s plenty of existing art to do this, but it’s not for the faint of heart. In the past, I tried to do this with FubuMVC using a custom implementation of the Bully Algorithm. Microsoft’s microservices pattern guidance has some .Net centric approaches to leader election. Microsoft’s new Dapr tool is supposed to support leader election some day.

From my previous experience, building out and especially testing custom election infrastructure was very difficult. As a far easier approach, I’ve used Advisory Locks in Postgresql in Jasper (I’m also using the Sql Server equivalents as well) as what I think of as a “poor man’s leader election.”

An advisory lock in Postgresql is an arbitrary, application-managed lock on a named resource. Postgresql simply tracks these locks as a distributed lock such that only one active client can hold the lock at any one time. These locks can be held either at:

  1. The connection level, such that the lock, once obtained, is held as long as the database connection is open.
  2. The transaction level, such that a lock obtained within the course of one Postgresql transaction is held until the transaction is committed, rolled back, or the connection is lost.
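
For completeness, here’s a minimal sketch of the transaction-scoped flavor, assuming Npgsql. Unlike the connection-scoped version used later in this post, there’s nothing to explicitly release:

public static async Task<bool> TryGetTransactionLock(
    NpgsqlConnection conn, NpgsqlTransaction tx, int lockId)
{
    // pg_try_advisory_xact_lock() returns immediately, and the lock --
    // if acquired -- is released when the transaction commits or rolls back
    var cmd = new NpgsqlCommand("SELECT pg_try_advisory_xact_lock(:id);", conn, tx);
    cmd.Parameters.AddWithValue("id", lockId);
    return (bool) await cmd.ExecuteScalarAsync();
}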

As an example, Jasper‘s “Durability Agent” is a constantly running process in Jasper applications that tries to read and process any messages persisted in a Postgresql or Sql Server database. Since you certainly don’t want a unique message to be processed by more than one node, the durability agent uses advisory locks to try to temporarily take sole ownership of replaying persisted messages, with a workflow similar to this sequence diagram:

(Sequence diagram: transaction scoped advisory lock usage)

That’s working well so far for Jasper, but in Marten v4.0, we want to use the connection scoped advisory lock for leader election of a long running process for the async daemon.

Sample Usage for Leader Election

Before you look at any of these code samples, just know that this is over-simplified to show the concept, isn’t in production, and would require a copious amount of error handling and logging to be production worthy.

For Marten v4.0, we’ll use the per-connection usage to ensure that the new version of the async daemon will only be running on one node (or at least the actual “leader” process that distributes and assigns work across other nodes if we do it well). The async daemon process itself is probably going to be a .Net Core IHostedService that runs in the background.

As just a demonstrator, I’ve pushed up a little project called AdvisoryLockSpike to GitHub just to show the conceptual usage. First let’s say that the actual worker bee process of the async daemon implements this interface:

public enum ProcessState
{
    Active,
    Inactive,
    Broken
}

public interface IActiveProcess : IDisposable
{
    Task<ProcessState> State();
    
    
    // The way I've done this before, the
    // running code does all its work using
    // the currently open connection or at
    // least checks the connection to "know"
    // that it still has the leadership role
    Task Start(NpgsqlConnection conn);
}

Next, we need something around that to actually deal with the mechanics of trying to obtain the global lock and starting or stopping the active process. Since that’s a background process within an application, I’m going to use the built in BackgroundService in .Net Core with this little class:

public class LeaderHostedService<T> : BackgroundService
    where T : IActiveProcess
{
    private readonly LeaderSettings<T> _settings;
    private readonly T _process;
    private NpgsqlConnection _connection;

    public LeaderHostedService(LeaderSettings<T> settings, T process)
    {
        _settings = settings;
        _process = process;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Don't try to start right off the bat
        await Task.Delay(_settings.FirstPollingTime, stoppingToken);
            
        _connection = new NpgsqlConnection(_settings.ConnectionString);
        await _connection.OpenAsync(stoppingToken);
        
        while (!stoppingToken.IsCancellationRequested)
        {
            var state = await _process.State();
            if (state != ProcessState.Active)
            {
                // If you can take the global lock, start
                // the process
                if (await _connection.TryGetGlobalLock(_settings.LockId, cancellation: stoppingToken))
                {
                    await _process.Start(_connection);
                }
            }

            // Start polling again
            await Task.Delay(_settings.OwnershipPollingTime, stoppingToken);
        }

        if (_connection.State != ConnectionState.Closed)
        {
            await _connection.DisposeAsync();
        }

    }
}

To fill in the blanks, the TryGetGlobalLock() method is an extension method helper to call the underlying pg_try_advisory_lock function in Postgresql to try to obtain a global advisory lock for the configured lock id. That extension method is shown below:

// Try to get a global lock with connection scoping
public static async Task<bool> TryGetGlobalLock(this DbConnection conn, int lockId, CancellationToken cancellation = default(CancellationToken))
{
    var c = await conn.CreateCommand("SELECT pg_try_advisory_lock(:id);")
        .With("id", lockId)
        .ExecuteScalarAsync(cancellation);

    return (bool) c;
}

Raw ADO.Net is so verbose and clumsy out of the box that I’ve built up a set of extension methods to streamline its usage, which is why the code above isn’t quite out of the box ADO.Net.
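
Those helpers boil down to something like this sketch (the real versions do more, but this is the gist):

public static DbCommand CreateCommand(this DbConnection conn, string sql)
{
    // Small wrapper to cut down on the ADO.Net ceremony
    var cmd = conn.CreateCommand();
    cmd.CommandText = sql;
    return cmd;
}

public static DbCommand With(this DbCommand cmd, string name, int value)
{
    // Adds a named parameter and returns the command for chaining
    var parameter = cmd.CreateParameter();
    parameter.ParameterName = name;
    parameter.Value = value;
    cmd.Parameters.Add(parameter);
    return cmd;
}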

I’m generally a fan of strongly typed configuration, and .Net Core makes that easy now, so I’ll use this class to represent the configuration:

public class LeaderSettings<T> where T : IActiveProcess
{
    public TimeSpan OwnershipPollingTime { get; set; } = 5.Seconds();
    
    // It's a random number here so that if you spin
    // up multiple nodes at the same time, they won't
    // all collide trying to grab ownership at the exact
    // same time
    public TimeSpan FirstPollingTime { get; set; } 
        = new Random().Next(100, 3000).Milliseconds();
    
    // This would be something meaningful
    public int LockId { get; set; }
    
    public string ConnectionString { get; set; }
}

In this approach, the background services will be constantly polling to try to take over as the async daemon if the async daemon is not active somewhere else. If the current async daemon node fails, the connection will drop, and the global advisory lock is released and ready for another node to take over. We’ll see how this goes, but the early feedback from my own usage in Jasper and from other Marten contributors’ projects is positive. With this approach, we hope to enable teams to use the async daemon on multi-node deployments of their application with just Marten out of the box, without having to have any kind of sophisticated infrastructure for leader election.

Kicking off Marten v4 Development


If you’re not familiar with Marten, it’s a library that allows .Net developers to treat Postgresql as a Document Database and also as an Event Store. For a quick background on Marten, here’s my presentation on Marten from Dotnetconf 2018.

The Marten community is gearing up to work on the long awaited v4.0 release, which I think is going to be the most ambitious release of Marten since the original v1.0 release in September of 2016. Marten is blessed with a strong user community, and we’ve got a big backlog of issues, feature requests, and suggestions for improvements that have been building up for quite some time.

First, some links for anyone who wants to be a part of that conversation:

Now would be a perfect time to add any kind of feedback or requests for Marten. We do have an unofficial “F# advisory board” this time around, but the more the merrier on that side. There’ll be plenty of opportunities for folks to contribute, whether that’s in coding, experimenting with early alphas, or just being heard in the discussions online.

Now, on to the release plans:

Event Sourcing Improvements

The biggest area of focus in Marten v4.0 is likely going to be the event store functionality. At the risk of being (rightly) mocked, I’d sum up our goals here as “Make the Marten Event Store be Web Scale!”

There’s a lot of content on the aforementioned Marten v4.0 discussion, and also on an older GitHub discussion on Event Store improvements for v4.0. The highlights (I think) are:

  • Event metadata (a long running request)
  • Partitioning Support
  • Improved projection support, including the option to project event data to flat database tables
  • A lot of improvements (near rewrite) to the existing support for asynchronous projections including multi-tenancy support, running across multiple application nodes, performance optimizations, and projection rebuild improvements
  • New support for asynchronous projection builders using messaging as an alternative to the polling async daemon. The first cut of this is very likely to be based around Jasper (I’m using this as a way to push Jasper forward too of course) and either Rabbit MQ or Azure Service Bus. If things go well, I’d hope that that expands to at least Jasper + Kafka and maybe parallel support for NServiceBus and MassTransit.
  • Snapshotting
  • The ability to rebuild projections with zero downtime

Linq Support Improvements

I can’t overstate how complex building and maintaining a comprehensive Linq provider can be, just from the sheer number of usage permutations.

Marten v4.0 is going to include a pretty large overhaul of the Linq support. Along the way, we’re hopefully going to:

  • Rewrite the functionality to include related documents altogether and finally support Include() on collections, which has been a big user request for 3+ years.
  • Improve the efficiency of the generated SQL
  • More variable data type behavior within the Linq support. You can read more about the mechanics on this issue. This also includes being quite a bit smarter about the Json serialization and how that varies for different .Net data types. As an example, I think these changes will allow Marten’s Linq support to finally deal with things like F# discriminated union types.
  • Improve the internals by making it more modular and a little more performant in how it handles marshaling data from the database to C# objects.
  • Improve Marten’s ability to query within child collections
  • More flexibility in event stream identifier strategies


Other Improvements

  • Document Metadata improvements, both exposing Marten’s metadata and user customizable metadata for things like user id or correlation identifiers
  • An option to stream JSON directly to .Net Stream objects like HTTP responses without going through serialization first or even a JSON string.
  • Optimizing the Partial update support to use Postgresql native operators when possible
  • Formal support for .Net Core integration through the generic HostBuilder mechanism


Planning

Right now it’s all about the discussions and approach, but that’s going pretty well right now. The first thing I’m personally going to jump into is a round of automated test performance work, just to make everything go faster later.

The hope is that a subsequent V5 release might finally take Marten into supporting other database engines besides Postgresql, with Sql Server being the very obvious next step. We might take some preliminary steps with V4 to make the multi-database engine support be more feasible later with fewer breaking API changes, but don’t hold me to that.

I don’t have any idea about timing yet, sorry.

Jasper’s Efficient and Flexible Roslyn-Powered Execution Pipeline

You’ll need to pull the very latest code from the sample application linked to in this post, because of course I found some bugs by dog-fooding while writing this :/

This is an update to a post from a couple years ago called Jasper’s Roslyn-Powered “Special Sauce.” You can also find more information about Lamar’s code generation and runtime compilation support from my Dynamic Runtime Code with Roslyn talk at London NDC 2019.

The more or less accepted understanding of a “framework” as opposed to a “library” is that a framework calls your application code (the Hollywood Principle). To be useful, frameworks should be dealing with cross cutting infrastructure concerns like security, data marshaling, or error handling. At some point, frameworks have to have some way to call your application code.

Since generics were introduced way back in .Net 2.0, many .Net frameworks have used what I call the “IHandler of T” approach where you are almost inevitably asked to implement a common layer supertype interface like this:

public interface IHandler<T>
{
    Task Handle(T message, IContext context);
}

From a framework author’s perspective, this is easy to implement, and most modern IoC frameworks have reasonably strong support for generic types (like Lamar generic types support for example). Off the top of my head, NServiceBus, MassTransit, and MediatR are examples of this approach (my recollection is that NServiceBus did this first and I distinctly remember Udi Dahan describing this years ago at an ALT.Net event).
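
To make that concrete, application code in this style ends up looking something like the following, where the message and context types are hypothetical stand-ins for whatever a given framework defines:

public class CreateItemHandler : IHandler<CreateItemCommand>
{
    // The framework discovers this class by its closed
    // IHandler<T> interface and calls it for each message
    public Task Handle(CreateItemCommand message, IContext context)
    {
        // application logic goes here
        return Task.CompletedTask;
    }
}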

The other general approach I’ve seen — and what Jasper itself uses — is to have a framework call your code through some kind of dynamic code that adapts the signature of your code to the shape or interface that the framework needs. ASP.Net MVC Core Controller methods are an example of this approach. Here’s a sample controller method to demonstrate this:

public class WithResponseController : ControllerBase
{
    private readonly ICommandBus _bus;

    public WithResponseController(ICommandBus bus)
    {
        _bus = bus;
    } 
    
    // MVC Core calls this method, and uses the signature
    // and attributes like the [FromBody] to "know" how
    // to call this code at runtime
    [HttpPost("/items/create2")]
    public Task Create([FromBody] CreateItemCommand command)
    {
        // Using Jasper as a Mediator, and receive the
        // expected response from Jasper
        return _bus.Invoke(command);
    }
}

To add some more complexity to designing frameworks, there’s also the issue of how you can combine the basic handlers with some sort of Russian Doll middleware approach to handle cross cutting concerns. There are a couple of ways to handle this:

  • If a framework uses the “IHandler of T” approach, folks often try to use the decorator pattern to wrap the inner “IHandler”. This approach can lead to folks crashing and burning by getting way too complex with generic constraints. It can also lead to some pretty severe levels of object allocations and garbage collection thrashing from the sheer number of objects being created and discarded. To add more fuel to the fire, this approach can easily lead to absurdly large stack trace exceptions that can be very intimidating to parse for the average developer.
  • ASP.Net Core’s middleware approach through functional composition. This can also lead to some dramatically bad stack traces and similar issues with object allocation.
  • Do some runtime code generation to piece together the middleware. I believe that NServiceBus does this internally with its version of Behaviors.

Lastly, most .Net web, service bus, or command execution frameworks use some sort of scoped IoC container (nested container in Lamar or StructureMap parlance) per request or command execution to deal with object scoping and cleanup or disposal in their execution pipelines.
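
A minimal sketch of that scoped-container-per-message pattern with Lamar’s nested containers, reusing the hypothetical IHandler abstraction from above:

public async Task ExecuteMessage<T>(IContainer container, T message, IContext context)
{
    // The nested container scopes services (DbContexts, sessions, etc.)
    // to just this one message execution
    using (var nested = container.GetNestedContainer())
    {
        var handler = nested.GetInstance<IHandler<T>>();
        await handler.Handle(message, context);
    } // disposing the nested container disposes any scoped services
}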

Jasper’s Unique Approach

Jasper was originally conceived and planned as a replacement to the older FubuMVC project (though it’s since gone off in different directions, of course). As a framework, some of FubuMVC’s design goals were to:

  1. Minimize the amount of framework artifacts (interfaces, base classes, attributes, et al) in your application code — and it generally succeeded at that
  2. Provide a very strong model for composable middleware (what we called Behaviors) in the runtime pipeline — which also succeeded in terms of capability, but at runtime it was extremely inefficient, object allocations were off the charts, the IoC integration was complex, and the exception stack traces when things went wrong were epically big.

With Jasper, the initial goals were to recreate the “cleanliness” of the application code and the flexibility of FubuMVC’s “Russian Doll” approach, but do so in a way that was much more efficient at runtime. And because this actually matters quite a bit, make sure that any exceptions thrown out of application code while running within Jasper have minimal exception stack traces for easier troubleshooting.

Roslyn introduced support for compiling C# code at runtime. Some time in 2015 I was in the office with some of the old FubuMVC core contributors and drew up on the whiteboard a new approach where we’d generate C# code at application bootstrapping time to weave in both the calls to the application code and designated middleware.

Later on, I expanded that vision to try to also encompass the object construction and cleanup functionality of an IoC container in the same generated code. The result of that initial envisioning has become the combination of Jasper and Lamar, where Jasper actually uses Lamar’s registration model to try to generate and inline the functionality that would normally be done at runtime through an IoC container. The theory here being that the fastest IoC container is no IoC container.

Alright, to make this concrete, let’s see how this plays out in real usage. In my last post, Introducing Jasper as an In Process Command Bus for .Net, I demonstrated a small message handler in a sample Jasper application with this code (I’ve stripped out the original comments to make it smaller):

public class ItemHandler
{
    [Transactional]
    public static ItemCreated Handle(
        CreateItemCommand command,
        ItemsDbContext db)
    {
        var item = new Item
        {
            Name = command.Name
        };

        db.Items.Add(item);

        return new ItemCreated
        {
            Id = item.Id
        };
    }
}

The code above is meant to:

  1. Create a new Item entity based on the incoming CreateItemCommand
  2. Persist that new Item with Entity Framework Core
  3. Publish a new ItemCreated event message to be handled somewhere else (how that happens isn’t terribly important for this blog post)

And lastly, the [Transactional] attribute tells Jasper to apply its transactional middleware and outbox support to the message handler, such that a single database commit will save both the new Item entity and do the “store” part of a “store and forward” operation to send out the cascading ItemCreated event.

Internally, Jasper is going to generate a new class that inherits from this base class (slightly simplified):

public abstract class MessageHandler
{
    public abstract Task Handle(
        IMessageContext context, 
        CancellationToken cancellation
    );
}

For that ItemHandler class shown up above in the sample application, Jasper is generating and compiling through Roslyn this (butt ugly, but remember that it’s generated) class:

    public class InMemoryMediator_Items_CreateItemCommand : Jasper.Runtime.Handlers.MessageHandler
    {
        private readonly Microsoft.EntityFrameworkCore.DbContextOptions _dbContextOptions;

        public InMemoryMediator_Items_CreateItemCommand(Microsoft.EntityFrameworkCore.DbContextOptions dbContextOptions)
        {
            _dbContextOptions = dbContextOptions;
        }



        public override async Task Handle(Jasper.IMessageContext context, System.Threading.CancellationToken cancellation)
        {
            var createItemCommand = (InMemoryMediator.Items.CreateItemCommand)context.Envelope.Message;
            using (var itemsDbContext = new InMemoryMediator.Items.ItemsDbContext(_dbContextOptions))
            {
                // Enroll the DbContext & IMessagingContext in the outgoing Jasper outbox transaction
                await Jasper.Persistence.EntityFrameworkCore.JasperEnvelopeEntityFrameworkCoreExtensions.EnlistInTransaction(context, itemsDbContext);
                var itemCreated = InMemoryMediator.Items.ItemHandler.Handle(createItemCommand, itemsDbContext);
                // Outgoing, cascaded message
                await context.EnqueueCascading(itemCreated);
                // Added by EF Core Transaction Middleware
                var result_of_SaveChangesAsync = await itemsDbContext.SaveChangesAsync(cancellation);
            }

        }

    }

If you want, you can see the raw code at any time by executing the dotnet run -- codegen command from the root of the sample project.

So here’s what I’m claiming is the advantage of Jasper’s approach:

  • Allows your application code to be “clean” of framework artifacts and much more decoupled from Jasper than you can achieve with many other application frameworks like MVC Core
  • Using the diagnostic commands to dump out the generated source code in the application, Jasper can tell you exactly how it’s handling a message at runtime
  • By just generating Poor Man’s Dependency Injection code to build up the EF Core dependency and also to deal with disposing it later, the generated code eliminates any need to use the IoC container at runtime. And even for the very fastest IoC container in the world — and Lamar isn’t a slouch on the performance side of things — pure dependency injection is faster. Do note that Jasper + Lamar can’t do this for every possible message handler and will have to revert to a scoped container per message with service location in some circumstances, usually because of a scoped or transient Lambda service registration or internal types.
  • The generated code minimizes the number of object allocations compared to a typical .Net framework that depends on adapter types
  • Jasper will allow you to use either synchronous or asynchronous handlers as appropriate for your use case, so no wasted keystrokes typing out return Task.CompletedTask; littering up your code.
  • The stack traces are pretty clean and you’ll see effectively nothing related to Jasper in logged exceptions except for the outermost frame where the generated MessageHandler.Handle() method is called.
  • Some other .Net frameworks try to do what Jasper does conceptually by trying to generate Expression trees and compile those down to Func objects. I’ve certainly done that in other tools (Lamar, StructureMap, Marten), but that’s rarified air that most developers can’t deal with. It’s also batshit insane for async heavy code like you’ll inevitably hit with anything that involves IO these days. My theory, yet to be proven, is that by relying on C#, Jasper’s middleware approach will be much more approachable to whatever community Jasper attracts down the line.


For some follow up reading if you’re interested:

Introducing Jasper as an In Process Command Bus for .Net

A couple weeks ago I wrote a blog post called If you want your OSS project to be successful… about trying to succeed with open source development efforts. One of the things I said was “don’t go dark” when you’re working on an OSS project. Not only did I go “dark” on Jasper for quite a while, I finally rolled out its 1.0 release during the worst global pandemic in a century. So all told, Jasper is by no means an exemplary project model for anyone to follow who’s trying to succeed with an OSS project.

This sample application is also explained and demonstrated in the documentation page Jasper as a Mediator.

Jasper is a new open source tool that can be used as an in process “command bus” inside of .Net Core 3 applications. Used locally, Jasper can provide a superset of the “mediator” functionality popularized by MediatR that many folks like using within ASP.Net MVC Core applications to simplify controller code by offloading most of the processing to separate command handlers. Jasper certainly supports that functionality, but also adds rich options for processing commands asynchronously with built in resiliency mechanisms.

Part of the reason why Jasper went cold was waiting for .Net Core 3.0 to be released. With the advent of .Net Core 3.0, Jasper was somewhat re-wired to support the new generic HostBuilder for bootstrapping and configuration. With this model of bootstrapping, Jasper can easily be integrated into any kind of .Net Core application (MVC Core application, web api, windows service, console app, “worker” app) that uses the HostBuilder.

Let’s jump into seeing how Jasper could be integrated into a .Net Core Web API system. All the sample code I’m showing here is on GitHub in the “InMemoryMediator” project. InMemoryMediator uses EF Core with Sql Server as its backing persistence. Additionally, this sample shows off Jasper’s support for the “Outbox” pattern for reliable messaging without having to resort to distributed transactions.

To get started, I generated a project with the dotnet new webapi template. From there, I added some extra Nuget dependencies:

  1. Microsoft.EntityFrameworkCore.SqlServer — because we’re going to use EF Core with Sql Server as the backing persistence for this service
  2. Jasper — this is the core library, and all that you would need to use Jasper as an in process command bus
  3. Jasper.Persistence.EntityFrameworkCore — extension library to add Jasper’s “Outbox” and transactional support to EF Core
  4. Jasper.Persistence.SqlServer — extension library to add persistence for the “Outbox” support
  5. Swashbuckle.AspNetCore — just to add Swagger support

Your First Jasper Handler

Before we get into bootstrapping, let’s just start with how to build a Jasper command handler and how that would integrate with an MVC Core Controller. Keeping to a very simple problem domain, let’s say that we’re capturing, creating, and tracking new Item entities like this:

public class Item
{
    public string Name { get; set; }
    public Guid Id { get; set; }
}

So let’s build a simple Jasper command handler that would process a CreateItemCommand message, persist a new Item entity, and then raise an ItemCreated event message that would be handled by Jasper as well, but asynchronously somewhere off to the side in a different thread. Lastly, we want things to be reliable, so we’re going to introduce Jasper’s integration of Entity Framework Core for “Outbox” support for the event messages being raised at the same time we create new Item entities.

First though, to put things in context, we’re trying to get to the point where our controller classes mostly just delegate to Jasper through its ICommandBus interface and look like this:

public class UseJasperAsMediatorController : ControllerBase
{
    private readonly ICommandBus _bus;

    public UseJasperAsMediatorController(ICommandBus bus)
    {
        _bus = bus;
    }

    [HttpPost("/items/create")]
    public Task Create([FromBody] CreateItemCommand command)
    {
        // Using Jasper as a Mediator
        return _bus.Invoke(command);
    }
}

You can find a lot more information about what Jasper can do as a local command bus in the project documentation.
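
As an aside, Invoke() runs the command inline, meaning the HTTP request doesn’t return until the handler finishes. Jasper’s local command bus also supports queued, asynchronous execution. Here’s a hedged sketch using ICommandBus.Enqueue(); the controller name and route below are hypothetical:

public class EnqueueItemController : ControllerBase
{
    private readonly ICommandBus _bus;

    public EnqueueItemController(ICommandBus bus)
    {
        _bus = bus;
    }

    [HttpPost("/items/enqueue")]
    public Task Enqueue([FromBody] CreateItemCommand command)
    {
        // Queue the command for background processing and
        // return immediately without waiting on the handler
        return _bus.Enqueue(command);
    }
}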

When using Jasper as a mediator, the controller methods become strictly about the mechanics of reading and writing data to and from the HTTP protocol. The real functionality is now in the Jasper command handler for the CreateItemCommand message, as coded with this Jasper Handler class:

public class ItemHandler
{
    // This attribute applies Jasper's transactional
    // middleware
    [Transactional]
    public static ItemCreated Handle(
        // This would be the message
        CreateItemCommand command,

        // Any other arguments are assumed
        // to be service dependencies
        ItemsDbContext db)
    {
        // Create a new Item entity
        var item = new Item
        {
            Name = command.Name
        };

        // Add the item to the current
        // DbContext unit of work
        db.Items.Add(item);

        // This event being returned
        // by the handler will be automatically sent
        // out as a "cascading" message
        return new ItemCreated
        {
            Id = item.Id
        };
    }
}
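
The ItemsDbContext service injected into the handler above is nothing Jasper-specific, just a garden variety EF Core DbContext. The sample’s actual class isn’t reproduced in this post, but a minimal sketch consistent with the usage here would be:

public class ItemsDbContext : DbContext
{
    public ItemsDbContext(DbContextOptions<ItemsDbContext> options)
        : base(options)
    {
    }

    // The Items table that the handler's unit of work writes to
    public DbSet<Item> Items { get; set; }
}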

You’ll probably notice that there’s no interface or mandatory base class usage in the handler code up above. Similar to MVC Core, Jasper will auto-discover the handler classes and message handling methods from your code through type scanning. Unlike MVC Core and every other service bus kind of tool in .Net that I’m aware of, Jasper depends only on naming conventions rather than base classes or interfaces.

The only bit of framework “stuff” at all in the code above is the [Transactional] attribute that decorates the handler class. That applies Jasper’s own middleware for transaction and outbox support around the handling of just that message type. At runtime, when Jasper handles the CreateItemCommand in that handler code up above, it:

  • Sets up an “outbox” transaction with the EF Core ItemsDbContext service being passed into the Handle() method as a parameter
  • Takes the ItemCreated message that “cascades” from the handler method and persists that message with ItemsDbContext so that both the outgoing message and the new Item entity are persisted in the same Sql Server transaction
  • Commits the EF Core unit of work by calling ItemsDbContext.SaveChangesAsync()
  • Assuming that the transaction succeeds, kicks the new ItemCreated message into its internal sending loop to speed it on its way. That outgoing event message could be handled locally in in-memory queues (see the sketch just below) or sent out via external transports like Rabbit MQ or Azure Service Bus
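
To close the loop on that last bullet point, a handler for the cascading ItemCreated event is discovered through the exact same naming conventions shown earlier. Everything in this sketch (the class name and the logging) is hypothetical, but it shows the shape of a local subscriber:

public class ItemCreatedHandler
{
    // Found by type scanning via the "Handler" suffix and
    // the Handle() method name -- no interface or base class
    public void Handle(ItemCreated @event)
    {
        Console.WriteLine($"Item {@event.Id} was created");
    }
}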

If you’re interested in what the ItemHandler code would look like without any of Jasper’s middleware or cascading message conventions, see the section near the bottom of this post called “Do it All Explicitly Controller”.

So that’s the MVC controller and the Jasper command handler; now let’s move on to integrating Jasper into the application.

Bootstrapping and Configuration

This is just an ASP.Net Core application, so you’ll probably be familiar with the generated Program.Main() entry point. To completely utilize Jasper’s extended command line support (really Oakton.AspNetCore), I’ll make some small edits to the out-of-the-box generated file:

public class Program
{
    // Change the return type to Task<int> to communicate
    // success/failure exit codes
    public static Task<int> Main(string[] args)
    {
        return CreateHostBuilder(args)

            // This replaces Build().Start() from the default
            // dotnet new templates
            .RunJasper(args);
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)

            // You can do the Jasper configuration inline with a 
            // Lambda, but here I've centralized the Jasper
            // configuration into a separate class
            .UseJasper<JasperConfig>()

            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}
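
As the comment in CreateHostBuilder() says, the Jasper configuration could also be done inline with a lambda instead of a custom JasperOptions subclass. A quick, hedged sketch of that alternative (assuming the lambda overload of UseJasper(); treat the details as illustrative):

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)

        // Hypothetical inline alternative to UseJasper<JasperConfig>()
        .UseJasper(options =>
        {
            // The same kind of configuration that the
            // JasperConfig class below centralizes
            options.Extensions.UseEntityFrameworkCorePersistence();
        })

        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });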

This isn’t mandatory, but there’s just enough Jasper configuration for this project, what with the outbox support, that I opted to put the Jasper configuration in a new class called JasperConfig that inherits from JasperOptions:

public class JasperConfig : JasperOptions
{
    public override void Configure(IHostEnvironment hosting, IConfiguration config)
    {
        if (hosting.IsDevelopment())
        {
            // In development mode, we're just going to have the message persistence
            // schema objects dropped and rebuilt on app startup so you're
            // always starting from a clean slate
            Advanced.StorageProvisioning = StorageProvisioning.Rebuild;
        }

        // Just the normal work to get the connection string out of
        // application configuration
        var connectionString = config.GetConnectionString("sqlserver");

        // Setting up Sql Server-backed message persistence
        // This requires a reference to Jasper.Persistence.SqlServer
        Extensions.PersistMessagesWithSqlServer(connectionString);

        // Set up Entity Framework Core as the support
        // for Jasper's transactional middleware
        Extensions.UseEntityFrameworkCorePersistence();

        // Register the EF Core DbContext
        // You can register IoC services in this file in addition
        // to any kind of Startup.ConfigureServices() method,
        // but you probably only want to do it in one place or the 
        // other and not both.
        Services.AddDbContext<ItemsDbContext>(
            x => x.UseSqlServer(connectionString),

            // This is important! Using Singleton scoping
            // of the options allows Jasper + Lamar to significantly
            // optimize the runtime pipeline of the handlers that
            // use this DbContext type
            optionsLifetime: ServiceLifetime.Singleton);
    }
}
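
For completeness, the Startup class referenced by UseStartup<Startup>() up above is just the standard MVC Core fare plus the Swashbuckle registration from the dependency list. The sample’s actual Startup isn’t shown in this post, but a minimal sketch would look something like this:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();

        // Swagger support via Swashbuckle.AspNetCore
        services.AddSwaggerGen();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Expose the generated Swagger document
        app.UseSwagger();

        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}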

Returning a Response to the HTTP Request

In the UseJasperAsMediatorController controller, we just passed the command into Jasper and let MVC return an HTTP status code 200 with no other context. If, instead, we wanted to send the ItemCreated message down as a response to the HTTP caller, we could change the controller code to this:

public class WithResponseController : ControllerBase
{
    private readonly ICommandBus _bus;

    public WithResponseController(ICommandBus bus)
    {
        _bus = bus;
    }

    [HttpPost("/items/create2")]
    public Task<ItemCreated> Create([FromBody] CreateItemCommand command)
    {
        // Using Jasper as a Mediator, and receive the
        // expected response from Jasper
        return _bus.Invoke<ItemCreated>(command);
    }
}

“Do it All Explicitly Controller”

Just for a comparison, here’s the CreateItemCommand workflow implemented inline in a controller action with explicit code to handle the Jasper “Outbox” support:

// This controller does all the transactional work and business
// logic all by itself
public class DoItAllMyselfItemController : ControllerBase
{
    private readonly IMessageContext _messaging;
    private readonly ItemsDbContext _db;

    public DoItAllMyselfItemController(IMessageContext messaging, ItemsDbContext db)
    {
        _messaging = messaging;
        _db = db;
    }

    [HttpPost("/items/create3")]
    public async Task Create([FromBody] CreateItemCommand command)
    {
        // Start the "Outbox" transaction
        await _messaging.EnlistInTransaction(_db);

        // Create a new Item entity
        var item = new Item
        {
            Name = command.Name
        };

        // Add the item to the current
        // DbContext unit of work
        _db.Items.Add(item);

        // Publish an event to anyone
        // who cares that a new Item has
        // been created
        var @event = new ItemCreated
        {
            Id = item.Id
        };

        // Because the message context is enlisted in an
        // "outbox" transaction, these outgoing messages are
        // held until the ongoing transaction completes
        await _messaging.Send(@event);

        // Commit the unit of work. This will persist
        // both the Item entity we created above, and
        // also a Jasper Envelope for the outgoing
        // ItemCreated message
        await _db.SaveChangesAsync();

        // After the DbContext transaction succeeds, kick out
        // the persisted messages in the context "outbox"
        await _messaging.SendAllQueuedOutgoingMessages();
    }
}

As a huge lesson learned from Jasper’s predecessor project, it’s always possible to easily bypass any kind of Jasper conventional “magic” and write explicit code as necessary.

There’s a lot more to say about Jasper and you can find a *lot* more information on its documentation website. I’ll be back sometime soon with more examples of Jasper, with probably some focus on functionality that goes beyond other mediator tools.

In the next post, I’ll talk about Jasper’s runtime execution pipeline and how it’s very different from other .Net tools with similar functionality (hint: it involves a boatload less generics magic than anything else).