As part of an ongoing JasperFx client engagement, Wolverine (1.9.0) just added some new options for event streaming from Wolverine applications. The immediate need was to support messaging with the MQTT protocol for usage inside of a new system in the “Internet of Things” problem space. Knowing that a different JasperFx client is going to need to support event subscriptions with Apache Kafka, it was also convenient to finally add the much requested option for Kafka support within Wolverine while the similar MQTT work was still fresh in my mind.
While the new MQTT transport option is documented, the Kafka transport documentation is still on the way, so I’m going to focus on that first.
To get started with Kafka within a Wolverine application, add the WolverineFx.Kafka NuGet package to your project. Next, add the Kafka transport option, any message subscription rules, and the topics you want your application to listen to with code like this:
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseKafka("localhost:29092");

        // Just publish all messages to Kafka topics
        // based on the message type (or message attributes)
        // This will get fancier in the near future
        opts.PublishAllMessages().ToKafkaTopics();

        // Or explicitly make subscription rules
        opts.PublishMessage<ColorMessage>()
            .ToKafkaTopic("colors");

        // Listen to topics
        opts.ListenToKafkaTopic("red")
            .ProcessInline();

        opts.ListenToKafkaTopic("green")
            .BufferedInMemory();

        // This will direct Wolverine to try to ensure that all
        // referenced Kafka topics exist at application start up time
        opts.Services.AddResourceSetupOnStartup();
    }).StartAsync();
I’m very sure that these two transports (and shortly a third option for Apache Pulsar) will need to be enhanced when they meet real users and unexpected use cases, but I think there’s a solid foundation ready to go.
In the near future, JasperFx Software will be ready to start offering official support contracts and relationships for both Marten and Wolverine. In the slightly longer term, we’re hoping to create some paid add on products (with support!) for Wolverine for “big, serious enterprise usage.” One of the first use cases I’d like us to tackle with that initiative will be a more robust event subscription capability from Marten’s event sourcing through Wolverine’s messaging capabilities. Adding options especially for Kafka messaging and also for MQTT, Pulsar, and maybe SignalR is an obvious foundational piece to make that a reality.
Hey folks, this is more a brain dump to collect my own thoughts than any kind of tome of accumulated wisdom and experience. Please treat this accordingly, and absolutely chime in on the Critter Stack Discord discussion going on about this right now. I’m also very willing and maybe even likely to change my mind about anything I’m going to say in this post.
There’s been some recent interest and consternation about the combination of Marten with Hot Chocolate as a GraphQL framework. At the same time, I am working with a new JasperFx client who wants to use Hot Chocolate with Marten’s event store functionality behind mutations and projected data behind GraphQL queries.
Long story short, Marten and Hot Chocolate do not mix well without some significant thought and deviation from normal, out of the box Marten usage. Likewise, I’m seeing some significant challenges in using Wolverine behind Hot Chocolate mutations. The rest of this post is a rundown of the issues, sticking points, and possible future ameliorations to make this combination more effective for our various users.
Connections are Sticky in Marten
If you use the out of the box IServiceCollection.AddMarten() mechanism to add Marten into a .NET application, you’re registering Marten’s IQuerySession and IDocumentSession as a Scoped lifetime — which is optimal for usage within short lived ASP.Net Core HTTP requests or within message bus handlers (like Wolverine!). In both of those cases, the session can be expected to have a short lifetime and generally be running in a single thread — which is good because Marten sessions are absolutely not thread safe.
However, for historical reasons (integration with Dapper was a major use case in early Marten usage, so there’s some optimization for that with Marten sessions), Marten sessions have a “sticky” connection lifecycle where an underlying Npgsql connection is retained on the first database query until the session is disposed. Again, if you’re utilizing Marten within ASP.Net Core controller methods or Minimal API calls or Wolverine message handlers or most other service bus frameworks, the underlying IoC container of your application is happily taking care of resource disposal for you at the right times in the request lifecycle.
The last sentence is one of the most important, but poorly understood advantages of using IoC containers in applications in my opinion.
Ponder the following Marten usage:
public static async Task using_marten(IDocumentStore store)
{
    // The Marten query session is IDisposable,
    // and that absolutely matters!
    await using var session = store.QuerySession();

    // Marten opens a database connection at the first
    // need for that connection, then holds on to it
    var doc = await session.LoadAsync<User>("jeremy");

    // other code runs, but the session is still open
    // just in case...

    // The connection is closed as the method exits
    // and the session is disposed
}
The problem with Hot Chocolate comes in because Hot Chocolate tries to parallelize data fetching when you get multiple queries in one GraphQL request — and since that query batching was pretty well the raison d’être for GraphQL in the first place, you should assume that’s quite common!
Now, consider a naive usage of a Marten session in a Hot Chocolate query:
public async Task<SomeEntity> GetEntity(
    [Service] IQuerySession session,
    Input input)
{
    // load data using the session
}
Without taking some additional steps to serialize access to the IQuerySession across Hot Chocolate queries, you will absolutely hit concurrency errors when Hot Chocolate tries to parallelize data fetching. You can beat this by either forcing Hot Chocolate to serialize access like so:
builder.Services
    .AddGraphQLServer()

    // Serialize access to the IQuerySession within Hot Chocolate
    .RegisterService<IQuerySession>(ServiceKind.Synchronized);
or by making the session lifetime in your container Transient by doing this:
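The original snippet for that registration isn’t shown here, but a minimal sketch might look like the following. To be clear, this is my assumption about one way to override Marten’s default Scoped registrations after the standard AddMarten() call, not an official recipe:

```csharp
// A sketch only: re-register Marten's session types with a Transient
// lifetime after the standard AddMarten() registration. The factory
// lambdas here are assumptions for illustration.
builder.Services.AddMarten(opts =>
{
    opts.Connection(connectionString);
});

// Each resolution now gets its own session, which the container
// will still dispose at the end of its (transient) lifetime
builder.Services.AddTransient<IQuerySession>(
    s => s.GetRequiredService<IDocumentStore>().QuerySession());
builder.Services.AddTransient<IDocumentSession>(
    s => s.GetRequiredService<IDocumentStore>().LightweightSession());
```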
The first choice will potentially slow down your GraphQL endpoints by serializing access to the IQuerySession while fetching data. The second choice is a non-idiomatic usage of Marten that can foul up Marten usage within non-GraphQL operations, since you could end up using separate Marten sessions when you really meant to be using a shared instance.
For Marten V7, we’re going to strongly consider some kind of query runner that does not have sticky connections for the express purpose of simplifying Hot Chocolate + Marten integration, but I can’t promise any particular timeline for that work. You can track that work here though.
Multi-Tenancy and Session Lifecycles
Multi-Tenancy throws yet another spanner into the works. Consider the following Hot Chocolate query method:
public IQueryable<User> GetUsers(
    [Service] IDocumentStore documentStore, [GlobalState] string tenant)
{
    using var session = documentStore.LightweightSession(tenant);
    return session.Query<User>();
}
Assuming that you’ve got some kind of Hot Chocolate interceptor to detect the tenant id for you, and that value is communicated through Hot Chocolate’s global state mechanism, you might think to open a Marten session directly like the code above. That code above will absolutely not work under any kind of system load because it’s putting you into a damned if you do, damned if you don’t situation. If you dispose the session before this method completes, the IQueryable execution will throw an ObjectDisposedException when Hot Chocolate tries to execute the query. If you *don’t* dispose the session, the IoC container for the request scope doesn’t know about it, so can’t dispose it for you and Marten is going to be hanging on to the open database connection until garbage collection comes for it — and under a significant load, that means your system will behave very badly when the database connection pool is exhausted!
What we need is some way for sessions to be created for the right tenant for the current request, but with the session tracked somehow so that the scoped IoC container can clean up the open sessions at the end of the request. As a first pass, I’m using a crude approach, starting with this service that’s registered with the IoC container with a Scoped lifetime:
/// <summary>
/// This will be Scoped in the container per request, "knows" what
/// the tenant id for the request is. Also tracks the active Marten
/// session
/// </summary>
public class ActiveTenant : IDisposable
{
    public ActiveTenant(IHttpContextAccessor contextAccessor, IDocumentStore store)
    {
        if (contextAccessor.HttpContext is not null)
        {
            // Try to detect the active tenant id from
            // the current HttpContext
            var context = contextAccessor.HttpContext;
            if (context.Request.Headers.TryGetValue("tenant", out var tenant))
            {
                var tenantId = tenant.FirstOrDefault();
                if (tenantId.IsNotEmpty())
                {
                    Session = store.QuerySession(tenantId!);
                }
            }
        }

        Session ??= store.QuerySession();
    }

    public IQuerySession Session { get; }

    public void Dispose()
    {
        Session.Dispose();
    }
}
Now, rewrite the Hot Chocolate query from way up above with:
public IQueryable<User> GetUsers(
    [Service] ActiveTenant tenant)
{
    return tenant.Session.Query<User>();
}
That does still have to be paired with this Hot Chocolate configuration to dodge the concurrent access problems like so:
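That’s the same `ServiceKind.Synchronized` registration from earlier, applied here to the ActiveTenant service (my assumption about which service needs the synchronized registration in this arrangement):

```csharp
builder.Services
    .AddGraphQLServer()

    // Serialize access to the ActiveTenant (and therefore its
    // wrapped Marten session) within Hot Chocolate
    .RegisterService<ActiveTenant>(ServiceKind.Synchronized);
```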
I took some time this morning to research Hot Chocolate’s Mutation model (think “writes”). Since my client is using Marten as an event store and I’m me, I was looking for integration opportunities.
What I’ve found so far has been a series of blockers once you zero in on the fact that Hot Chocolate is built around the possibility of having zero to many mutation messages in any one request — and that that request should be treated as a logical transaction such that every mutation should either succeed or fail together. With that being said, I see the blockers as:
Wolverine doesn’t yet support message batching in any kind of built-in way, and is unlikely to do so before a 2.0 release that isn’t even so much as a glimmer in my eye yet
Hot Chocolate depends on ambient transactions (Boo!) to manage the transaction boundaries. That by itself almost knocks out the out of the box Marten integration and forces you to use more custom session mechanics to enlist in ambient transactions.
The existing Wolverine transactional outbox depends on an explicit “flush” operation after the actual database transaction is committed. That’s handled quite gracefully by Wolverine’s Marten integration in normal usage (in my humble and very biased opinion), but that can’t work across multiple mutations in one GraphQL request.
There is a mechanism to replace the transaction boundary management in Hot Chocolate, but it was very clearly built around ambient transactions and it has a synchronous signature to commit the transaction. Like any sane server side development framework, Wolverine performs the IO intensive database transactional mechanics and outbox flushing operations with asynchronous methods. To fit that within Hot Chocolate’s transactional boundary abstraction would require turning the asynchronous Marten and Wolverine APIs into synchronous calls with GetAwaiter().GetResult(), which is tantamount to painting a bullseye on your chest and daring the Fates to make your application crater with deadlocks under load.
I think at this point, my recommended approach is going to be to forego integrating Wolverine into Hot Chocolate mutations altogether with some combination of:
Don’t use Hot Chocolate mutations at all if there’s no need for the operation batching, and use old fashioned ASP.Net Core with or without Wolverine’s HTTP support
Or document a pattern for using the Decider pattern within Hot Chocolate as an alternative to Wolverine’s “aggregate handler” usage. The goal here is to document a way for developers to keep infrastructure out of business logic code and maximize testability
If using Hot Chocolate mutations, I think there’s a need for a better outbox subscription model directly against Marten’s event store. The approach Oskar outlined here would certainly be a viable start, but I’d rather have an improved version of that built directly into Wolverine’s Marten integration. The goal here is to allow for an Event Driven Architecture which Wolverine supports quite well and the application in question could definitely utilize, but do so without creating any complexity around the Hot Chocolate integration.
In the long, long term:
Add a message batch processing option to Wolverine that manages transactional boundaries between messages for you
Have a significant palaver between the Marten/Wolverine core teams and the fine folks behind Hot Chocolate to iron a bit of this out
My Recommendations For Now
Honestly, I don’t think that I would recommend using GraphQL in general in your system whatsoever unless you’re building some kind of composite user interface where GraphQL would be beneficial in reducing chattiness between your user interface and backing service by allowing unrelated components in your UI to happily batch up requests to your server. Maybe also if you were using GraphQL as a service gateway to combine disparate data sources on the server side in a consistent way, but even then I wouldn’t automatically use GraphQL.
I’m not knowledgeable enough to say how much GraphQL usage would help speed up your user interface development, so take all that I said in the paragraph above with a grain of salt.
At this point I would urge folks to be cautious about using the Critter Stack with Hot Chocolate. Marten can be used if you’re aware of the potential problems I discussed above. Even when we beat the sticky connection thing and the session lifecycle problems, Marten’s basic model of storing JSON in the database is really not optimized for plucking out individual fields in Select() transforms. While Marten does support Select() transforms, it may not be as efficient as the equivalent functionality on top of a relational database model would be. It’s possible that GraphQL might be a better fit with Marten if you were primarily using projected read models purposely designed for client consumption through GraphQL or even projecting event data to flat tables that are queried by Hot Chocolate.
Wolverine with Hot Chocolate? Maybe not so much, if you’d run into any of the transactional boundary issues.
I would urge you to do load testing with any usage of Hot Chocolate as I think, from peeking into its internals, that it’s not the most efficient server side tooling around. Again, that doesn’t mean that you will automatically have performance problems with Hot Chocolate, but I think you should be cautious with its usage.
In general, I’d say that GraphQL creates way too much abstraction over your underlying data storage — and my experience consistently says that abstracted data access can lead to some very poor system performance by both making your application harder to understand and by eliminating the usage of advanced features of your data storage tooling behind least common denominator abstractions.
This took much longer than I wanted it to, as always. I might write a smaller follow up on how I’d theoretically go about building an optimized GraphQL layer from scratch for Marten — which I have zero intention of ever doing, but it’s a fun thought experiment.
Just a coincidence here, but I had this blog post draft in the works right as my friend and Critter Stack collaborator Oskar Dudycz wrote Is the Strategy Pattern an ultimate solution for low coupling? last week (right after Lillard was traded). My goal here is just to create some new blog content by plucking out existing usages of design patterns in my own work. I’m hoping this turns into a low key series to go revisit some older software development concepts and see if they still hold any value.
On Design Patterns
Before discussing the “Strategy Pattern,” let’s go address the obvious elephant in the room. There has been a huge backlash against design patterns in the aftermath of the famous “Gang of Four” book. As some of you will be unable to resist pointing out in the comments, design patterns have been absurdly overused by people who associate complexity in code with well engineered code, leading to such atrocities as “WidgetStrategyAbstractFactoryBuilder” type names showing up in your code. A certain type of very online functional programming enthusiast loves to say “design patterns are just an indication that your OO programming language is broken and my FP language doesn’t need them” — which I think is both an inaccurate and completely useless statement because there are absolutely recurring design patterns in functional programming as well, even if they aren’t the exact same set of design patterns or implemented the exact same way as they were described in the old GoF book from the C++ era.
Backing off of the negativity and cynicism, “design patterns” came out of a desire to build a taxonomy and understanding of reoccurring structural elements that developers were already frequently using to solve problems in code. The hope and goal of this effort was to build a common language that developers could use with each other to quickly describe what they were doing when they used these patterns or even to just understand what some other developer was doing in the code you just inherited. Just as importantly, once we developers have a shared name for these patterns, we could start to record and share a body of wisdom about when and where these patterns were applicable, useful, or harmful.
That’s it, that’s all they ever should have been. The problems always came about when people decided that design patterns were recommendations or goals unto themselves and then tried to maximize their usage of said patterns. Software development being very prone to quick cycles of “ooh, shiny object!” then the inevitable backlash after taking the new shiny object way too far, design patterns got an atrocious reputation across much of our community.
All that being said, here’s what I think is absolutely still valuable about design patterns and why learning about them is worth your time:
They happen in code anyway
It’s useful to have the common language to discuss code with other developers
Recognizing a design pattern in usage can give you some quick insight into how some existing code works or was at least intended to work
There is a large amount of writing out there about the benefits, drawbacks, and applicability of all of the common design patterns
And lastly, don’t force the usage of design patterns in your code.
The Strategy Pattern
One of the simplest and most common design patterns in all of software development is the “Strategy” pattern that:
allows one of a family of algorithms to be selected on-the-fly at runtime.
Gang of Four
Fine, but let’s immediately move to a simple, concrete example. In a recent Wolverine release, I finally added an end to end multi-tenancy feature for Wolverine’s HTTP endpoints for a JasperFx client. One of the key parts of that new feature was for Wolverine to be able to identify what the active tenant was from an HTTP request. From experience, I knew that there were several commonly used ways to do that, and even knew that plenty of folks would want to mix and match approaches like:
Look for a named route argument
Look for an expected request header
Use a named claim for the authenticated user
Look for an expected query string parameter
Key off of the URL subdomain name
And also allow for users to add some completely custom mechanism for who knows what they’ll actually want to do in their own system.
This of course is a pretty obvious usage of the “strategy pattern” where you expose a common interface for the variable algorithms that could be used within the code that ended up looking like this:
/// <summary>
/// Used to create new strategies to detect the tenant id from an HttpContext
/// for the current request
/// </summary>
public interface ITenantDetection
{
    /// <summary>
    /// This method can return the actual tenant id or null to represent "not found"
    /// </summary>
    public ValueTask<string?> DetectTenant(HttpContext context);
}
Behind the scenes, a simple version of that interface for the route argument approach looks like this code:
internal class ArgumentDetection : ITenantDetection
{
    private readonly string _argumentName;

    public ArgumentDetection(string argumentName)
    {
        _argumentName = argumentName;
    }

    public ValueTask<string?> DetectTenant(HttpContext httpContext)
    {
        return httpContext.Request.RouteValues.TryGetValue(_argumentName, out var value)
            ? new ValueTask<string?>(value?.ToString())
            : ValueTask.FromResult<string?>(null);
    }

    public override string ToString()
    {
        return $"Tenant Id is route argument named '{_argumentName}'";
    }
}
Now, you *could* accurately tell me that this whole strategy pattern nonsense with interfaces and such could be accomplished with mere functions (Func<HttpContext, string?> maybe), and you would be absolutely correct. And I’m going to respond by saying that that is still the “Strategy Pattern” in intent and its role within your code — even though the implementation is different than my C# interface up above. When learning and discussing design patterns, I highly recommend you worry more about the roles and intent of functions, methods, or classes than the actual implementation details.
But also, having the interface + implementing class structure makes it easier for a framework like Wolverine to provide meaningful diagnostics and visualizations of how the code is configured. I actually went down a more functional approach in an earlier framework, and went back to being more OO in the framework guts for reasons of traceability and diagnostics. That’s not necessarily something that I think is generally applicable inside of application code though.
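For illustration, here’s a hypothetical delegate-based version of the same idea — nothing from Wolverine itself, just a sketch of the functional flavor of the pattern:

```csharp
// Hypothetical: the route argument "strategy" expressed as a factory for a
// Func<HttpContext, string?> instead of an ITenantDetection implementation
public static class TenantDetectionFunctions
{
    public static Func<HttpContext, string?> ForRouteArgument(string argumentName)
        => context => context.Request.RouteValues.TryGetValue(argumentName, out var value)
            ? value?.ToString()
            : null;
}
```

Same role, same intent, just a function instead of a class — which is exactly the point about patterns being about roles rather than implementation details.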
More Complicated: The Marten LINQ Provider
One of the most important points I need to make about design patterns is to use them defensively as a response to a use case in your system rather than picking out design patterns upfront like you’re ordering off of a menu. As a case in point, I’m going to shift to the Marten LINQ provider code. Consider this simplistic usage of a LINQ query:
public static async Task run_linq_query(IQuerySession session)
{
    var targets = await session.Query<Target>()
        // Filter by a numeric value
        .Where(x => x.Number == 5)

        // Filter by an Enum value
        .Where(x => x.Color == Colors.Blue)
        .ToListAsync();
}
Notice the usage of the Where() clauses to merely filter based on both a numeric value and the value of a custom Color enum on the Target document (a fake document type in Marten for exactly this kind of testing). Even in this example, Marten runs into quite a bit of variance as it tries to create SQL to compare a value to the persisted JSONB field within the PostgreSQL database:
In the case of an integer, Marten can simply compare the JSON field to the actual number
In the case of an Enum value, Marten will need to compare the JSON field to either the numeric value of the Enum value or to the string name for the Enum value depending on the serialization settings for Marten in the application
In the early days of Marten, there was code that did the variation you see above with simple procedural code something like:
if (memberType.IsEnum)
{
    if (serializer.EnumStorage == EnumStorage.AsInteger)
    {
        // create parameter for numeric value of enum
    }
    else
    {
        // create parameter for string name
        // of enum value
    }
}
else
{
    // create parameter for value
}
In some of the comments on Oskar’s recent post on the Strategy pattern (maybe on LinkedIn?), I saw someone point out that they thought moving logic behind strategy pattern interfaces, as opposed to simple, inline procedural code, made code harder to read and understand. I absolutely understand that point of view, and I’ve run across that before too (and definitely caused it myself).
However, that bit of procedural code above? That code started being repeated in a lot of places in the LINQ parsing code. Worse, that nested, branching code was showing up within surrounding code that was already deeply nested and rife with branching logic before you even got into the parameter creation code. Even worse, that repeated procedural code grew in complexity over time as we found more special handling rules for additional types like DateTime or DateTimeOffset.
As a reaction to that exploding complexity, deep code branching, and harmful duplication in our LINQ parsing code, we introduced a couple different instances of the Strategy pattern, including this interface from what will be the V7 release of Marten soon (hopefully):
public interface IComparableMember
{
    /// <summary>
    /// For a member inside of a document, create the WHERE clause
    /// for comparing itself to the supplied value "constant" using
    /// the supplied operator "op"
    /// </summary>
    ISqlFragment CreateComparison(string op, ConstantExpression constant);
}
And of course, there are different implementations of that interface for string members, numeric members, and even separate implementations for Enum values stored as integers in the JSON or stored as strings within the JSON. At runtime, when the LINQ parser sees an expression like Where(x => x.SomeProperty == Value), it works by:
Finding the known, memoized IComparableMember for the SomeProperty member
Calling IComparableMember.CreateComparison("==", Value) to translate the LINQ expression into a SQL fragment
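As a rough sketch, the lookup and dispatch described above might look something like this. To be clear, this is a hypothetical illustration of the mechanism, not Marten’s actual internals:

```csharp
// Hypothetical sketch of memoized strategy lookup + dispatch;
// BuildComparableMember is an assumed helper that picks the right
// IComparableMember implementation based on the member's .NET type
private readonly Dictionary<MemberInfo, IComparableMember> _comparables = new();

public ISqlFragment ParseComparison(MemberInfo member, string op, ConstantExpression constant)
{
    if (!_comparables.TryGetValue(member, out var comparable))
    {
        // Choose the strategy for this member's type once,
        // then cache it for all subsequent queries
        comparable = BuildComparableMember(member);
        _comparables[member] = comparable;
    }

    return comparable.CreateComparison(op, constant);
}
```

All of the enum/string/number variance lives inside the individual IComparableMember implementations, so this calling code never branches on member type at all.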
In the case of the Marten LINQ parsing, introducing the “Strategy” pattern usage did a lot of good to simplify the internal code by removing deeply nested branching logic and by allowing us to more easily introduce support for all new .NET types within the LINQ support by hiding the variation behind abstracted strategies for different .NET types or different .NET methods (string.StartsWith() / string.EndsWith() for example). Using the Strategy pattern also allowed us to remove quite a bit of duplicated logic in the Marten code.
The main takeaways from this more complicated sample:
The Strategy pattern can sometimes improve the maintainability of your code by reducing the need for branching logic within your code. Deeply nested if/then constructs in code are a veritable breeding ground for software bugs. Slimming that down and avoiding “Arrowhead code” can be a big advantage of the Strategy pattern
In some scenarios like the Marten LINQ processor, the Strategy pattern is a way to write much more DRY code that also made the code easier to maintain and extend over time. More on this below.
A Quick Side Note about the Don’t Repeat Yourself Principle
Speaking of backlashes, many developers have a bad taste in their mouths over the Don’t Repeat Yourself principle (DRY) — or differently stated by the old XP community as “once, and only once.” I’ve even seen experienced developers scream that “DRY is not important!” or go so far as to say that trying to write DRY code is a one way street to unnecessary complexity by introducing more and more abstractions in an attempt to reuse code.
I guess I’m going to end with the intellectual dodge that principles like DRY are really just heuristics that may or may not help you think through what a good structure would be for the functionality you’re coding. They are certainly not black and white rules. In some cases, trying to be DRY will make your code more complex than it might be with a little bit more duplication of some simple code. In the case of the Marten LINQ provider however, I absolutely believe that applying a little bit of DRY with our usage of the Strategy pattern made that code significantly more robust, maintainable, and even simpler in many cases.
Sorry there’s no easy set of rules you can always follow to arrive at good code, but if there actually was, then AI is gonna eat all our lunches anyway.
Look for details on official support plans for Marten and/or Wolverine and the rest of the “Critter Stack” from JasperFx Software early next week. If you’re looking at Wolverine and wondering if it’s going to be a viable choice in the long run, just know we’re trying very hard to make it so.
Wolverine had a pretty significant 1.7.0 release on Friday. What’s most encouraging to me was how many community contributions were in this one including pull requests, issues where community members took a lot of time to create actionable reproduction steps, and suggestions from our Wolverine Discord room. This always misses folks, but thank you goes to:
A much better interoperability story for Wolverine and non-Wolverine applications using Rabbit MQ, AWS SQS, or Azure Service Bus. More on this later this week
A lot more diagnostics and explanatory comments in the generated code to unravel the “magic” within Wolverine and the message handler / http endpoint method discovery logic. Much more on this in a later blog post this week
Much more control over the Open Telemetry and message logging that is published by Wolverine to tone down the unnecessary noise that might be happening to some users today. Definitely more on that later this week
I’m working with a couple clients who are using Wolverine, and I can’t say that there are zero problems, but overall I’m very happy with how Wolverine is being received and how it’s working out in real applications so far.
Hey, JasperFx Software is more than just some silly named open source frameworks. We’re also deeply experienced in test driven development, designing for testability, and making test automation work without driving into the ditch with over dependence on slow, brittle Selenium testing. Hit us up about what we could do to help you be more successful in your own test automation or TDD efforts.
I have been working furiously on getting an incremental Wolverine release out this week, with one of the new shiny features being end to end support for multi-tenancy (the work in progress GitHub issue is here) through Wolverine.Http endpoints. I hit a point today where I have to admit that I can’t finish that work today, but I did see the potential for a blog post on the Alba library (also part of JasperFx’s OSS offerings): how I was using Alba today to write integration tests for this new functionality, showing how the sausage is being made, and even working in a test-first manner.
To put the desired functionality in context, let’s say that we’re building a “Todo” web service using Marten for persistence. Moreover, we’re expecting this system to have a massive number of users and want to be sure to isolate data between customers, so we plan on using Marten’s support for using a separate database for each tenant (think user organization in this case). Within that “Todo” system, let’s say that we’ve got a very simple web service endpoint to just serve up all the completed Todo documents for the current tenant like this one:
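The endpoint code itself isn’t reproduced here, but a sketch of that kind of Wolverine.Http endpoint might look like the following. The route, document type, and property names are my assumptions for illustration:

```csharp
public static class TodoEndpoints
{
    // Note the "tenant" route argument that the endpoint
    // method itself never consumes
    [WolverineGet("/todoitems/{tenant}")]
    public static Task<IReadOnlyList<Todo>> GetComplete(IQuerySession session)
        => session.Query<Todo>()
            .Where(x => x.IsComplete)
            .ToListAsync();
}
```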
Now, you’ll notice that there is a route argument named “tenant” that isn’t consumed at all by this web api endpoint. What I want Wolverine to do in this case is to infer that the value of that “tenant” value within the route is the current tenant id for the request, and quietly select the correct Marten tenant database for me without me having to write a lot of repetitive code.
Just a note, all of this is work in progress and I haven’t even pushed the code at the time of writing this post. Soon. Maybe tomorrow.
Stepping into the bootstrapping for this web service, I’m going to add these new lines of code to the Todo web service’s Program file to teach Wolverine.HTTP how to handle multi-tenancy detection for me:
// Let's add in Wolverine HTTP endpoints to the routing tree
app.MapWolverineEndpoints(opts =>
{
// Letting Wolverine HTTP automatically detect the tenant id!
opts.TenantId.IsRouteArgumentNamed("tenant");
// Assert that the tenant id was successfully detected,
// or pull the rip cord on the request and return a
// 400 w/ ProblemDetails
opts.TenantId.AssertExists();
});
So that’s some of the desired, built in multi-tenancy features going into Wolverine.HTTP 1.7 some time soon. Back to the actual construction of these new features and how I used Alba this morning to drive the coding.
I started by asking around on social media about what other folks used as strategies to detect the tenant id in ASP.Net Core multi-tenancy, and came up with this list (plus a few other options):
Use a custom request header
Use a named route argument
Use a named query string value (I hate using the query string myself, but like cockroaches or scorpions in our Central Texas house, they always sneak in somehow)
Use an expected Claim on the ClaimsPrincipal
Mix and match the strategies above because you’re inevitably retrofitting this to an existing system
Use sub domain names (I’m arbitrarily skipping this one for now just because it was going to be harder to test and I’m pressed for time this week)
Once I saw a little bit of consensus on the most common strategies (and thank you to everyone who responded to me today), I jotted down some tasks in GitHub-flavored markdown (I *love* this feature) on what the configuration API would look like and my guesses for development tasks:
- [x] `WolverineHttpOptions.TenantId.IsRouteArgumentNamed("foo")` -- creates a policy
- [ ] `[TenantId("route arg")]`, or make `[TenantId]` on a route parameter for one offs. Will need to throw if not a route argument
- [x] `WolverineHttpOptions.TenantId.IsQueryStringValue("key")` -- creates policy
- [x] `WolverineHttpOptions.TenantId.IsRequestHeaderValue("key")` -- creates policy
- [x] `WolverineHttpOptions.TenantId.IsClaimNamed("key")` -- creates policy
- [ ] New way to add custom middleware that's first inline
- [ ] Documentation on custom strategies
- [ ] Way to register the "preprocess context" middleware methods
- [x] Middleware or policy that blows it up with no tenant id detected. Use ProblemDetails
- [ ] Need an attribute to opt into tenant id is required, or tenant id is NOT required on certain endpoints
Knowing that I was going to need to quickly stand up different configurations of a test web service’s IHost, I started with this skeleton that I hoped would make the test setup relatively easy:
public class multi_tenancy_detection_and_integration : IAsyncDisposable, IDisposable
{
private IAlbaHost theHost;
public void Dispose()
{
theHost.Dispose();
}
// The configuration of the Wolverine.HTTP endpoints is the only variable
// part of the test, so isolate all this test setup noise here so
// each test can more clearly communicate the relationship between
// Wolverine configuration and the desired behavior
protected async Task configure(Action<WolverineHttpOptions> configure)
{
var builder = WebApplication.CreateBuilder(Array.Empty<string>());
builder.Services.AddScoped<IUserService, UserService>();
// Haven't gotten around to it yet, but there'll be some end to
// end tests in a bit from the ASP.Net request all the way down
// to the underlying tenant databases
builder.Services.AddMarten(Servers.PostgresConnectionString)
.IntegrateWithWolverine();
// Defaults are good enough here
builder.Host.UseWolverine();
// Setting up Alba stubbed authentication so that we can fake
// out ClaimsPrincipal data on requests later
var securityStub = new AuthenticationStub()
.With("foo", "bar")
.With(JwtRegisteredClaimNames.Email, "guy@company.com")
.WithName("jeremy");
// Spinning up a test application using Alba
theHost = await AlbaHost.For(builder, app =>
{
app.MapWolverineEndpoints(configure);
}, securityStub);
}
public async ValueTask DisposeAsync()
{
// Hey, this is important!
// Make sure you clean up after your tests
// to make the subsequent tests run cleanly
await theHost.StopAsync();
}
}
Now, the intermediate step of tenant detection even before Marten itself gets involved is to analyze the HttpContext for the current request, try to derive the tenant id, then set the MessageContext.TenantId in Wolverine for this current request — which Wolverine’s Marten integration will use a little later to create a Marten session pointing at the correct database for that tenant.
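To make that detection step concrete, here’s a tiny sketch of what the route-argument strategy conceptually does before the value gets handed off to Wolverine. The helper class and names here are hypothetical illustrations, not Wolverine’s actual internals:

```csharp
using System.Collections.Generic;

// Hypothetical sketch of the route-argument tenant detection strategy.
// Wolverine's real implementation weaves equivalent logic into each
// endpoint's middleware chain, then assigns the detected value to the
// Wolverine MessageContext.TenantId for the rest of the request.
public static class TenantDetection
{
    // Try to find the tenant id in the request's route values,
    // returning null when the named route argument is missing
    public static string? FromRouteArgument(
        IReadOnlyDictionary<string, object?> routeValues,
        string routeArgumentName)
    {
        return routeValues.TryGetValue(routeArgumentName, out var value)
            ? value?.ToString()
            : null;
    }
}
```

So for a request matching `/tenant/route/chartreuse`, the route values would contain `tenant = "chartreuse"`, and that string becomes the tenant id for the request.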
Just to measure the tenant id detection — because that’s what I want to build and test first before even trying to put everything together with a real database too — I built these two simple GET endpoints with Wolverine.HTTP:
public static class TenantedEndpoints
{
[WolverineGet("/tenant/route/{tenant}")]
public static string GetTenantIdFromRoute(IMessageBus bus)
{
return bus.TenantId;
}
[WolverineGet("/tenant")]
public static string GetTenantIdFromWhatever(IMessageBus bus)
{
return bus.TenantId;
}
}
That folks is the scintillating code that brings droves of readership to my blog!
Alright, so now I’ve got some support code for the “Arrange” and “Assert” part of my Arrange/Act/Assert workflow. To finally jump into a real test, I started with detecting the tenant id with a named route pattern using Alba with this code:
[Fact]
public async Task get_the_tenant_id_from_route_value()
{
// Set up a new application with the desired configuration
await configure(opts => opts.TenantId.IsRouteArgumentNamed("tenant"));
// Run a web request end to end in memory
var result = await theHost.Scenario(x => x.Get.Url("/tenant/route/chartreuse"));
// Make sure it worked!
// ZZ Top FTW! https://www.youtube.com/watch?v=uTjgZEapJb8
result.ReadAsText().ShouldBe("chartreuse");
}
The code itself is a little wonky, but I had that quickly working end to end. I next proceeded to the query string strategy like this:
[Fact]
public async Task get_the_tenant_id_from_the_query_string()
{
await configure(opts => opts.TenantId.IsQueryStringValue("t"));
var result = await theHost.Scenario(x => x.Get.Url("/tenant?t=bar"));
result.ReadAsText().ShouldBe("bar");
}
Hopefully you can see from the two tests above how that configure() method already helped me quickly write the next test. Sometimes — but not always, so be careful with this — the best thing you can do is to first invest in a test harness that makes subsequent tests more declarative, quicker to write mechanically, and easier to read later.
Next, let’s go to the request header strategy test:
[Fact]
public async Task get_the_tenant_id_from_request_header()
{
await configure(opts => opts.TenantId.IsRequestHeaderValue("tenant"));
var result = await theHost.Scenario(x =>
{
x.Get.Url("/tenant");
// Alba is helping set up the request header
// for me here
x.WithRequestHeader("tenant", "green");
});
result.ReadAsText().ShouldBe("green");
}
Easy enough, and hopefully you see how Alba helped me get the preconditions into the request quickly in that test. Now, let’s try a slightly more complicated test with the Claim strategy, where I first ran into a little trouble:
[Fact]
public async Task get_the_tenant_id_from_a_claim()
{
await configure(opts => opts.TenantId.IsClaimTypeNamed("tenant"));
var result = await theHost.Scenario(x =>
{
x.Get.Url("/tenant");
// Add a Claim to *only* this request
x.WithClaim(new Claim("tenant", "blue"));
});
result.ReadAsText().ShouldBe("blue");
}
I hit a little friction because I didn’t have Alba set up exactly right at first, but since Alba runs your application code completely within the test process, it was very quick to step right into the code and figure out why it wasn’t working (I’d forgotten to set up the AuthenticationStub shown above). After refreshing my memory on how Alba’s security extensions work, I was able to get going again. Arguably, Alba’s ability to fake out or even work with your application’s security in tests is one of its best features.
That’s been a lot of “happy path” testing, so now let’s break things by specifying Wolverine’s new behavior to validate that a request has a valid tenant id with these two new tests. First, a happy path:
[Fact]
public async Task require_tenant_id_happy_path()
{
await configure(opts =>
{
opts.TenantId.IsQueryStringValue("tenant");
opts.TenantId.AssertExists();
});
// Got a 200? All good!
await theHost.Scenario(x =>
{
x.Get.Url("/tenant?tenant=green");
});
}
Note that Alba would cause a test failure if the web request did not return a 200 status code.
And to lock down the binary behavior, here’s the “sad path” where Wolverine should be returning a 400 status code with ProblemDetails data:
[Fact]
public async Task require_tenant_id_sad_path()
{
await configure(opts =>
{
opts.TenantId.IsQueryStringValue("tenant");
opts.TenantId.AssertExists();
});
var results = await theHost.Scenario(x =>
{
x.Get.Url("/tenant");
// Tell Alba we expect a non-200 response
x.StatusCodeShouldBe(400);
});
// Alba's helpers to deserialize JSON responses
// to a strong typed object for easy
// assertions
var details = results.ReadAsJson<ProblemDetails>();
// I like to refer to constants in test assertions sometimes
// so that you can tweak error messages later w/o breaking
// automated tests. And inevitably regret it when I
// don't do this
details.Detail.ShouldBe(TenantIdDetection
.NoMandatoryTenantIdCouldBeDetectedForThisHttpRequest);
}
To be honest, it took me a few minutes to get the test above to pass because of some internal middleware mechanics I didn’t expect. As usual. All the same though, Alba helped me drive the code through “outside in” tests that ran quickly so I could iterate rapidly.
Alba itself is a descendant of some very old test helper code in FubuMVC, then was ported to OWIN (RIP, but I don’t miss you), then to early ASP.Net Core, and was finally rebuilt as a helper around ASP.Net Core’s built-in TestServer and WebApplicationFactory. Alba has been in continuous use for well over a decade now. If you’re looking for selling points for Alba, I’d say:
Alba makes your integration tests more declarative
There are quite a few helpers for common repetitive tasks in integration tests like reading JSON data with the application’s built in serialization
It simplifies test setup
It runs completely in memory where you can quickly spin up your application and jump right into debugging when necessary
Testing web services with Alba is much more efficient and faster than trying to do the same thing through inevitably slow, brittle, and laborious Selenium/Playwright/Cypress testing
JasperFx Software has several decades worth of experience with Test Driven Development, developer focused testing, and test automation in general. We’re more than happy to engage with potential clients who are interested in improving their outcomes with TDD or automated testing!
Crap I feel old having typed out that previous sentence.
I’m going through an interesting exercise right now helping a JasperFx client learn how to apply Test Driven Development and developer testing from scratch. The developer in question is very inquisitive and trying hard to understand how best to apply testing and even a little TDD, and that’s keeping me on my toes. Since I’m getting to see things fresh from his point of view, I’m trying to keep notes on what we’ve been discussing, my thoughts on those questions, and the suggestions I’ve been making as we go.
The first thing I should have stressed was that the purpose of your automated test suite is to:
Help you know when it’s safe to ship code — not “your code is perfect” but “your code is most likely ready to ship.” That last distinction matters. It’s not always economically viable to have perfect 100% coverage of your code, but you can hopefully do enough testing to minimize the risk of defects getting past your test coverage.
Provide an effective feedback loop that helps you to modify code. And by “effective,” I mean that it’s fast enough that it doesn’t slow you down, tells you useful things about the state of your code, and it’s stable or reliable enough to be trusted.
Now, switching to Test Driven Development (TDD) itself, I try to stress that TDD is primarily a low level design technique and an important feedback loop for coding. While I’m not too concerned about whether or not the test is written first before the actual code in all cases, I do believe you should consider how you’ll test your code upfront as an input to how the code is going to be written in the first place.
Think about Individual Responsibilities
What I absolutely did tell my client was to try to approach any bigger development task by first trying to pick out the individual tasks or responsibilities within the larger user story. In the first case we were retrofitting tests to, it was a pretty typical web api endpoint that:
Tried to locate some related entities in the database based on the request
Validated whether the requested action was valid based on the existence and state of the entities
On the happy path, made a change to the entity state
Persisted the changes to the underlying database
In the case above, we started by focusing on that validation logic by isolating it into its own little function where we could easily “push” in inputs and do simple assertions against the expected state. Together, we built little unit tests that exercised all the unique pathways in the validation including the “happy path”.
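As a sketch of what that isolation looked like in spirit (the domain names here are made up for illustration, not the client’s actual code), the validation rules become a pure function over the entity state, so unit tests can push in inputs and assert on outputs with no database involved:

```csharp
using System;

// Hypothetical entity standing in for the client's real domain type
public record Reservation(Guid Id, bool IsCancelled, DateTimeOffset StartTime);

public static class ReservationRules
{
    // Pure function: the same inputs always produce the same answer,
    // which makes it trivial to exercise every pathway in unit tests
    public static (bool IsValid, string? Error) CanReschedule(
        Reservation? reservation, DateTimeOffset newStart, DateTimeOffset now)
    {
        if (reservation is null) return (false, "Reservation does not exist");
        if (reservation.IsCancelled) return (false, "Reservation is cancelled");
        if (newStart <= now) return (false, "New start time must be in the future");
        return (true, null);
    }
}
```

Each unit test then just calls `CanReschedule()` with a hand-built `Reservation` and asserts on the tuple that comes back, one test per pathway.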
Even this little getting started exercise potentially leads to several other topics:
The advantage of using pure functions for testable code whenever possible
In our case, I had us break the code apart so we could start with a “bottom up” approach where we coded and tested individual tasks before assembling everything together, versus a “top down” approach where you code the governing workflow of a user story first in order to define the new API calls for the lower-level tasks to be built afterward. I did stress that the bottom-up or top-down approach should be chosen on a case by case basis.
When we were happy with those first unit tests, we moved on to integration tests that tested from the HTTP layer all the way through the database. Since we had dealt with the different permutations of validation earlier in unit tests, I had us just write two tests, one for the happy path that should have made changes in the database and another “sad path” test where validation problems should have been detected, an HTTP status code of 400 was returned denoting a bad request, and no database changes were made. These two relatively small tests led to a wide range of further discussions:
Whither unit or integration testing? That’s a small book all by itself, or at least a long blog post like Jeremy’s Only Rule of Testing.
I did stress that we weren’t even going to try to test every permutation of the validation logic within the integration test harness. Instead, we were trying to create just enough tests working through the execution pathways of that web api method that we could feel confident shipping the code if all the tests were passing
Watch how much time you’re needing to spend using debugging tools. If you or your team is finding yourself needing to frequently use debuggers to diagnose test failures or defects, that’s often a sign that you should be writing more granular unit tests for your code
Again with the theme that it’s actually inefficient to be using your debugger too much, I stressed the importance of trying to push through smaller unit tests on coding tasks before you even try to run end to end tests. That’s all about trying to reduce the number of variables or surface area in your code that could be causing integration test failures
And not to let the debugging topic go quite yet: we did have to jump into a debugger to fix a failing integration test. We just happened to be using the Alba library (one of the JasperFx OSS libraries!) to help us test our web api. One of the huge advantages of this approach is that our web application runs in the same process as the test harness, so it’s very quick to jump right into the debugger by merely re-running the failing test. I can’t stress enough how valuable this is for faster feedback cycles when it inevitably comes time to debug through breaking code, as opposed to trying to troubleshoot failing end-to-end tests running through user interfaces in separate processes (i.e. Selenium-based testing).
Should unit tests and integration tests against the same code be in the same file or even in the same project? My take was just to pay attention to his feedback cycle. If he felt like his test suite ran “fast enough” — and this is purely subjective — keep it simple and put everything together. If the integration tests became uncomfortably slow, then it might be valuable to separate the two poles of tests into the “fast” and “slow” test suites
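If you do end up splitting the suites, one common approach (not specific to Alba or Wolverine, and just one of several ways to slice this) is to tag the slower tests with an xUnit trait and filter on it from the test runner:

```csharp
using Xunit;

public class TodoWebServiceTests
{
    // Fast, in-memory unit test -- untagged, so it runs in every loop
    [Fact]
    public void validation_rules_catch_missing_title()
    {
        Assert.True(true); // placeholder for a real unit test
    }

    // Slower test that hits a real database, tagged so it can be
    // filtered out of the quick local feedback loop
    [Fact]
    [Trait("Category", "Integration")]
    public void writes_a_todo_to_the_real_database()
    {
        Assert.True(true); // placeholder for a real integration test
    }
}
```

With that in place, `dotnet test --filter "Category!=Integration"` runs only the fast tests locally, while the CI build runs everything.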
Even in this one test, we had to set up expected inputs through the actual database to run end to end. In our case, the data is all identified through globally unique identifiers, so we could add all new data inputs without worrying about needing to teardown or rebuild system data before the test executed. We just barely started a discussion about my recommendations for test data setup.
As an aside, JasperFx Software strongly feels that overusing Selenium, Playwright, or Cypress.io to primarily automate testing through browser manipulation is potentially very inefficient and ineffective compared to more balanced approaches that rely on smaller and faster, intermediate level integration tests like the Alba-based integration testing my client and I were doing above.
“Quick Twitch” Working Style
In the end, you want to be quick enough with your testing and coding mechanics that your progress is only limited by how fast you can think. Both my client and I use JetBrains Rider as our primary IDE, so I recommended:
Get familiar with the keyboard shortcuts to run a test, re-run the last test, or re-run the last test in the debugger so that he could mechanically execute the exact test he’s working on faster without fumbling around with a mouse. This is all about just being able to work as fast as you can think through problems. Other people will choose to use continuous test runners that automatically re-run your tests when file changes are detected. The point either way is just to reduce your mechanical steps and tighten up the feedback loop. Not everything is a hugely deep philosophical subject:-)
Invest a little time in micro-code generation tooling like Rider’s Live Template feature to help build repetitive code structures around unit tests. Again, the point of this is just to be able to work at the “speed of thought” and not burn up any gray cells dealing with mundane, repetitive code or mouse clicking
Scheduling or delaying message retries on failures where you want the message retried, but definitely want that message out of the way of any subsequent messages in a queue
Enforcing “timeout” conditions for any kind of long running workflow
Explicit scheduling from within message handlers
Mechanically, you can publish a message with delayed delivery through Wolverine’s main IMessageBus entry point with this extension method:
public async Task schedule_send(IMessageContext context, Guid issueId)
{
var timeout = new WarnIfIssueIsStale
{
IssueId = issueId
};
// Process the issue timeout logic 3 days from now
await context.ScheduleAsync(timeout, 3.Days());
// The code above is short hand for this:
await context.PublishAsync(timeout, new DeliveryOptions
{
ScheduleDelay = 3.Days()
});
}
Or using an absolute time with this overload of the same extension method:
public async Task schedule_send_at_5_tomorrow_afternoon(IMessageContext context, Guid issueId)
{
var timeout = new WarnIfIssueIsStale
{
IssueId = issueId
};
var time = DateTime.Today.AddDays(1).AddHours(17);
// Process the issue timeout at 5PM tomorrow
// Do note that Wolverine quietly converts this
// to universal time in storage
await context.ScheduleAsync(timeout, time);
}
Now, Wolverine tries really hard to enable you to use pure functions for as many message handlers as possible, so there’s of course an option to schedule message delivery while still using cascading messages with the DelayedFor() and ScheduledAt() extension methods shown below:
public static IEnumerable<object> Consume(Incoming incoming)
{
// Delay the message delivery by 10 minutes
yield return new Message1().DelayedFor(10.Minutes());
// Schedule the message delivery for a certain time
yield return new Message2().ScheduledAt(new DateTimeOffset(DateTime.Today.AddDays(2)));
}
Lastly, there’s a special base class called TimeoutMessage that your message types can extend to add scheduling logic directly to the message itself for easy usage as a cascaded message. Here’s an example message type:
// This message will always be scheduled to be delivered after
// a one minute delay
public record OrderTimeout(string Id) : TimeoutMessage(1.Minutes());
Which is used within this sample saga implementation:
// This method would be called when a StartOrder message arrives
// to start a new Order
public static (Order, OrderTimeout) Start(StartOrder order, ILogger<Order> logger)
{
logger.LogInformation("Got a new order with id {Id}", order.OrderId);
// creating a timeout message for the saga
return (new Order{Id = order.OrderId}, new OrderTimeout(order.OrderId));
}
How does it work?
The actual mechanics for how Wolverine is doing the scheduled delivery are determined by the destination endpoint for the message being published. In order of precedence:
If the destination endpoint has native scheduled delivery capabilities, Wolverine uses that capability. Outbox mechanics still apply to when the outgoing message is released to the external endpoint’s sender. At the time of this post, the only transports with native scheduling support are Wolverine’s Azure Service Bus transport and the recently added Sql Server-backed transport.
If the destination endpoint is durable, meaning that it’s enrolled in Wolverine’s transactional outbox, then Wolverine stores the scheduled messages in the outgoing envelope storage for later execution. In this case, Wolverine polls across all running Wolverine nodes for messages that are ready to execute or deliver. This option is durable in case of process exits.
In the absence of any other support, Wolverine falls back to an in-memory option that can do scheduled delivery or execution
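As one sketch of the durable second option (reusing the `WarnIfIssueIsStale` message and the Marten integration from the samples above; the queue name here is made up), marking the receiving local queue as durable is what opts its scheduled messages into that persistent, polled envelope storage:

```csharp
using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        // Marten supplies the envelope storage for Wolverine's
        // transactional inbox/outbox
        opts.Services.AddMarten(Servers.PostgresConnectionString)
            .IntegrateWithWolverine();

        // Because this local queue is marked as durable, scheduled
        // WarnIfIssueIsStale messages are persisted to envelope storage
        // and polled for later execution, so they survive process restarts
        opts.PublishMessage<WarnIfIssueIsStale>()
            .ToLocalQueue("issue-timeouts")
            .UseDurableInbox();
    }).StartAsync();
```

Without the durability opt-in, a scheduled message to that queue would live only in memory and be lost if the process exited before the scheduled time.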
A very rare Friday blog post, but don’t worry, I didn’t exert too much energy on it.
TL;DR: I was lucky as hell, but maybe prepared well enough that I was able to seize the opportunities that did fall in my lap later
I never had a computer at home growing up, and I frankly get a little bit exasperated at developers of my generation bragging about the earliest programming language and computer they learned on as they often seem unaware of how privileged they were back in the day when home computers were far more expensive than they are now. Needless to say, no kind of computer science or MIS degree was even remotely on my radar when I started college back in the fall of ’92. I did at least start with a 386 knockoff my uncle had given me for graduation, and that certainly helped.
Based partially on the advice of one of my football coaches, I picked Mechanical Engineering for my degree right off the bat, then never really considered any kind of alternatives the rest of the way. Looking back, I can clearly recognize that my favorite course work in college was anytime we dipped into using Matlab (a very easy to use mathematics scripting language if you’ve never bumped into it) for our coursework (Fortran though, not so much).
I don’t remember how this came about, but my first engineering lead gave me a couple weeks one time to try to automate some kind of calculations we frequently did with custom Matlab scripts, which just gave me the bug to want to do that more than our actual engineering work — which was often just a ton of paperwork to satisfy formal processes. My next programming trick was playing with Office VBA to automate the creation of some of those engineering documents instead of retyping information that was already in Excel or Access into Word documents.
This was also about the time the software industry had its first .com boom and right before the first really bad bust, so a lot of us younger engineers were flirting with moving into software as an alternative. My next big step was right into what I’d now call “Shadow IT” after I purchased some early version of a book on classic ASP development.
I devoured that book, and used MS Access to generate ASP views based on database tables and views that I reverse engineered to “learn” how to code. Using a combination of MS Access, Office VBA, and ASP “Classic”, I built a system for my engineering team to automate quite a bit of our documentation and inventory order creation that was actually halfway successful.
I think that work got the attention of our real IT organization, and I got picked up to work in project automation right at the magical time when the engineering and construction industry was moving from paper drafting and isolated software systems into connected systems with integration work. That was such a cool time because there was so much low hanging fruit and the time between kicking around an idea in a meeting and actually coding it up was pretty short. I was still primarily working with the old Microsoft DNA technologies plus Oracle databases.
While doing this, I took some formal classes at night to try to get the old Microsoft MCSD certification (I never finished that) where I added VB6 and Sql Server to my repertoire.
My next big break was moving to Austin in 2000 to work for a certain large computer manufacturer. I came in right as a big consulting company was finishing up a big initiative around supply chain automation that didn’t really turn out the way everybody wanted. I don’t remember doing too much at first (a little Perl of all things), but I was taking a lot of notes about how I’d try to rebuild one of the failing systems from that initiative — mostly as a learning experience for myself.
I think I’d managed to have a decent relationship with the business folks who were in charge of automation strategy, and at the point where they were beside themselves with frustration about the current state of things, I happened to have a new approach ready to go. In an almost parting of the Red Sea kind of effect, the business and my management let me run on a proof of concept rewrite. For the first and only time in my career, I had almost unlimited collaboration with the business domain experts, and got the basics in place fast and sold them on the direction. From there, my management at the time did an amazing job of organizing a team around that initiative and fighting off all the other competing groups in our department that tried to crash the party (I didn’t really learn to appreciate what my leadership did to enable me until years later, but I certainly do now).
Long story short, the project was a big success in terms of business value (the code itself was built on old Windows DNA technology, some Java, Oracle and was unnecessarily complicated in a way that I’d call it completely unacceptable now). I never quite reached that level of success there again, but did get bumped up in title to an “architect” role before I left for a real software consultancy.
I also at least started working with very early .NET for a big proof of concept that never got off the ground, and that helped launch me into my next job with ThoughtWorks where I got my first grounding in Agile software development and more disciplined ways of building systems.
Some time soon, there’ll be an episode up of the Azure DevOps podcast that Jeffrey Palermo and I recorded recently. Jeffrey asked me something to the effect of what my formative experiences were that set me on my career path. I told him that my real acceleration into being a “real” software developer was my brief time at ThoughtWorks during some of the heady eXtreme Programming (XP) days (before Scrum ruined Agile development). That’s where I was when I started and first published StructureMap, which went on to almost 15 years of active development. I think the OSS work has helped (and also hurt) my career path. I probably derived much more career benefit from writing technical content for the now-defunct CodeBetter.com website, where I learned to communicate ideas better and participated in the early marriage of Agile software practices and .NET technologies.
Anyway, that’s how I managed to get started. Looking back, I’d just say that it’s all about making the best of your early work situations to learn so that you can seize the day when opportunities come later. It’d probably also help to be way better at networking than I was in my early career, but I don’t have any real advice on that one:-)
Hey, JasperFx Software is completely open for business, and ready to help your company make the most of your software development initiatives. While we’d be thrilled to work with our own “critter stack” tooling, we’re also very capable of helping you with software modernization, architectural reviews, and test automation challenges with whatever technical stack you happen to be using. Contact us at any time at sales@jasperfx.net for more information.
The DotNetRocks guys let me come on to talk with them for a show called Minimal Architecture with Jeremy Miller. Along the way, we talked about the latest happenings with the “Critter Stack,” why I’m absolutely doubling down on my criticisms of the Clean Architecture as it is practiced, and a lot about how to craft maintainable codebases with lower ceremony vertical slice architecture approaches — including how Wolverine and Marten absolutely help make that a reality.
Regardless of whether or not you’re taking the plunge into the “Critter Stack” tools or using a completely different technical stack, JasperFx Software is ready to engage with your shop for any help you might want on software architecture, test automation, modernization efforts, or helping your teams be more effective with Test Driven Development. Contact us anytime at sales@jasperfx.net.
In their first joint webinar, Oskar and Jeremy demonstrate how combining Wolverine and Marten can lead to a very low ceremony Event Sourcing and CQRS architecture. More than just that, we demonstrate how this tooling is purposely designed to lead to isolating the business logic for easy testing and good maintainability over time.
Jeremy joins Oskar’s Event Sourcerers Webinar Series to talk Wolverine and Marten
Last week we talked about code organization in a post-hexagonal world, where we decried the explosion of complexity that often comes from prescriptive architectures and “noun-centric” code organization. Let’s say this webinar is a down payment on explaining just how we’d go about doing things differently to sidestep the long term maintainability traps in many popular prescriptive architectures.