Celebrating Marten’s 10th Birthday!

To the best of my recollection and internet sleuthing today, development on Marten started in October of 2015, after my then colleague Corey Kaylor had kicked around an idea the previous summer to use the new JSONB feature in PostgreSQL 9.4 to replace our then problematic usage of a third party NoSQL database (RavenDb) in a production application. Some of that was on us (me), and RavenDb was young at the time. Digging around today, I found the first post I wrote when we first announced a new tool called Marten later that month.

At this point I feel pretty confident in saying that Marten is the leading Event Sourcing tool for the .NET platform. It’s definitely the most capable toolset for Event Sourcing you can use in .NET and arguably the only single truly “batteries included” option* — especially if you consider its combination with Wolverine into the “Critter Stack.” On top of that, it still fulfills its intended original role as a robust and easy to use document database with a much better local development story and transactional model than most NoSQL options that tend to be either cloud only or have weaker support for data consistency than Marten’s PostgreSQL foundation.

If you’ll indulge just a little bit of navel gazing today, I’d like to walk back through some of the notable history of Marten and thank some fellow travelers along the way. As I mentioned before, Corey Kaylor was the project cofounder and “Marten as a Document Database” was really his original idea. Oskar Dudycz was a massive contributor and really a co-leader of Marten for many years, especially around Marten’s current focus on Event Sourcing (you can follow his current work with Event Sourcing and PostgreSQL on Node.JS with Emmett). Babu Annamalai has been a core team member of Marten for most of its life and has done yeoman work around our DevOps infrastructure and website as well as making large contributions to the code. Jaedyn Tonee has been one of our most active community members and is now a core team member and contributor. Anne Erdtsieck adds some younger blood, enthusiasm, and a lot of helpful documentation. Jeffry Gonzalez is helping me a great deal with community efforts and now the CritterWatch tooling.

Beyond that, Marten has benefitted from far, far more community involvement than any other OSS project I’ve ever been a part of. I think we’re sitting at around 250 official contributors to the codebase (a massive number for a .NET OSS project), but that undercounts the true community when you also account for everybody who has made suggestions, given feedback, or taken the time to create actionable GitHub issues that have led to improvements in Marten.

More recently, JasperFx Software‘s engagements with our customers using Marten have directly led to a very large number of technical improvements like partitioning support, first class subscriptions, multi-tenancy improvements, and quite a bit of the integration with Wolverine for scalability and first class messaging support.

Some Project History

When I started the initial PoC work on what is now Marten in late 2015, I was just getting over my funk from a previous multi-year OSS effort failing and furiously doing conceptual planning for a new application framework codenamed “Jasper” that was going to learn from everything that I thought went wrong with FubuMVC (“Jasper” was later rebooted as “Wolverine” to fit into the “Critter Stack” naming theme and also to act as a natural complement to Marten).

To tell this story one last time: as I was doing the initial work, I was using the codename “Jasper.Data.” Corey called me one day and in his laconic manner asked me what codename I was going to use, adding “not something lame like Jasper.Data.” I said um, no, and remembering how Selenium got its name as the “cure for mercury poisoning,” I quickly googled the “natural predators of Ravens,” which is how we stumbled on the name “Marten” for our planned drop in replacement for RavenDb.

As I said earlier, I was really smarting from the FubuMVC project failure, and a big part of my own lessons learned was that I should have been much more aggressive in project promotion and community building from the very beginning instead of just being a mad scientist. It turned out that there were at least a couple other efforts out there to build something like Marten, but I still had some leftover name recognition from the CodeBetter and ALT.NET days (don’t bother looking for that, it’s all long gone now) and Marten won out quickly over those other nascent projects and even attracted an important cadre of early, active contributors.

Our 1.0 release was in mid 2016 just in time for Marten to go into production in an application with heavy traffic that fall.

A couple of years earlier I had spent about a month doing some proof of concept work on a possible PostgreSQL backed event store on NodeJS, so I had some interest in Event Sourcing as a possible feature set and tossed a small event store feature in off to the side of the Marten 1.0 release, which was mostly about the Document Database feature set. To be honest, I was just irritated at the wasted effort from the earlier NodeJS work that was abandoned and didn’t want it to be a complete loss. I had zero idea at that time that the Event Sourcing feature set in what I thought was going to be a little side project mostly for work would turn out to be the most important and positively impactful technical effort of my career.

As it turned out, we abandoned our plans at that time to jump from .NET to NodeJS when the left-pad incident happened literally the exact same day we were going to meet one last time to decide if we really wanted to do that (we, as it turned out, did not). At the same time, David Fowler and co on the AspNetCore team finally started talking about “Project K,” which, while cut down, did become what we now know as .NET Core and in my opinion — even though that team drives me bonkers sometimes — saved .NET as a technical platform and gave .NET a much brighter future.

Marten 2.0 came out in 2017 with performance improvements, our first built in multi-tenancy feature set, and some customization of JSON serialization for the first time.

Marten 3.0 released in late 2018 with the incorporation of our first “official” core team. The release itself wasn’t that big of a deal, but the formation of an actual core team paid huge dividends for the project over time.

Marten went quiet for a while as I left the company that had originally sponsored Marten development, but the community and I released the then mammoth Marten 4.0 release in late 2021, which I hoped at the time would permanently fix every possible bit of the technical foundation and set us up for endless success. Schema management, LINQ internals, multi-tenancy, low level mechanics, and a nearly complete overhaul of the Event Sourcing support were part of that release. At that point it was already clear that Marten was now an Event Sourcing tool that also had a Document Database feature set instead of vice versa.

Narrator voice: V4 was not the end of development and did not fix every possible bit of the Marten technical foundation.

Marten 5.0 followed just 6 months later to fix some usability issues we’d introduced in 4.0 with our first foray into standardized AddMarten() bootstrapping and .NET IHost integration. Also importantly, 5.0 introduced Marten’s support for multi-tenancy through separate databases in addition to our previous “conjoined” tenancy model.

Marten 6.0 landed in May 2023 right as I was just about to launch JasperFx. Oskar added the very important event upcaster feature. I might be misremembering, but I think this is about when we added full text search to Marten as well.

Marten 7.0 was released in March of last year, and represented the single largest feature release I think we’d ever done. In this release we did a near rewrite of the LINQ support and extended its use cases while in some cases dramatically improving query performance. The very lowest level database execution pipeline was greatly improved by introducing Polly for resiliency and using every possible advanced trick in Npgsql for improving query batching or command execution. The important async daemon got some serious improvements to how it could distribute work across an application cluster, with that being even more effective when combined with Wolverine for load distribution. Babu added a new native PostgreSQL “partial update” feature we’d wanted for years as the PLV8 engine had fallen out of favor. Heck, 7.0 even added a new model for dynamically adding new tenant databases at runtime with no downtime and a true blue/green deployment model for versioned projections as part of the Event Sourcing feature set. JT added PostgreSQL read replica support that’s completely baked into Marten.

Feel free to correct me if I’m wrong, but I don’t believe there is another event sourcing tool on the planet that can match the Critter Stack’s ability to do blue/green deployments with active event projections while not sacrificing strong data consistency.

There was an absurd amount of feature development during 2024 and early 2025 that included:

  • PostgreSQL partitioning support for scalability and performance
  • Full Open Telemetry and Metrics support throughout Marten
  • The “Quick Append” option for faster event store operations
  • A “side effect” model within projections that folks had wanted for years
  • Convenience mechanisms to make event archiving easier
  • New mechanisms to manage tenant data at runtime
  • Non-stale querying of asynchronously projected event data
  • The FetchLatest() API for optimized fetching or advancement of single stream projections. This was very important to optimize common CQRS command handler usages
  • And a lot more…

Marten 8.0 released this June, and I’ll admit that it mostly involved restructuring the shared dependencies underneath both Marten and Wolverine. There was also a large effort to yank quite a bit of the event store functionality and key abstractions out to a shared library that will theoretically be used in a future critter tool to do SQL Server backed event sourcing.

And about that…

Why not SQL Server?!?

If Marten is 10 years old, then that means it’s been 10 years of receiving well-intentioned (and sometimes not) advice that Marten should have been built on SQL Server instead of PostgreSQL, or that we should have sprinkled abstractions every which way so that we or community contributors could just casually override a pluggable interface to swap PostgreSQL out for SQL Server or Oracle or whatever.

Here’s the way I see this after all these years:

  • The PostgreSQL feature set for JSON is still far ahead of where SQL Server is, and Marten depends on a lot of that special PostgreSQL sauce. Maybe the new SQL Server JSON Type will change that equation, but…
  • I’ve already invested far more time than I think I should have getting ready to build a planned SQL Server backed port of Marten and I’m not convinced that that effort will end up being worth the sunk cost 😦
  • The “just use abstractions” armchair architecting isn’t really viable, and I think that would have exploded the internal complexity of several Marten subsystems. And honestly, I was adamant that we were going YAGNI on Marten extensibility upfront so we’d actually get something built after having gone to the opposite extreme with a prior OSS effort
  • PostgreSQL is gaining traction fast in the .NET community and it’s actually much rarer now to get pushback from potential users on PostgreSQL usage — even in the normally very Microsoft-centric .NET world

Marten’s Future

Other than possible performance optimizations, I think that Marten itself will slow down quite a bit in terms of feature development in the near future. That changes anytime a JasperFx client needs something, of course, but for the most part, I think most of the Critter Stack effort for the remainder of the year goes into the in-flight “CritterWatch” tool that will be a management and observability console application for Critter Stack systems in production.

Summary

I can’t say that back in 2015 I had any clue that Marten would end up being so important to my career. I will say that when I was interviewing with Calavista in 2018 I did a presentation on early Marten as part of that process that most certainly helped me get that position. At the time, my soon-to-be colleague interviewing me asked me what professional effort I was most proud of, and I answered “Marten” even then.

I had long wanted to branch out and start a company around my OSS efforts, but had largely given up on that dream until someone I barely knew from conferences reached out to ask why in the world we hadn’t already commercialized Marten, because he thought it was a better choice even than the leading commercial tool. That little DM exchange — along with endless encouragement and support from my wife of course — gave me a bit of confidence and a jolt to get going. Knowing that Marten needed some integration into messaging and a better story for CQRS within an application, Wolverine came back to life originally as a purposeful complement to Marten, which led to our now “Critter Stack” that is the only real end to end technical stack for Event Sourcing in the .NET ecosystem.

Anyway, the whole moral of this little story is that the most profound effort of my now long technical career was largely an accident and only possible with a helluva lot of help, support, and feedback from other people. From my side, I’d say that the one single personal strength that does set me apart from most developers and directly contributed to Marten’s success is simply having a much longer attention span than most of my peers :). Make of *that* what you will.

* Yes, you can use the commercial KurrentDb library within a .NET application, but that only provides a small subset of Marten’s capabilities and requires a lot more repetitive code to use than Marten does.

Leader Election and Virtual Actors in Wolverine

A JasperFx Software client was asking recently about the features for software controlled load balancing and “sticky” agents I’m describing in this post. Since these features are both critical for Wolverine functionality and maybe not perfectly documented yet, it’s a great topic for a new blog post! It’s helpful to understand what’s going on under the covers if you’re running Wolverine in production, and you may also want to build your own software managed load distribution for your own virtual agents.

Wolverine was rebooted in 2022 as a complement to Marten to extend the newly named “Critter Stack” into a full Event Driven Architecture platform and arguably the only single “batteries included” technical stack for Event Sourcing on the .NET platform.

One of the things that Wolverine does for Marten is to provide a first class event subscription function where Wolverine can either asynchronously process events captured by Marten in strict order or forward those events to external messaging brokers. Those first class event subscriptions and the existing asynchronous projection support from Marten can both be executed in only one process at a time because the processing is stateful. As you can probably imagine, it would be very helpful for your system’s scalability and performance if those asynchronous projections and subscriptions could be spread out over an executing cluster of system nodes.

Fortunately enough, Wolverine works with Marten to provide subscription and projection distribution, assigning different asynchronous projections and event subscriptions to run on different nodes so you get a more even spread of work throughout your running application cluster, as in this illustration:

To support that capability, Wolverine uses a combination of its Leader Election feature, which allows Wolverine to designate one — and only one — node within an application cluster as the “leader,” and its “agent family” feature, which allows for assigning stateful agents across a running cluster of nodes. In the case above, there’s a single agent for every configured projection or subscription in the application that Wolverine will try to spread out over the application cluster.

Just for the sake of completeness, if you have configured Marten for multi-tenancy through separate databases, Wolverine’s projection/subscription distribution will distribute by database rather than by individual projection or subscription + database.

Alright, so here are the things you might want to know about the subsystem above:

  1. You need to have some sort of Wolverine message persistence configured for your application (see the bootstrapping sketch after this list). You might already be familiar with that for the transactional inbox or outbox storage, but there’s also storage to persist information about the running nodes and agents within your system that’s important for both the leader election and agent assignments
  2. There has to be some sort of “control endpoint” configured for Wolverine to be able to communicate between specific nodes. There is a built in “database control” transport that can act as a fallback mechanism, but all of this back and forth communication works better with transports like Wolverine’s Rabbit MQ integration that can quietly use non-durable queues per node for this intra-node communication
  3. Wolverine’s leader election process tries to make sure that there is always a single node running the “leader agent” that monitors the status of the other running nodes and all the known agents
  4. Wolverine’s agent subsystem (some other frameworks call these “virtual actors”) consists of the IAgentFamily and IAgent interfaces
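
Just to make the first two bullets concrete, here’s a minimal bootstrapping sketch that covers both requirements. It reuses the PersistMessagesWithPostgresql() and UseRabbitMq() calls shown later in these posts; the AutoProvision() call is just my assumption for a convenient development-time setup:

using Microsoft.Extensions.Hosting;
using Wolverine;
using Wolverine.Postgresql;
using Wolverine.RabbitMQ;

var builder = Host.CreateApplicationBuilder();
builder.Services.AddWolverine(opts =>
{
    // The message persistence doubles as storage for the node and agent
    // state that drives both leader election and agent assignments
    var connectionString = builder.Configuration.GetConnectionString("postgres");
    opts.PersistMessagesWithPostgresql(connectionString);

    // The Rabbit MQ transport can quietly supply the non-durable,
    // per-node "control endpoint" queues for intra-node communication
    opts.UseRabbitMq().AutoProvision();
});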

Building Your Own Agents

Let’s say you have some kind of stateful process in your system that you always want to be running, maybe something that polls against an external system. And because this is a somewhat common scenario, let’s also say that you need a completely separate polling mechanism for each different outside entity or tenant.

First, we need to implement this Wolverine interface to be able to start and stop agents in your application:

/// <summary>
///     Models a constantly running background process within a Wolverine
///     node cluster
/// </summary>
public interface IAgent : IHostedService
{
    /// <summary>
    ///     Unique identification for this agent within the Wolverine system
    /// </summary>
    Uri Uri { get; }
    
    /// <summary>
    /// Is the agent running, stopped, or paused? Not really used
    /// by Wolverine *yet* 
    /// </summary>
    AgentStatus Status { get; }
}

IHostedService up above is the same old interface from .NET for long running processes, and Wolverine just adds a Uri and currently unused Status property (that hopefully gets used by “CritterWatch” someday soon for health checks). You could even use the BackgroundService from .NET itself as a base class.
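
For instance, here’s what a hypothetical agent for the polling scenario above might look like. This is only a sketch built on BackgroundService, and I’m guessing at the AgentStatus values from the property’s doc comment, so treat the details as assumptions:

public class TenantPollingAgent : BackgroundService, IAgent
{
    public TenantPollingAgent(string tenantId)
    {
        // Encode the tenant id into the agent's identity
        Uri = new Uri($"tenant-poller://{tenantId}");
    }

    public Uri Uri { get; }
    public AgentStatus Status { get; private set; } = AgentStatus.Started;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        try
        {
            while (!stoppingToken.IsCancellationRequested)
            {
                // Poll the external system for this tenant here...
                await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
            }
        }
        catch (OperationCanceledException)
        {
            // Normal shutdown
        }
        finally
        {
            Status = AgentStatus.Stopped;
        }
    }
}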

Next, you need a way to tell Wolverine what agents exist and a strategy for distributing the agents across a running application cluster by implementing this interface:

/// <summary>
///     Pluggable model for managing the assignment and execution of stateful, "sticky"
///     background agents on the various nodes of a running Wolverine cluster
/// </summary>
public interface IAgentFamily
{
    /// <summary>
    ///     Uri scheme for this family of agents
    /// </summary>
    string Scheme { get; }

    /// <summary>
    ///     List of all the possible agents by their identity for this family of agents
    /// </summary>
    /// <returns></returns>
    ValueTask<IReadOnlyList<Uri>> AllKnownAgentsAsync();

    /// <summary>
    ///     Create or resolve the agent for this family
    /// </summary>
    /// <param name="uri"></param>
    /// <param name="wolverineRuntime"></param>
    /// <returns></returns>
    ValueTask<IAgent> BuildAgentAsync(Uri uri, IWolverineRuntime wolverineRuntime);

    /// <summary>
    ///     All supported agent uris by this node instance
    /// </summary>
    /// <returns></returns>
    ValueTask<IReadOnlyList<Uri>> SupportedAgentsAsync();

    /// <summary>
    ///     Assign agents to the currently running nodes when new nodes are detected or existing
    ///     nodes are deactivated
    /// </summary>
    /// <param name="assignments"></param>
    /// <returns></returns>
    ValueTask EvaluateAssignmentsAsync(AssignmentGrid assignments);
}

In this case, you can plug custom IAgentFamily strategies into Wolverine by just registering a concrete service in your DI container against that IAgentFamily interface. Wolverine does a simple IServiceProvider.GetServices&lt;IAgentFamily&gt;() during its bootstrapping to find them.

As you can probably guess, the Scheme should be unique, and the Uri structure needs to be unique across all of your agents. EvaluateAssignmentsAsync() is your hook to create distribution strategies, with a simple “just distribute these things evenly across my cluster” strategy possible like this example from Wolverine itself:

    public ValueTask EvaluateAssignmentsAsync(AssignmentGrid assignments)
    {
        assignments.DistributeEvenly(Scheme);
        return ValueTask.CompletedTask;
    }

If you go looking for it, the equivalent in Wolverine’s distribution of Marten projections and subscriptions is a tiny bit more complicated in that it uses knowledge of node capabilities to support blue/green semantics to only distribute work to the servers that “know” how to use particular agents (like version 3 of a projection that doesn’t exist on “blue” nodes):

    public ValueTask EvaluateAssignmentsAsync(AssignmentGrid assignments)
    {
        assignments.DistributeEvenlyWithBlueGreenSemantics(SchemeName);
        return new ValueTask();
    }

The AssignmentGrid tells you the current state of your application in terms of which node is the leader, what all the currently running nodes are, and which agents are running on which nodes. Beyond the even distribution, the AssignmentGrid has fine grained API methods to start, stop, or reassign agents.
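
Putting the pieces together for the polling scenario from the top of this section, a custom agent family might look something like the following sketch. The ITenantSource abstraction is purely hypothetical and stands in for wherever your list of tenants actually lives:

// Hypothetical abstraction over wherever your tenant list lives
public interface ITenantSource
{
    Task<IReadOnlyList<string>> AllTenantIdsAsync();
}

public class TenantPollingAgentFamily : IAgentFamily
{
    private readonly ITenantSource _tenants;

    public TenantPollingAgentFamily(ITenantSource tenants) => _tenants = tenants;

    public string Scheme => "tenant-poller";

    public async ValueTask<IReadOnlyList<Uri>> AllKnownAgentsAsync()
    {
        var tenantIds = await _tenants.AllTenantIdsAsync();
        return tenantIds.Select(id => new Uri($"tenant-poller://{id}")).ToList();
    }

    public ValueTask<IAgent> BuildAgentAsync(Uri uri, IWolverineRuntime wolverineRuntime)
    {
        // The tenant id was encoded as the Uri "host" up above
        return new ValueTask<IAgent>(new TenantPollingAgent(uri.Host));
    }

    // In this simple case, every node is capable of running every agent
    public ValueTask<IReadOnlyList<Uri>> SupportedAgentsAsync() => AllKnownAgentsAsync();

    public ValueTask EvaluateAssignmentsAsync(AssignmentGrid assignments)
    {
        assignments.DistributeEvenly(Scheme);
        return ValueTask.CompletedTask;
    }
}

And as described above, you’d let Wolverine discover this family by registering it in your DI container:

builder.Services.AddSingleton<IAgentFamily, TenantPollingAgentFamily>();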

To wrap this up, I’m trying to guess at the questions you might have and see if I can cover all the bases:

  • Is some kind of persistence necessary? Yes, absolutely. Wolverine has to have some way to “know” what nodes are running and which agents are really running on each node.
  • How does Wolverine do health checks for each node? If you look in the wolverine_nodes table when using PostgreSQL or SQL Server, you’ll see a heartbeat column with a timestamp. Each Wolverine application is running a polling operation that updates its heartbeat timestamp and also checks that there is a known leader node. In normal shutdown, Wolverine tries to gracefully mark the current node as offline and, if there is a current leader node, sends it a message saying that the node is shutting down. In real world usage though, Kubernetes or who knows what is frequently killing processes without a clean shutdown. In that case, the leader node will be able to detect stale nodes that are offline, eject them from the node persistence, and redistribute agents.
  • Can Wolverine switch over the leadership role? Yes, and that should be relatively quick. Plus, Wolverine will keep trying to start a leader election if no leader is found. And yet, it’s an imperfect world where things can go wrong, so there will 100% be the ability to either kickstart or assign the leader role from the forthcoming CritterWatch user interface.
  • How does the leadership election work? Crudely and relatively effectively. All of the storage mechanics today have some kind of sequential node number assignment for all newly persisted nodes. In a kind of simplified “Bully Algorithm,” Wolverine will always try to send “try assume leadership” messages to the node with the lowest sequential node number which will always be the longest running node. When a node does try to take leadership, it uses whatever kind of global, advisory lock function the current persistence uses to get sole access to write the leader node assignment to itself, but will back out if the current node detects from storage that the leadership is already running on another active node.
  • Can I extract the Wolverine leadership election for my own usage? Not easily at all, sorry. I don’t have the link anywhere handy, but there are, I believe, a couple of OSS libraries in .NET that implement the Raft consensus algorithm for leader election. I honestly don’t remember why I didn’t think that was suitable for Wolverine though. Leadership election is most certainly not something for the faint of heart.

Summary

I’m not sure how useful this post was for most users, but hopefully it’s helpful to some. I’m sure I didn’t hit every possible question or concern you might have, so feel free to reach out in Discord or comments here with any questions.

Last Sprint to Wolverine 5.0

A little update since the last check-in on Wolverine 5.0. I think right now that Wolverine 5.0 hits by next Monday (October 6th). To be honest, besides documentation updates, the biggest work is just pushing more on the CritterWatch backend this week to see if that forces any breaking changes in the Wolverine internals.

What’s definitely “in” right now:

  • The “Partitioned Sequential Messaging” feature work that we previewed in a live stream
  • Big improvements and expansion to Wolverine’s interoperability story against NServiceBus, MassTransit, CloudEvents, and whatever custom interoperability folks need to do
  • A first class SignalR transport which is getting used heavily in our own CritterWatch work
  • A first class Redis messaging transport from the community
  • Modernization and upgrades to the GCP Pubsub transport
  • The ability to mix and match database storage with Wolverine for modular monoliths
  • A big batch of optimization for the Marten integration including improvements for multi-stream operations as our response to the “Dynamic Consistency Boundary” idea from other tools
  • The utilization of System.Threading.Channels in place of the TPL DataFlow library

What’s unfortunately out:

  • Any effort to optimize the cold start times for Marten and Wolverine. Just a bandwidth problem, plus I think this can get done without breaking changes

And we’ll see:

  • Random improvements for Azure Service Bus and Kafka usage
  • HTTP improvements for content negotiation and multi-part uploads
  • Yet more improvements to the “aggregate handler workflow” with Marten to allow for yet more strongly typed identifier usage

The items in the third list don’t require any breaking changes, so they could slide to Wolverine 5.1 if necessary.

All in all, I’d argue this turned out to be a big batch of improvements with very few breaking API changes and almost nothing that would impact the average user.

Some Thoughts on MS Pulling Back on their “Eventing Framework”

I’ll admit that I’d stopped paying attention quite a while ago and didn’t even realize Microsoft was still considering building out their own “Eventing Framework” until everybody and their little brother started posting a link to their announcement about forgoing this effort today.

Here’s a very few thoughts from me about this, and I think for about the first time ever, I’m disallowing comments on this one to just spit this out and be done with it.

  • I thought that what they were proposing in terms of usability by basically trying to make it “Minimal API” for asynchronous messaging was not going to be very successful in complex systems. I get that their approach might have led to a low learning curve for simple usage, and there’s some appeal to having a common programming model with web development, but man, I think that would have severely limited that tooling in terms of what it helped you do to deal with application complexity or testability compared to existing tools in this space.
  • Specifically, I think that the Microsoft tooling teams have a blind spot sometimes about testability design in their application frameworks
  • I think this is a technical area where .NET is actually very rich in options and there’s actually a lot of existing innovation across our ecosystem already (Wolverine, NServiceBus, MassTransit, AkkaDotNet, Rebus, Brighter, Microsoft’s own Dapr for crying out loud). I did not believe that the proposed tooling from Microsoft in this case did anything to improve the ecosystem except for the inevitable folks who just don’t want to have any dependency on .NET technology that is not from Microsoft
  • I’m continuously shocked anytime something like this bubbles up how a seemingly large part of the .NET community is outright hostile to non-Microsoft tooling in .NET
  • I will 100% admit that I was concerned about my own Wolverine project being severely harmed by the MS offering while at the same time believing quite fervently that Wolverine would long remain a far superior technical solution. The reality is that Microsoft tooling tends to quickly take the oxygen out of the air for non-Microsoft tools regardless of relative quality or even suitability for real usage. You can absolutely compete with the Microsoft offerings on technical quality, but not in informational reach or community attention
  • If Microsoft had gone ahead with their tooling, I had every intention of being aggressive online to try to point out every possible area where Wolverine had advantages and I had no plans to just give up. My thought was to just lean in much, much harder to the greater Critter Stack as a full blown Event Sourcing solution where there is really nothing competitive to the Critter Stack in the rest of the .NET community (I said what I said) and certainly nothing from Microsoft themselves (yet)
  • I think it hurts the .NET ecosystem when Microsoft squelches community innovation and this is something I’ve never liked about the greater .NET community’s fixation on having official, Microsoft approved tooling.
  • One thing the Microsoft folks tried to sell people like me who lead asynchronous messaging projects is that they (MS) were really good at application frameworks, and that we could all take dependencies on a new set of medium level messaging abstractions and core libraries. I wonder if what they meant turned into what are now the various Aspire plugins for Rabbit MQ or Azure Service Bus. I was also extremely dubious about all of that.
  • As someone else pointed out, do you really want one tool trying to be all things to all people? That’s a recipe for a bloated, unmaintainable tool
  • I think the Microsoft team was a bit naive about what they would have to build out and how many feature requests they would have gotten from folks wanting to ditch very mature tools like MassTransit. I really don’t believe that Microsoft would have resisted the demands from some elements of the community to grow the new things into something able to handle more complex requirements
  • I don’t know what to say about the people who flipped their lids over the MassTransit and MediatR commercialization plans. I think folks were drastically underestimating the value of those tools, the overhead in supporting those tools over time, and in complete denial about the practicality of rolling your own one off tools.
  • The idea that Microsoft is an infallible maintainer of their development tools is bonkers
  • Regardless, the Critter Stack is going down the “Open Core” model

As my mentor used to say at my first real software development job, I feel better now and thank you for listening.

And now, back to Wolverine 5 and CritterWatch for me…

Little Diary of How JasperFx Helps Our Clients

For any shops using the “Critter Stack” (Marten and Wolverine), JasperFx Software offers support contracts and custom consulting engagements in support of these tools — or really anything you might be doing on the server side with .NET as well. Something we’ve had some success with, especially lately, is positioning these “support” contracts as essentially having JasperFx on call for ad hoc consulting beyond merely assisting with production issues or bugs.

Just to try to illustrate the value of these engagements, I thought it would be interesting to describe what JasperFx has done for clients in just the past 30 days — in very general terms with zero information about our client’s business domain of course.

In no particular order, we’ve:

  • Helped several clients with CI/CD related tasks around Wolverine or Marten’s code generation as a way to find problems faster and to optimize cold start times. Also fixed some issues with the codegen for one of our support clients. As always, I’d rather we never had any bugs, but we do try to stomp those out relatively quickly for our clients
  • Explained and worked through error handling strategies built into Wolverine with a client who was dealing with a dependency on an external service that has somewhat strict rate limiting
  • Planned out identity strategies for importing data from a legacy system into Marten and for interoperability with that legacy system for the time being
  • Jumped on a Zoom call to pair program with a client who needed to use some pretty advanced Wolverine middleware capabilities
  • Did a Zoom call with that same client to help them plan for future message broker usage. Most of our support work is done through Discord or Slack, but sometimes a real call is what you need to have more of a discussion — especially when I think I need to ask the client several questions to better understand their needs and the context around their questions before firing off a quick answer.
  • Helped a client troubleshoot a usage issue with Kafka
  • Added some improvements for Wolverine usage with F# for one of our first clients
  • Developed some new test automation support around scheduled message capabilities in Wolverine for one of our clients who is very aggressive in their integration test automation
  • Built a small feature in Marten to help optimize some upcoming work for a client using Marten projections. I won’t say that building new features is an official part of support contracts, but we will prioritize features for support clients.
  • Interacted with a client team to best utilize the Critter Stack “Aggregate Handler Workflow” approach as a way of streamlining their application code and maximizing their ability to unit test business logic. If you’ll buy into Wolverine idioms, you can build systems with much less code than the typical Clean/Onion Architecture approaches.
  • Assisted several clients with using the test automation support features baked into Wolverine, with quite a bit of pure consulting on how to be more successful with test automation efforts in .NET.
  • Conducted more Zoom calls to talk through Event Sourcing modeling questions for multiple clients. I’m a big believer in Event Sourcing, but it is a pretty new technique and architectural style, and it’s not necessarily a natural transition for folks who are very used to thinking and building in terms of relational databases. JasperFx can help!
  • We fielded plenty of questions about the best usage of projections in Marten
  • Helped a client try to optimize their experience with Kubernetes helpfully stopping and starting pods while the pods were quite busy with Marten and Wolverine work. That was fun. Not.
  • Talked through Wolverine usages and made some additional changes to Wolverine for a client who is using Wolverine as an in memory message bus for a modular monolith architecture.

And answering plenty of small questions about features or approaches that probably just amount to giving our clients peace of mind about what they were doing.

As I was compiling this, I noticed that there haven’t been any support questions about multi-tenancy or concurrency lately. I’m going to take that as a sign that we’re very mature in those two areas!

I hope the point comes through that there’s quite a lot of value we can bring to your organization through an ongoing support contract and engagement with JasperFx Software. Certainly feel free to reach out to us at sales@jasperfx.net for any questions about how we could potentially help your shop!

While I do enjoy interacting with our clients and I most certainly love getting to make a living off of my own technical babies, anytime I do some outright shilling and promotion like this post, I’m a bit reminded of this (and I’m definitely the “Ray”):

Update on Wolverine 5 and CritterWatch

We’re targeting October 1st for the release of Wolverine 5.0. At this point, I think I’d like to say that we’re not going to be adding any new features to Wolverine 4.* except for JasperFx Software client needs. And also, not that I have any pride about this, I don’t think we’re going to address bugs in 4.* if those bugs do not impact many people.

To catch you up if you want: in addition to the new features we originally envisioned, add in:

  • A messaging transport for Redis from the community — which might also lead to a Redis backed saga model some day too, but not for 5.0
  • Ongoing work to improve Wolverine’s capability to allow folks to mix and match persistent message stores in “modular monolith” applications
  • Working over some of the baked in Dead Letter Queue administration, which is being done in conjunction with ongoing “CritterWatch” work

I think we’re really close to the point where it’s time to play major release triage and push back any enhancements that wouldn’t require any breaking changes to the public API, so anything not yet done or at least started probably slides to a future 5.* minor release. The one exception might be trying to tackle the “cold start optimization.” The wild card in this is that I’m desperately trying to work through as much of the CritterWatch backend plumbing as possible right now, as that work is 100% causing some changes and improvements to Wolverine 5.0.

What about CritterWatch?

If you understand why the image above appears in this section, I would hope you’d feel some sympathy for me here:-)

I’ve been able to devote some serious time to CritterWatch the past couple weeks, and it’s starting to be “real” after all this time. Jeffry Gonzalez and I will be marrying up the backend and a real frontend in the next couple weeks and who knows, we might be able to demo something to early adopters in about a month or so. After Wolverine 5.0 is out, CritterWatch will be my and JasperFx’s primary technical focus the rest of the year.

Just to rehash, the MVP for CritterWatch is looking like:

  1. The basic shell and visualization of what your monitored Critter Stack applications are, including messaging
  2. Every possible thing you need to manage Dead Letter Queue messages in Wolverine — but I’d warn you that it’s focused on Wolverine’s database backed DLQ
  3. Monitoring and a control panel over Marten event projections and subscriptions and everything you need to keep those running smoothly in production

Event Enrichment in Marten Projections

So here’s a common scenario when building a system using Event Sourcing with Marten:

  1. Some of the data in your system is just reference data stored as plain old Marten documents. Something like user data (like I’ll use in just a bit), company data, or some other kind of static reference data that doesn’t justify the usage of Event Sourcing. Or maybe you have some data that is event sourced, but it’s very static data otherwise and you can essentially treat the projected documents as just documents.
  2. You have workflows modeled with event sourcing and you want some of the projections from those events to also include information from the reference data documents

As an example, let’s say that your application has some reference information about system users saved in this document type (from the Marten testing suite):

public class User
{
    public User()
    {
        Id = Guid.NewGuid();
    }

    public List<Friend> Friends { get; set; }

    public string[] Roles { get; set; }
    public Guid Id { get; set; }
    public string UserName { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string FullName => $"{FirstName} {LastName}";
}

And you also have events for some kind of UserTask aggregate that manages the workflow of some kind of work tracking. You might have some events like this:

public record TaskLogged(string Name);
public record TaskStarted;
public record TaskFinished;

public class UserAssigned
{
    public Guid UserId { get; set; }

    // You don't *have* to do this with a mutable
    // property, but it is *an* easy way to pull this off
    public User? User { get; set; }
}

In a “query model” view of the event data, you’d love to be able to project the user’s full, human readable name right into the projected document:

public class UserTask
{
    public Guid Id { get; set; }
    public bool HasStarted { get; set; }
    public bool HasCompleted { get; set; }
    public Guid? UserId { get; set; }

    // This would be sourced from the User
    // documents
    public string UserFullName { get; set; }
}

In the projection for UserTask, you can always reach out to Marten in an ad hoc way to grab the right User documents, like this possible code in the projection definition for UserTask:

    // We're just gonna go look up the user we need right here and now!
    public async Task Apply(UserAssigned assigned, IQuerySession session, UserTask snapshot)
    {
        var user = await session.LoadAsync<User>(assigned.UserId);
        snapshot.UserFullName = user.FullName;
    }

The ability to just pull in IQuerySession and go look up whatever data you need as you need it is certainly powerful, but hold on a bit, because what if:

  1. You’re running the projection for UserTask asynchronously using Marten’s async daemon, where it updates potentially hundreds of UserTask documents at the same time?
  2. You expect the UserAssigned events to be quite common, so there’s a lot of potential User lookups to process the projection
  3. You are quite aware that the code above could easily turn into an N+1 Query Problem that won’t be helpful at all for your system’s performance. And if you weren’t aware of that before, please be so now!

Instead of the N+1 Query Problem you could easily get from doing the User lookup one single event at a time, what if instead we were able to batch up the calls to lookup all the necessary User information for a batch of UserTask data being updated by the async daemon?

Enter Marten 8.11 (hopefully out by the time you read this!) and its newly introduced hook for “event enrichment,” which lets you do exactly that as a way of wringing more performance and scalability out of your Marten usage! Let’s build a single stream projection for the UserTask aggregate type shown up above that batches the User lookup:

public class UserTaskProjection: SingleStreamProjection<UserTask, Guid>
{
    // This is where you have a hook to "enrich" event data *after* slicing,
    // but before processing
    public override async Task EnrichEventsAsync(
        SliceGroup<UserTask, Guid> group, 
        IQuerySession querySession, 
        CancellationToken cancellation)
    {
        // First, let's find all the events that need a little bit of data lookup
        var assigned = group
            .Slices
            .SelectMany(x => x.Events().OfType<IEvent<UserAssigned>>())
            .ToArray();

        // Don't bother doing anything else if there are no matching events
        if (!assigned.Any()) return;

        var userIds = assigned.Select(x => x.Data.UserId)
            // Hey, watch this. Marten is going to helpfully sort this out for you anyway
            // but we're still going to make it a touch easier on PostgreSQL by
            // weeding out multiple ids
            .Distinct().ToArray();
        var users = await querySession.LoadManyAsync<User>(cancellation, userIds);

        // Just a convenience
        var lookups = users.ToDictionary(x => x.Id);
        foreach (var e in assigned)
        {
            if (lookups.TryGetValue(e.Data.UserId, out var user))
            {
                e.Data.User = user;
            }
        }
    }

    // This is the Marten 8 way of just writing explicit code in your projection
    public override UserTask Evolve(UserTask snapshot, Guid id, IEvent e)
    {
        snapshot ??= new UserTask { Id = id };
        switch (e.Data)
        {
            case UserAssigned assigned:
                snapshot.UserId = assigned.User?.Id;
                snapshot.UserFullName = assigned.User?.FullName;
                break;

            case TaskStarted:
                snapshot.HasStarted = true;
                break;

            case TaskFinished:
                snapshot.HasCompleted = true;
                break;
        }

        return snapshot;
    }
}

Focus please on the EnrichEventsAsync() method above. That’s a new hook in Marten 8.11 that lets you define a step in asynchronous projection running to potentially do batched data lookups immediately after Marten has “sliced” and grouped a batch of events by each aggregate identity that is about to be updated, but before the actual updates are made to any of the UserTask snapshot documents.

In the code above, we’re looking for all the unique user ids that are referenced by any UserAssigned events in this batch of events, and making one single call to Marten to fetch the matching User documents. Lastly, we’re looping over the UserAssigned events and actually “enriching” them by setting the User property with the data we just looked up.

A couple other things:

  • It might not be terribly obvious, but you could still use immutable types for your event data and “just” quietly swap out single event objects within the EventSlice groupings as well.
  • You can also do “event enrichment” in any kind of custom grouping within MultiStreamProjection types without this new hook method, but I felt like we needed this to have an easy recipe at least for SingleStreamProjection classes. You might find this hook easier to use than doing database lookups in custom grouping anyway

Summary

That EnrichEventsAsync() code is admittedly busy and really isn’t the most obvious thing in the world to write, but when you need better throughput, the ability to batch up queries to the database can be a hugely effective way to improve your system’s performance, and we think this will be a very worthy addition to the Marten projection model. I cannot possibly stress enough how insidious N+1 Query issues can be in enterprise systems.

This work was more or less spawned by conversations with a JasperFx Software client and some of their upcoming development needs. Just saying, if you want any help being more successful with any part of the Critter Stack, drop us a line at sales@jasperfx.net.

Working and Testing Against Scheduled Messages with Wolverine

Wolverine has long had the ability to schedule messages to be executed at a later time or to be delivered at a later time. The Wolverine 4.12 release last night added some probably overdue test automation helpers to better deal with scheduled messaging within integration testing against Wolverine applications — and that makes now a perfectly good time to talk about this capability within Wolverine!

First, let’s say that we’re just using Wolverine locally within the current system with a setup like this:

var builder = Host.CreateApplicationBuilder();
builder.Services.AddWolverine(opts =>
{
    // The only thing that matters here is that you have *some* kind of 
    // envelope persistence for Wolverine configured for your application
    var connectionString = builder.Configuration.GetConnectionString("postgres");
    opts.PersistMessagesWithPostgresql(connectionString);
});

The only point being that we have some kind of message persistence set up in our Wolverine application because the message or execution scheduling depends on persisted envelope storage.

Wolverine actually does support in memory scheduling without any persistence, but that’s really only useful for scheduled error handling or fire and forget type semantics because you’d lose everything if the process is stopped.

So now let’s move on to simply telling Wolverine to execute a message locally at a later time with the IMessageBus service:

public static async Task use_message_bus(IMessageBus bus)
{
    // Send a message to be sent or executed at a specific time
    await bus.SendAsync(new DebitAccount(1111, 100),
        new(){ ScheduledTime = DateTimeOffset.UtcNow.AddDays(1) });
    
    // Same mechanics w/ some syntactical sugar
    await bus.ScheduleAsync(new DebitAccount(1111, 100), DateTimeOffset.UtcNow.AddDays(1));

    // Or do the same, but this time express the time as a delay
    await bus.SendAsync(new DebitAccount(1111, 225), new() { ScheduleDelay = 1.Days() });
    
    // And the same with the syntactic sugar
    await bus.ScheduleAsync(new DebitAccount(1111, 225), 1.Days());
}

In the system above, all messages are being handled locally. To actually process the scheduled messages, Wolverine is, as you’ve probably guessed, polling the message storage (PostgreSQL in the case above) and looking for any messages that are ready to be played. Here’s a few notes on the mechanics:

  • Every node within a cluster is trying to pull in scheduled messages, but there’s some randomness in the timing to keep every node from stomping on each other
  • Any one node will only pull in a limited “page” of scheduled jobs at a time so that if you happen to be going bonkers scheduling thousands of messages at one time, Wolverine can share the load across nodes and keep any one node from blowing up
  • The scheduled messages are in Wolverine’s transactional inbox storage with a Scheduled status. When Wolverine decides to “play” the messages, they move to an Incoming status before finally getting marked as Handled when they are successful
  • When scheduled messages for local execution are “played” in a Wolverine node, they are put into the local queue for that message, so all the normal rules for ordering or parallelization for that queue still apply.

Now, let’s move on to scheduling message delivery to external brokers. Say you have external routing rules like this:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.UseRabbitMq()
            // Opt into conventional Rabbit MQ routing
            .UseConventionalRouting();
    }).StartAsync();

And go back to the same syntax for sending messages, but this time the message will get routed to a Rabbit MQ exchange:

    await bus.ScheduleAsync(new DebitAccount(1111, 100), DateTimeOffset.UtcNow.AddDays(1));

This time, Wolverine is still using its transactional inbox, but with a twist. When Wolverine knows that it is scheduling message delivery to an outside messaging mechanism, it actually schedules a local ScheduledEnvelope message that when executed, sends the original message to the outbound delivery point. In this way, Wolverine is able to support scheduled message delivery to every single messaging transport that Wolverine supports with a common mechanism.

With idiomatic Wolverine usage, you do want to try to keep most of your handler methods as “pure functions” for easier testing and frankly less code noise from async/await mechanics. To that end, there are a couple of helpers to schedule messages in Wolverine using its cascading messages syntax:

public IEnumerable<object> Consume(MyMessage message)
{
    // Go West in an hour
    yield return new GoWest().DelayedFor(1.Hours());

    // Go East at midnight local time
    yield return new GoEast().ScheduledAt(DateTime.Today.AddDays(1));
}

The extension methods above would give you the raw message wrapped in a Wolverine DeliveryMessage<T> object where T is the wrapped message type. You can still use that type to write assertions in your unit tests.
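
As a quick illustration, a unit test against the handler above might look like this sketch. I’m assuming a MyMessageHandler class wrapping the Consume() method shown above, and you should double check the exact property names on DeliveryMessage&lt;T&gt; in your version of Wolverine:

[Fact]
public void should_schedule_the_go_west_message()
{
    var messages = new MyMessageHandler()
        .Consume(new MyMessage())
        .ToArray();

    // Assuming DeliveryMessage<T> exposes the wrapped message and its
    // DeliveryOptions more or less like this
    var goWest = messages.OfType<DeliveryMessage<GoWest>>().Single();
    goWest.Options.ScheduleDelay.ShouldBe(1.Hours());
}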

There’s also another helper called “timeout messages” that helps you create scheduled messages by subclassing a Wolverine base class. This is largely associated with sagas, just because timing out saga workflows is such a common need.
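
As a sketch of that, and assuming the TimeoutMessage base record takes its delay through the constructor the way Wolverine’s saga samples show, a hypothetical order saga could time itself out like this:

public record OrderPlaced(Guid OrderId);

// The base record carries its own schedule delay
public record OrderTimeout(Guid OrderId) : TimeoutMessage(10.Minutes());

public class OrderSaga : Saga
{
    public Guid Id { get; set; }

    // Starting the saga also schedules the timeout as a cascading message
    public static (OrderSaga, OrderTimeout) Start(OrderPlaced placed)
        => (new OrderSaga { Id = placed.OrderId }, new OrderTimeout(placed.OrderId));

    public void Handle(OrderTimeout timeout)
    {
        // The order never finished in time, so give up and complete the saga
        MarkCompleted();
    }
}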

Error Handling

The scheduled message support is also useful in error handling. Consider this code:

using var host = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.Policies.OnException<TimeoutException>().ScheduleRetry(5.Seconds());
        opts.Policies.OnException<SecurityException>().MoveToErrorQueue();

        // You can also apply an additional filter on the
        // exception type for finer grained policies
        opts.Policies
            .OnException<SocketException>(ex => ex.Message.Contains("not responding"))
            .ScheduleRetry(5.Seconds());
    }).StartAsync();

In the case above, Wolverine uses the message scheduling to take a message that just failed, move it out of the current receiving endpoint so other messages can proceed, then retry it no sooner than 5 seconds later (it won’t be real time perfect on the timing). This is an important difference from the RetryWithCooldown() mechanism, which is effectively just doing an await Task.Delay(timespan) inline to purposely slow down the application.

As an example of how this might be useful, I’ve had to work with 3rd party systems where users can create a pessimistic lock on a bank account, so any commands against that account would always fail because of that lock. If you can tell from the exception message that the command failure is because of a pessimistic lock, you might tell Wolverine to retry that message an hour later when hopefully the lock has been released, clearing the current receiving endpoint and/or queue in the meantime for other work that can proceed.
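
In Wolverine terms, that policy might look like this little sketch, where the exception type and the message text are stand-ins for whatever your 3rd party client actually throws:

    opts.Policies
        .OnException<InvalidOperationException>(ex => ex.Message.Contains("account is locked"))
        .ScheduleRetry(1.Hours());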

Testing with Scheduled Messaging

We’re having some trouble with the documentation publishing for some reason that we haven’t figured out yet, but there will be docs soon on this new feature.

Finally, on to some new functionality! Wolverine 4.12 just added some improvements to Wolverine’s tracked session testing feature specifically to help you with scheduled messages.

First, for some background, let’s say you have these simple handlers:

public static DeliveryMessage<ScheduledMessage> Handle(TriggerScheduledMessage message)
{
    // This causes a message to be scheduled for delivery in 5 minutes from now
    return new ScheduledMessage(message.Text).DelayedFor(5.Minutes());
}

public static void Handle(ScheduledMessage message) => Debug.WriteLine("Got scheduled message");

And now this test using the tracked session which shows the new first class support for scheduled messaging:

[Fact]
public async Task deal_with_locally_scheduled_execution()
{
    // In this case we're just executing everything in memory
    using var host = await Host.CreateDefaultBuilder()
        .UseWolverine(opts =>
        {
            opts.PersistMessagesWithPostgresql(Servers.PostgresConnectionString, "wolverine");
            opts.Policies.UseDurableInboxOnAllListeners();
        }).StartAsync();

    // Should finish cleanly, even though there's going to be a message that is scheduled
    // and doesn't complete
    var tracked = await host.SendMessageAndWaitAsync(new TriggerScheduledMessage("Chiefs"));
    
    // Here's how you can query against the messages that were detected to be scheduled
    tracked.Scheduled.SingleMessage<ScheduledMessage>()
        .Text.ShouldBe("Chiefs");

    // This API will try to play any scheduled messages immediately
    var replayed = await tracked.PlayScheduledMessagesAsync(10.Seconds());
    replayed.Executed.SingleMessage<ScheduledMessage>().Text.ShouldBe("Chiefs");
}

And a similar test, but this time where the scheduled messages are being routed externally:

var port1 = PortFinder.GetAvailablePort();
var port2 = PortFinder.GetAvailablePort();

using var sender = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.PublishMessage<ScheduledMessage>().ToPort(port2);
        opts.ListenAtPort(port1);
    }).StartAsync();

using var receiver = await Host.CreateDefaultBuilder()
    .UseWolverine(opts =>
    {
        opts.ListenAtPort(port2);
    }).StartAsync();

// Should finish cleanly
var tracked = await sender
    .TrackActivity()
    .IncludeExternalTransports()
    .AlsoTrack(receiver)
    .InvokeMessageAndWaitAsync(new TriggerScheduledMessage("Broncos"));

tracked.Scheduled.SingleMessage<ScheduledMessage>()
    .Text.ShouldBe("Broncos");

var replayed = await tracked.PlayScheduledMessagesAsync(10.Seconds());
replayed.Executed.SingleMessage<ScheduledMessage>().Text.ShouldBe("Broncos");

Here’s what’s new in the code above:

  1. ITrackedSession.Scheduled is a special collection of all the activity during the tracked session that led to messages being scheduled. You can use this to interrogate what scheduled messages resulted from the original activity.
  2. ITrackedSession.PlayScheduledMessagesAsync() will “play” all scheduled messages right now and return a new ITrackedSession for those messages. It will immediately execute any messages that were scheduled for local execution and try to immediately send any messages that were scheduled for later delivery to external transports.

The new support in the existing tracked session feature further extends Wolverine’s already extensive test automation story. This new work was done at the behest of a JasperFx Software client who is quite aggressive in their test automation. Certainly reach out to us at sales@jasperfx.net for any help you might want with your own efforts!

Sneak Peek at the SignalR Integration in Wolverine 5.0

Earlier this week I did a live stream on the upcoming Wolverine 5.0 release where I lightly touched on the concept for our planned SignalR integration with Wolverine. While there wasn’t much to show then, a big pull request just landed, and I think the APIs and the approach have gelled enough to be worth a sneak peek.

First though, the new SignalR transport in Wolverine is being built now to support our planned “CritterWatch” tool described below:

As it’s planned out right now, the “CritterWatch” server application will communicate via SignalR to constantly push updated information about system performance to any open browser dashboards. On the other side of things, CritterWatch users will be able to submit quite a number of commands or queries from the browser to CritterWatch, which will then relay those commands and queries to the various “Critter Stack” applications being monitored through asynchronous messaging. And of course, we expect responses and status updates to be constantly flowing from the monitored services to CritterWatch, which will then relay that information to the browsers, again via SignalR.

Long story short, there’s going to be a lot of asynchronous messaging back and forth between the three logical applications above, and this is where a new SignalR transport for Wolverine comes into play. Having the SignalR transport gives us a standardized way to send a number of different logical messages from the browser to the server and take advantage of everything that the normal Wolverine execution pipeline gives us, including relatively clean handler code compared to other messaging or “mediator” tools, baked in observability and traceability, and Wolverine’s error resiliency. Going back the other way, the SignalR transport gives us a standardized way to publish information right back to the client from our server.

Enough of that, let’s jump into some code. From the integration testing code, let’s say we’ve got a small web app configured like this:

var builder = WebApplication.CreateBuilder();

builder.WebHost.ConfigureKestrel(opts =>
{
    opts.ListenLocalhost(Port);
});

// Note to self: take care of this in the call
// to UseSignalR() below
builder.Services.AddSignalR();
builder.Host.UseWolverine(opts =>
{
    opts.ServiceName = "Server";
    
    // Hooking up the SignalR messaging transport
    // in Wolverine
    opts.UseSignalR();

    // These are just some messages I was using
    // to do end to end testing
    opts.PublishMessage<FromFirst>().ToSignalR();
    opts.PublishMessage<FromSecond>().ToSignalR();
    opts.PublishMessage<Information>().ToSignalR();
});

var app = builder.Build();

// Syntactic sugar, really just doing:
// app.MapHub<WolverineHub>("/messages");
app.MapWolverineSignalRHub();

await app.StartAsync();

// Remember this, because I'm going to use it in test code
// later
theWebApp = app;

With that configuration, when you call IMessageBus.PublishAsync(new Information("here's something you should know")); in your system, Wolverine will route that message through SignalR, where it will be received by a client with the default “ReceiveMessage” operation. The JSON delivered to the client will be wrapped per the CloudEvents specification like this:

{ "type": "information", "data": { "message": "here's something you should know" } }

Likewise, Wolverine will expect messages posted to the server from the browser to be embedded in that lightweight CloudEvents compliant wrapper.
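For a rough idea of what this looks like from a plain SignalR client, here’s a minimal sketch using Microsoft.AspNetCore.SignalR.Client against the “/messages” route mapped above. Treating the payload as a raw JSON string is my assumption here just for illustration:

using Microsoft.AspNetCore.SignalR.Client;

var connection = new HubConnectionBuilder()
    .WithUrl($"http://localhost:{Port}/messages")
    .Build();

// Wolverine pushes messages to clients through the default
// "ReceiveMessage" operation. Receiving the payload as a raw
// JSON string is an assumption for this sketch
connection.On<string>("ReceiveMessage", json =>
{
    // json is the CloudEvents-wrapped envelope shown above, e.g.
    // { "type": "information", "data": { ... } }
    Console.WriteLine(json);
});

await connection.StartAsync();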

Not coincidentally, we are adding CloudEvents support for extended interoperability in Wolverine 5.0 as well.

For testing, the new WolverineFx.SignalR NuGet will also include a separate messaging transport that uses the SignalR client, and you can see that usage in some of the testing code:

// This starts up a new host to act as a client to the SignalR
// server for testing
public async Task<IHost> StartClientHost(string serviceName = "Client")
{
    var host = await Host.CreateDefaultBuilder()
        .UseWolverine(opts =>
        {
            opts.ServiceName = serviceName;
            
            // Just pointing at the port where Kestrel is
            // hosting our server app that is running
            // SignalR
            opts.UseClientToSignalR(Port);
            
            opts.PublishMessage<ToFirst>().ToSignalRWithClient(Port);
            
            opts.PublishMessage<RequiresResponse>().ToSignalRWithClient(Port);
            
            opts.Publish(x =>
            {
                x.MessagesImplementing<WebSocketMessage>();
                x.ToSignalRWithClient(Port);
            });
        }).StartAsync();
    
    _clientHosts.Add(host);

    return host;
}

And now to show a little Wolverine-esque spin, let’s say that you have a handler being invoked by a browser sending a message through SignalR to a Wolverine server application, and as part of that handler, you need to send a response message right back to the originating SignalR connection so it reaches the right browser instance.

Conveniently enough, you have this helper to do exactly that in a pretty declarative way:

public static ResponseToCallingWebSocket<WebSocketResponse> Handle(RequiresResponse msg)
    => new WebSocketResponse(msg.Name).RespondToCallingWebSocket();

And just for fun, here’s the test that proves the above code works:

[Fact]
public async Task send_to_the_originating_connection()
{
    var green = await StartClientHost("green");
    var red = await StartClientHost("red");
    var blue = await StartClientHost("blue");

    var tracked = await red.TrackActivity()
        .IncludeExternalTransports()
        .AlsoTrack(theWebApp)
        .SendMessageAndWaitAsync(new RequiresResponse("Leo Chenal"));

    var record = tracked.Executed.SingleRecord<WebSocketResponse>();
    
    // Verify that the response went to the original calling client
    record.ServiceName.ShouldBe("red");
    record.Message.ShouldBeOfType<WebSocketResponse>().Name.ShouldBe("Leo Chenal");
}

And for one last trick, let’s say you want to work with grouped connections in SignalR so you can send messages to a subset of connected clients. In this case, I went down the Wolverine “Side Effect” route, as you can see in these example handlers:

// Declaring that you need the connection that originated
// this message to be added to the named SignalR client group
public static AddConnectionToGroup Handle(EnrollMe msg) 
    => new(msg.GroupName);

// Declaring that you need the connection that originated this
// message to be removed from the named SignalR client group
public static RemoveConnectionToGroup Handle(KickMeOut msg) 
    => new(msg.GroupName);

// The message wrapper here sends the raw message to
// the named SignalR client group
public static object Handle(BroadCastToGroup msg) 
    => new Information(msg.Message)
        .ToWebSocketGroup(msg.GroupName);
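And a quick usage sketch of those handlers, with a couple of assumptions on my part: the record shapes are positional like EnrollMe(string GroupName) and BroadCastToGroup(string GroupName, string Message), the messages implement the WebSocketMessage marker so the publishing rules above route them through the SignalR client, and “critter-fans” is just a made-up group name:

var client = await StartClientHost("green");

// Enroll this client's originating connection in a named SignalR group
await client.TrackActivity()
    .IncludeExternalTransports()
    .AlsoTrack(theWebApp)
    .SendMessageAndWaitAsync(new EnrollMe("critter-fans"));

// Fan the wrapped Information message out to every connection
// currently enrolled in the "critter-fans" group
await client.TrackActivity()
    .IncludeExternalTransports()
    .AlsoTrack(theWebApp)
    .SendMessageAndWaitAsync(new BroadCastToGroup("critter-fans", "Hello, group!"));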

I should say that all of the code samples above are taken from our test coverage. At this point, the next step is to pull this into our CritterWatch codebase to prove out the functionality. First up is building out the server side of what will be CritterWatch’s “Dead Letter Queue Console” for viewing, querying, and managing the DLQ records for any of the Wolverine applications being monitored by CritterWatch.

For more context, here’s the live stream on Wolverine 5:

Live Stream Previewing Wolverine 5.0 on Thursday

I’ll be doing a live stream tomorrow (Thursday) August 4th to preview some of the new improvements coming soon with Wolverine 5.0. The highlights are:

  • The new “Partitioned Sequential Messaging” feature and why you’re going to love how it helps make Wolverine-based systems much better able to sidestep concurrency problems
  • Improvements to the code generation and IoC usage within Wolverine.HTTP
  • The new SignalR transport and integration, and how we think this is going to make it easier to build asynchronous workflows between web clients and your backend services
  • More powerful interoperability w/ non-Wolverine services
  • How the Marten integration with Wolverine is going to be more performant by reducing network chattiness
  • Some thoughts about improving the cold start times for Wolverine and Marten

And of course anything else folks want to discuss on the live stream as well.

Check it out here, and the recording will be up later tomorrow anyway: